Wednesday, April 29, 2009
Seven Surefire Ways to Botch a Job Interview
In prior lives, as a hiring manager, I've interviewed scores of job applicants. And in the course of interviews, I've seen (how shall I say?) certain recurrent antipatterns of behavior that are usually a pretty good tipoff that the person in question "isn't right for the job."
Here are seven things I don't want to see during an interview. Committing only one or two of these transgressions might not cost you the job, but if a pattern starts to emerge, believe me, I'll notice it; and you won't be asked back.
1. Be late.
This indicates lack of commitment to deadlines. Arriving 10 to 15 minutes ahead of time is at least a small clue that you know how to underpromise and overdeliver. If you got stuck in traffic, that's fine; it won't be held against you (especially if you call ahead to let someone know you'll be late). Otherwise? Don't waste your hiring manager's time. Don't be late.
2. Be unprepared.
Did you leave samples of prior work at home? (Yes, I can look them up online, but it's a nice courtesy to be offered hard copies of previous work, whether printed or on CD, DVD, flash drive, etc.) Did you forget to bring an extra copy of your resume? Again, these sorts of small details aren't going to be a showstopper in and of themselves, but taken together with other items in this list, they can reveal a pattern of inattention to "little things." Sustained inattention to "little things" kills a business. Don't act like you don't know that.
3. Avoid direct eye contact.
If you can't look me in the eye when answering questions, I'm going to get the impression, subconsciously at least, that you're hiding something, or that you're ashamed of something, anxious to leave, easily distracted by your surroundings, etc. (or that you just don't like me). Stay focused. I'm your center of attention. Look me in the eye.
4. Say bad things about a previous employer, or be unable to explain why you left a previous job.
If I'm interviewing you, rest assured, I am going to ask you about your previous work experience. That means I'll definitely ask why you left your previous jobs (yes, all of them). Be careful how you answer. Even if you left a job because a previous employer treated people poorly, resist the urge to recount every grievance in detail; keep your explanation brief and factual. If you say something negative about a previous job, a previous manager, or a company, I'll assume that you may someday say something negative about me or my company. I'll question your loyalty before you even begin working. That's not good.
5. Fail to ask good questions about the job.
If you're seriously interested in the job, you'll have questions. By all means, ask! I want to know what's important to you (work conditions? people? hours? pay? the quality and nature of the assignments?), and I'll get some indication of that in the kind of questions you choose to ask. Plus, asking questions shows that you're inquisitive, thoughtful, and not merely interested in superficial matters -- or just being employed again.
6. Ask a lot of questions about flextime, days off, bonus plan, stock options, and job perks (and show concern about how much overtime you might have to work).
I need somebody who's a hard worker and committed to helping a team meet difficult deadlines. Don't make me think you're focused on not working hard. It's okay to ask questions about perks and benefits (it's expected, actually), but save them until the end and for gosh sakes, don't make it look like perks, benefits, and compensation are near the top of your list of priorities. I'll wonder about your work ethic.
7. Come to the interview not having gone to the company's web site and not knowing a thing about the company.
Before coming to an interview, do a little homework. Visit the company web site (be prepared to critique it later, if asked), learn the company's history, and try to understand the company's positioning in the market and current strategies. I want to know that you're self-motivated, able to do a little research on your own, and keenly interested in this particular job, at this particular company. If you come to the interview not knowing what the company does, it shows me you don't care about the big picture. Maybe you don't care about anything. Maybe you're just plain lazy. Next.
There are plenty more ways to show an interviewer that you aren't the right person for the job, but these are a few of my favorites. And yes, I've interviewed candidates who flunked on all counts. It's amazing how many job candidates come to an interview well dressed but unprepared, unaware of what the company does, unable to ask questions that aren't related to perks and benefits, and unable to say good things about prior employers.
I want to know that you're a hard worker and a highly focused, self-motivated individual who is detail-oriented, yet also tries to understand the big picture. Is that so much to ask?
Tuesday, April 28, 2009
How the RIA wars will affect the future of civilization
There's a war in progress, and the outcome of it will affect the future of computing. It's important to see it for what it is, so you can prepare for the consequences. The consequences (unless there's a cease-fire in the meantime) will be enormous.
Getting a handle on it requires a certain appreciation for the importance of operating systems. Let's back up the truck for a minute and talk about operating systems, and Windows in particular (two different things, really).
Non-technical computer users can be forgiven, I think, for misusing the term "Operating System" in the context of Windows. Underneath Windows is an operating system, to be sure, but the collection of applications that, in the aggregate, gives Windows its Windowsness has little to do with operating systems. An operating system is really about the core-essential software that discovers and registers "devices," controls the bootstrapping of a machine's services at boot time, and provides various hosting services to applications.
From a human user's point of view, that last bit (providing hosting services to apps) is the most important aspect of an operating system. It's what makes it possible for us to run programs and get work done.
But consider what has happened over the past decade or so. The Web has become a central metaphor in computing, not just at the level of desktop PCs but also a variety of handheld (and other) devices. Initially, the Web was a world of static content: You visited a URL with a browser, the browser rendered the page, and you hopped from URL to URL via hypertext links. But now the Web is full of highly interactive "web apps," and the browser is merely an interactive hosting environment for client-server apps in which part of the logic executes locally and part executes on a server somewhere. The browser is now the logical analog of a desktop OS, in many ways -- mooting the importance of something like, say, Windows.
This has worked to Microsoft's disadvantage, obviously. When people rely more and more heavily on a browser to get work done, it tends to marginalize the importance of desktop software; and since the Web is, at its very core, standards-driven (TCP/IP, HTTP, HTML, URLs, etc.; all universally understood standards), the concept of a non-standardized, proprietary OS that doesn't understand how to interoperate with non-native software (or with other OSes) runs counter to a user's needs and actually becomes an anti-feature. When the most important thing a computer OS can do is provide connectivity to an outside world that's based on standards, the proprietary OS is a liability. In fact, the Web makes all OSes equally irrelevant, in some sense, which is one reason Apple is doing well, because the age-old Cupertino stigma of having a non-Windows-interoperable OS is no longer, in fact, a stigma. The field is (almost) level now.
To the degree that things like Windows have become sidelined in importance compared to the virtual OS of the Web+browser, rich Internet-aware (but also desktop-aware) runtime frameworks like Adobe AIR become hugely important. They represent the "next platform." And that necessarily also means the next potentially proprietary platform, because Adobe (to stay with the AIR example) is a closed-source company that still makes most of its money from proprietary, non-open software. Even if you want to make the (hollow) argument that the Flash standard is "open" and uses ActionScript ("open") and XML ("open"), etc., you still have to concede that Adobe has an iron grip over what Flash and Flex consist of, and where they're headed. The only question (let's be frank about this) is whether Adobe is a benevolent dictator or a venal, conniving one.
The battle for the Rich Internet Application platform high ground is really a proprietary platform play: the next big attempt to lock computer users into privately controlled technologies -- technologies like Flash and Flex that could be foundational to future computing. The winner (be it Adobe, Microsoft, or Oracle) will find itself with a great concentration of power in its hands. Which, if we've learned anything at all from history, is a very bad thing indeed.
So before you get too worked up about AIR, Silverlight, or JavaFX, before you drink anybody's Kool-aid and start passing the cup around, remember what you're dealing with. These technologies aren't about making the world more standards-driven or putting more control in the hands of the user. They're about putting control of the Web experience in the hands of a multibillion-dollar closed-source software giant. Choose your poison carefully.
Of course, if one of the big RIA contenders decides to go 100% open-source, and put the future of the platform (whichever one it turns out to be) completely under community governance, then we have nothing to fear; we have a democracy. But don't hold your breath.
Monday, April 27, 2009
Two techniques for faster JavaScript
I like things that go fast, and that includes code that runs fast. With JavaScript (and Java, too), that can be a challenge. So much the better, though. I like challenges too.
When someone asks me what's the single best way to speed up "a slow script," naturally I want to know what the script is spending most of its time doing. In browser scripting, it's typical that a "slow" script operation either involves tedious string parsing of some kind, or DOM operations. That's if you don't count programmer-insanity sorts of things, like creating a regular expression object over and over again in a loop.
The two most important pieces of advice I can give on speeding up browser scripts, then, are:
1. Never hand-parse a string.
2. Don't do DOM operations in loops (and in general, don't do DOM operations!).
No. 1 means don't do things like crawl a big long string using indexOf( ) to tokenize-as-you-go. Instead, use replace( ) or a split( )/join( ) technique, or some other technique that will basically have the effect of moving the loop into a C++ native routine inside the interpreter. (The general approach is discussed in a previous post.) An example would be hit-highlighting in a long run of text. Don't step through the text looking for the term(s) in question; use replace( ).
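For instance, here's a minimal sketch of hit-highlighting done both ways (the function names are mine, and a real version would need to escape regex metacharacters in the search term):

// Slow: hand-crawling the string with indexOf( ) in a loop
function highlightSlow(text, term) {
   var result = "", pos = 0, hit;
   while ((hit = text.indexOf(term, pos)) != -1) {
      result += text.substring(pos, hit) + "<b>" + term + "</b>";
      pos = hit + term.length;
   }
   return result + text.substring(pos);
}

// Fast: one replace( ) call; the loop runs in native code
function highlightFast(text, term) {
   return text.replace(new RegExp(term, "g"), "<b>" + term + "</b>");
}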
No. 2 means to avoid looping over the return values from getElementsByTagName( ) -- in fact, don't call it unless you have to -- and get away from doing a lot of createElement( ), appendChild( ) types of things, especially in loops, and especially in functions that get called a lot (such as event handlers for mouse movements). How? Use innerHTML wherever possible. In other words, create your "nodes" as Strings (markup), then slam the final string into the DOM at the last minute by setting the parent node's innerHTML to that value. This moves all the DOM reconfiguring into the browser's native DOM routines, where it happens at the speed of compiled C++. Don't sit there and rebuild the DOM yourself, brick by brick, in JavaScript, unless you have to, which you seldom do.
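As a sketch (the list items and the parent UL element are hypothetical, and the items are assumed to contain no markup-significant characters):

// Slow: one DOM reconfiguration per item
function buildListSlow(items, parentUL) {
   for (var i = 0; i < items.length; i++) {
      var li = document.createElement("li");
      li.appendChild(document.createTextNode(items[i]));
      parentUL.appendChild(li);
   }
}

// Fast: build one big markup string, then let the native parser do the work
function buildListFast(items, parentUL) {
   parentUL.innerHTML = items.length ?
      "<li>" + items.join("</li><li>") + "</li>" : "";
}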
There are other techniques for avoiding big slowdowns, but they're more situational. And I'm still learning, of course. I'm still trying to find out what all the lazily-invoked "big speed hit" operations are in Gecko that can suddenly be triggered by scripts. The situational speed hits can sometimes be addressed through caching of expensive objects, or reuse of expensive results (a technique known as memoization; good article here). The Mozilla folks have put a lot of work into speeding up the JavaScript runtimes, but remember, the fastest runtime environment in the world can be brought to its knees by a poor choice of algorithms.
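Memoization, for the record, can be as simple as this sketch (expensiveLookup is a stand-in for whatever costly one-argument function you find yourself calling repeatedly with the same arguments):

// Wrap a function so repeat calls with the same argument hit a cache
function memoize(fn) {
   var cache = {};
   return function(arg) {
      if (!(arg in cache))
         cache[arg] = fn(arg);
      return cache[arg];
   };
}

var fastLookup = memoize(expensiveLookup);  // hypothetical expensive function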
Obviously it's not always possible to employ the two techniques mentioned above, and in certain cases the performance gain is not impressive. But in general, these remain underutilized techniques (from what I can tell), which is why I bring them up here.
If you have additional techniques for speeding up JavaScript, by all means, leave a comment. I'm interested in hearing your experiences.
Saturday, April 25, 2009
Can you pass this JavaScript test?
Think you know JavaScript? Try the following quick quiz. Guess what each expression evaluates to. (Answers given at the end.)
1. ++Math.PI
2. (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
3. typeof NaN
4. typeof typeof undefined
5. a = {null:null}; typeof a.null;
6. a = "5"; b = "2"; c = a * b;
7. a = "5"; b = 2; c = a+++b;
8. isNaN(1/null)
9. (16).toString(16)
10. 016 * 2
11. ~null
12. "ab c".match(/\b\w\b/)
This isn't a tutorial, so I'm not going to explain each answer individually. If you missed any, I suggest while (!enlightenment()) meditate();
The answers:
1. 4.141592653589793
2. false
3. "number"
4. "string"
5. "object"
6. 10
7. 7
8. false
9. 10
10. 28
11. -1
12. [ "c" ]
For people who work with JavaScript more than occasionally, I'd score the results as follows:
(correct answers: score)
5 - 7: KNOWLEDGEABLE
8 - 10: EXPERT
11: SAVANT
12: MASTER OF THE UNIVERSE
A few quick comments.
The answer to No. 2 is the same for JavaScript as for Java (or any other language that uses IEEE 754 floating point numbers), and it's one reason why you shouldn't use floating point arithmetic in any serious application involving monetary values. Floating-point addition is not associative. Neither is float multiplication. There's an interesting overview here.
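Concretely, the two groupings in question No. 2 produce two different doubles:

(0.1 + 0.2) + 0.3   // 0.6000000000000001
0.1 + (0.2 + 0.3)   // 0.6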
No. 6: In an arithmetic expression involving multiplication, division, or subtraction, the interpreter will try to cast string operands to numbers first. With the + operator, it's the other way around: if either operand is a string, the other is cast to a string, and the operation becomes concatenation.
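A few lines in a console make the rule concrete:

"5" * "2"   // 10 -- strings cast to numbers
"5" - 2     // 3  -- same
"5" + 2     // "52" -- a string operand turns + into concatenation
5 + 2       // 7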
No. 7: The tokenizer reads greedily from left to right ("maximal munch," in JavaScript as in Java and C), so a+++b tokenizes as a++ + b. In other words, what you've got here is "a, post-incremented, plus b," not "a plus pre-incremented b."
No. 9: toString( ) optionally takes a numeric radix argument. An argument of 16 means base-16, hence the returned string is the hex representation of 16, which is "10." If you write .toString(2), you get a binary representation of the number, etc.
No. 10: 016 is octal notation for 14 decimal. Interestingly, though, the interpreter will treat "016" (in string form) as base-ten if you multiply it by one.
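You can verify Nos. 9 and 10 in any console (note that octal literals like 016 are a pre-ES5 convention; strict mode rejects them):

(16).toString(16)   // "10" -- hex
(255).toString(2)   // "11111111" -- binary
016 * 2             // 28 -- the literal 016 is octal (14 decimal)
"016" * 1           // 16 -- string-to-number conversion is base-ten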
Don't feel bad if you didn't do well on this quiz, because almost every question was a trick question (obviously), and let's face it, trick questions suck. By the same token, if you did well on a test that sucks, don't pat yourself on the back too hard. It just means you're a little bit geekier than any human being probably should be.
Friday, April 24, 2009
Automatic Update Hell Must End
I recently stopped using anti-virus software. People think I'm crazy. But I'm not. It's about getting out of Automatic Update Hell.
And BTW, it's been a year now and my machines (Win XP and Vista) haven't been overtaken by the Bogeyman, because I don't practice the PC equivalent of unsafe sex. I'm not in the habit of opening e-mail attachments from people I don't know, clicking links in e-mails that have "Viagra" in the subject line, etc. I don't download games, wallpapers, screensavers, utilities I haven't heard of, crackz, hackz, or any of the other stupid-idiotware that can get you in trouble. I sure as hell don't run Internet Exploder, and guess what? I have a firewall, and a brain, and I know how to use them. (So Symantec, read my finger.)
Uninstalling Norton anti-virus software is extremely difficult, it turns out -- more difficult than uninstalling the malware it supposedly protects you against. But once it's gone from your machine, the hard-disk thrashing stops, the sudden CPU-spiking disappears, and the telltale sluggishness that accompanies a background download of the latest patch(es) vanishes.
Also, without a virus-scan of every document you open, the whole machine feels faster. Things like EditLive! and other applets load twice as fast. Zip archives open faster, etc. Sure, you can achieve this by turning off Norton's file-scan feature. But that's my point: Why are you buying software that you turn off?
So merely by getting rid of pointless anti-virus lockin-ware, I've scored a useful speedup and probably doubled the life of my hard drive. But I'm not totally out of Hell yet. There's still Microsoft to deal with.
Turning off Automatic Updates is one of the best things I've ever done to achieve better machine performance. Installing updates from Microsoft has always brought some kind of speed hit, somewhere, and sometimes brings new annoyances (new security dialogs that have to be turned off).
I'm very glad to be rid of Automatic Updates.
Sun's automatic Java updates are another painful annoyance. Again, though, you can turn these off fairly easily. But every time you manually upgrade your JDK, Sun seems to re-enable automatic Java updates. So you end up turning them off again.
But even after you get rid of Norton lockin-ware, disable Windows updates, and shut Sun the hell up, you're still not out of Hell yet, because there's yet another offender on your machine, a stealth daemon from Hades that sucks bandwidth needlessly while putting your hard drive through a rigorous TTD (test-to-destruction) regimen. I am talking, of course, about Adobe and its pernicious suite of updaters.
There's a famous line in Ace Ventura that makes me smile every time I hear it: "Dan Marino should die of gonorrhea and rot in hell." I would like to repurpose this statement somehow, except that corporations can't die of gonorrhea (any more than anyone else can), so Adobe, all I can say is: enough with the updates.
I can't think of a worse impediment to the widespread adoption of Adobe AIR than this:
[Screenshot: the Adobe AIR automatic-update dialog]
I've seen this dialog far too many times this year already. It makes me want to empty a full clip of copper-jacketed hollowpoints into my machine. What is so defective about AIR that I have to update it every other time I fire up Yammer? (For that matter, what's so hopelessly broken about Yammer that I have to update it five times a week?)
Enough ranting. All rants should end at some point, and be followed by a constructive proposal aimed at solving the problem(s) in question.
So let us ask: What, if anything, should software vendors do about all this?
I can suggest a few things.
First, software updates should be opt-in by default, never the reverse.
Second: A vendor should never silently turn automatic updates back on after the user has turned them off.
Third: Give me some granularity as to what type of updates I want to receive. There are three basic types of updates: Security patches, bug fixes, and enhancements. I rarely want all three. Within those three, there are (or should be) several levels of criticality to choose from. I may want security fixes that are critical, but not those that are merely nice for grandma to have. Let me choose.
Fourth: Don't ever, ever make a user reboot the machine.
Fifth: Let me have the option, stupid as it sounds, of checking for updates at an interval of my choosing. Not just "daily, weekly, or monthly." Let me actually specify a date (e.g., December 25) on which to check for updates and receive them all in a huge, bandwidth-choking download that utterly shuts me out of the machine for 24 hours instead of torturing me daily, throughout the year, with paper cuts.
Sixth: Write better software. Don't let so many security vulnerabilities go into distribution in the first place. Open-source as many pieces of your code as possible so the community can find security flaws before ordinary users do. Don't make the user do your security-QA.
Microsoft, Sun, (Oracle), Adobe, are you listening?
Thursday, April 23, 2009
Appliance-Oriented Architecture
I hate industry-jargon buzzwords, but I think it's not too early to promote a new one for 2010. I'm suggesting Appliance-Oriented Architecture (AOA). And yes, I think it just may be the Next Big Thing in IT (assuming IT isn't dead).
The big "Aha!" moment for me on this came when I was thinking about the Oracle Sun deal and realized that the true consequence of it was (is) that Oracle now enters the hardware biz, after being a pure software company since the beginning.
What does being a hardware company do for Oracle? It allows the company to create special-purpose hardware-software rollups known, colloquially, as appliances.
The marketing implications are far-reaching, of course, but consider the technical implications: Oracle gets to control the tuning and optimization of its software straight down to the bare metal. (And we know Oracle likes control.) Performance takes a huge jump when you can optimize for the hardware -- and for the OS. Let us not forget, Sun is an operating system company as well.
The possible synergies for Oracle of having direct control over hardware, OS, and software as a unified package are enormous.
What would Oracle put inside an appliance? How about a database-warehouse stack that "just works," for starters. But let's not limit our thinking to databases. Remember, Oracle is also in the search business (with Oracle Secure Enterprise Search). Oracle gains the potential to introduce a search appliance to go head-to-head with Google. Oracle is also an ECM player. Let your imagination run wild.
In this context, the Sun deal is understandable as an Oracle response to the soon-to-be-previewed HP-Microsoft "Midas" appliance -- which, again, I see jumpstarting a move to Appliance-Oriented Architecture.
Like all buzzwords, AOA encapsulates concepts and methodologies that are already in wide practice today (but haven't been rolled up, semantically, under one catchphrase). So let's not get carried away over-analyzing the term itself. The IT fantasy of plug-and-play black boxes that can be gridded together into an instant solution to hard problems is going to remain just that: a fantasy. AOA doesn't change it.
I do think, though, that the success of the Google appliance(s) has proven the existence of an untapped market for enterprise blackboxware, a market whose potential will be exploited in new and exciting ways by Oracle, Microsoft, HP, and others, going forward. We'll see BI-in-a-box, search-in-a-box, and just-about-everything-else-in-a-box, possibly including boxes in a box (think search-on-a-blade, BI-on-a-blade, and so on).
Put it on your calendar: Q1, 2010. AOA becomes real.
The big "Aha!" moment for me on this came when I was thinking about the Oracle Sun deal and realized that the true consequence of it was (is) that Oracle now enters the hardware biz, after being a pure software company since the beginning.
What does being a hardware company do for Oracle? It allows the company to create special-purpose hardware-software rollups known, colloquially, as appliances.
The marketing implications are far-reaching, of course, but consider the technical implications: Oracle gets to control the tuning and optimization of its software straight down to the bare metal. (And we know Oracle likes control.) Performance takes a huge jump when you can optimize for the hardware -- and for the OS. Let us not forget, Sun is an operating system company as well.
The possible synergies for Oracle of having direct control over hardware, OS, and software as a unified package are enormous.
What would Oracle put inside an appliance? How about a database-warehouse stack that "just works," for starters. But let's don't limit our thinking to databases. Remember, Oracle is also in the search business (with Oracle Secure Enterprise Search). Oracle gains the potential to introduce a search appliance to go head-to-head with Google. Oracle is also an ECM player. Let your imagination run wild.
In this context, the Sun deal is understandable as an Oracle response to the soon-to-be-previewed HP-Microsoft "Midas" appliance. Which I again see jumpstarting a move to Appliance Oriented Architecture.
Like all buzzwords, AOA encapsulates concepts and methodologies that are already in wide practice today (but haven't been rolled up, semantically, under one catchphrase). So let's not get carried away over-analyzing the term itself. The IT fantasy of plug-and-play black boxes that can be gridded together into an instant solution to hard problems is going to remain just that: a fantasy. AOA doesn't change it.
I do think, though, that the success of the Google appliance(s) has proven the existence of an untapped market for enterprise blackboxware, a market whose potential will be exploited in new and exciting ways by Oracle, Microsoft, HP, and others, going forward. We'll see BI-in-a-box, search-in-a-box, and just-about-everything-else-in-a-box, possibly including boxes in a box (think search-on-a-blade, BI-on-a-blade, and so on).
Put it on your calendar: Q1, 2010. AOA becomes real.
Wednesday, April 22, 2009
Where are the RIA "killer apps"?
I've found, over the years, that in almost every successful field of technology there's a "killer app," a category-leader so strong as to be universally understood as the archetype of success in a given domain. Conversely, when a technology lacks a killer app, it tends to be very telling. It says something about the future of that technology.
Take Java, for example. When Java first arrived, there were high hopes for its success based on the "write once, run anywhere" mantra. Applets started showing up all over the Web. But on the desktop, no killer apps. And even in the applet world, no killer apps, just a bunch of little games and academic demos. (Java's "killer app," the thing that would ensure its place in history, didn't really arrive until 1999: something called J2EE.)
So when a new technology-space like RIA comes along, with contenders having fancy names like AIR, Silverlight, or JavaFX, I sit back and wait for a "killer app" to emerge, signalling the appearance of a likely winner (or at least a contender with a future ahead of it) in the multi-way battle.
JavaFX was late to the party, so I continue to give it the benefit of the doubt, but it looks stillborn to me at this point (and I think the Oracle acquisition of Sun may delay progress with JavaFX until far past the point where it can regain ground against Adobe Flex/AIR). One thing we can all agree on is that there is no killer JavaFX app. In fact I can't even name a JavaFX app. Not a single one. "But it's too early," someone will say. To the contrary, my friend: It may be too late.
Silverlight has the full mass and motive power of the Microsoft juggernaut behind it, and for that reason we can't dismiss it (yet). But again, where are the killer apps? Shouldn't we have seen one by now? Shouldn't it be possible to walk up behind someone at any gathering of programmers, tap a total stranger on the shoulder, and get an immediate answer to the question: "Can you name a really cool Silverlight app?"
Yes, it's early.
And then there's Adobe with its shiny new AIR technology, built atop half-open, half-closed Flash and Flex infrastructure, an alluring platform with the not inconsiderable advantage of being built, largely, on ActionScript (hormone-enriched JavaScript). It's fun, it's pretty, it's new. But where are the killer apps?
Actually, there's a class of killer apps built around AIR now. (Maybe you've noticed?) It's called the Twitter Client. TweetDeck, Twhirl, AlertThingy, Toro, the list goes on and on. (Many of these are not just Twitter clients, of course. Some are perhaps better called social clients, since they interact with other services besides Twitter.)
Does this mean Adobe has won the RIA wars? No, of course not. But it sure has a nice head start.
What we need to see now is whether additional killer-app categories start to emerge around AIR. If AIR progresses beyond the point of supporting fun little SoCo apps, things could get very interesting (for users of cell phones, palm devices, PCs, netbooks, laptops, readers, and who-knows-what-else) in a hurry.
If not -- if AIR remains the province of waist-slimming Twitter clients and zero-calorie RSS feed readers -- then we may have yet another evolutionary dead end along the lines of (dare I say it?) Java Man.
Time will tell.
Tuesday, April 21, 2009
If Oracle-Sun is a Cloud Play, what was Ellison ranting about?
It seems a lot of pundits out there think the Oracle Sun acquisition is (to a large degree) about Oracle wanting to establish more of a foothold in the cloud-computing biz. I won't disagree with that.
What's bizarre, though, is that it wasn't that long ago (in fact, September 2008) that Larry Ellison drew some flak for his public rant on cloud computing, in which he called cloud computing "complete gibberish." The YouTube audio track of it is here. (I wrote a blog for CMS Watch about some of this back in November.)
Here's some of what Ellison had to say last September:
The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. I can't think of anything that isn't cloud computing with all of these announcements. The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?
You tell us, Larry. You tell us.
Rating WCM and ECM vendor web sites for page loadability using YSlow
It occurred to me the other day that the people who sell Web Content Management System software are (supposedly) experts in Web technology; and presumably they use their own software to build their own corporate Web sites (following the well-known Dogfood Pattern); and therefore their home pages ought to be pretty good examples of what it means to build a highly functional, performant Web page that downloads quickly and displays nicely.
To get a handle on this, I decided to use YSlow to evaluate the "loadability" of various vendors' home pages. If you haven't heard about it before, YSlow is a Firefox plug-in (or "add-on," I guess) that analyzes web pages and tells you why they're slow based on Yahoo's rules for high performance web sites. (Note that to use YSlow, you first need to install Firebug, a highly useful add-on in its own right. Every Firefox user should have this add-on. It's a terrific tool.)
It's important to understand what YSlow is not. It is not primarily a profiling tool (in my opinion, at least). The point of YSlow isn't to measure page load-times. It's to score pages based on a static analysis of their design-for-loadability. There are certain well-known best practices for making pages load faster. YSlow can look at a page and tell if those best-practices are being followed, and to what degree.
YSlow assigns letter grades (A thru F) for a page in each of 13 categories of best-practice. I decided to run YSlow against the home pages of 35 well-known WCM and/or ECM vendors, then calculate a Grade Point Average. The scores are posted below.
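For what it's worth, the grade-point arithmetic is just the conventional 4-point scale, averaged over the graded categories. Here's a quick sketch (the A=4 through F=0 mapping is an assumption on my part, and the sample grades are invented):

// Average a set of letter grades on a 4-point scale
function gpa(grades) {
   var points = { A: 4, B: 3, C: 2, D: 1, F: 0 }, total = 0;
   for (var i = 0; i < grades.length; i++)
      total += points[grades[i]];
   return total / grades.length;
}

gpa(["A", "A", "B", "C", "F"]);  // 2.6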
Please note that the full results, with a detailed breakout of exactly how each vendor did in each of the 13 YSlow categories, are available in a (free) 121-page report that I put together over the weekend. The 1-megabyte PDF can be downloaded here. It contains some important caveats about interpreting the results, and also talks about methodology.
Once again, I urge you not to draw any conclusions before reading the PDF, which explains in detail how these numbers were obtained. (Note: The PDF contains bookmarks for easy navigation. They may not be showing when you first open the file. Use Control-B to toggle bookmark-navtree visibility.)
VENDOR | GPA |
Alfresco | 2.27 |
Alterian | 2.18 |
Clickability | 2.72 |
CoreMedia | 3.09 |
CrownPeak | 2.90 |
Day | 3.09 |
Drupal | 3.18 |
Ektron | 2.63 |
EMC | 1.81 |
Enonic | 3.36 |
EPiServer | 2.18 |
Escenic | 2.72 |
eZ | 2.63 |
FatWire | 2.18 |
FirstSpirit (e-Spirit) | 3.27 |
Hannon Hill | 3.18 |
Hot Banana (Lyris) | 2.18 |
Ingeniux | 1.90 |
Interwoven.com | 1.81 |
Joomla! | 2.81 |
Magnolia | 3.27 |
Nstein | 2.27 |
Nuxeo | 2.09 |
OpenCMS | 2.18 |
Open Text | 2.27 |
Oracle | 3.18 |
PaperThin | 2.72 |
Percussion | 1.36 |
Plone | 3.09 |
Refresh Software | 2.54 |
Sitecore | 3.00 |
TerminalFour | 2.27 |
Tridion | 2.00 |
TYPO3 | 2.90 |
Vignette | 1.81 |
Maybe others can undertake similar sorts of testing (I'd particularly like to see some actual timing results, comparing page load times for the various vendor pages, although this can be notoriously tricky to set up). If so, let me know.
Does it mean a whole lot? Not really. I think it just means some vendors have more of an opportunity than others to improve the performance of their home pages. But many factors are at play any time you talk about Web site performance, so it's not fair to form any final judgment based on the scores shown here. Use them as a starting point for further discussion.
Monday, April 20, 2009
Making Infectious Memes Fashionable
A parcel came today bearing a T-shirt. I nearly fell over laughing! I'm still laughing.
Thank you, Adriaan Bloem, for this wonderful gift.
And for those who want to know exactly what the inside joke is: please proceed directly to Jon's blog post here. (And see my comment immediately following it.)
Saturday, April 18, 2009
What the heck is a meme anyway?
In recent weeks, I've been accused of something no one has ever accused me of before: creating a meme. The charge seems weak to me, though, based on my understanding of "meme." But let's review.
On 26 February 2009, I wrote a blog for CMS Watch called "A Reality Checklist for Vendors" in which I enumerated 15 things that a CMS software vendor (but really, any software vendor) needs to do these days in order to stay relevant. Things like posting a free downloadable eval version of your software on your company web site; eating your own dogfood (the vendor should use its own software to create its website); and having one pricesheet for all customers (we don't quote ten different prices to ten different customers). Simple things, basic sanity-check items. For the full list, go here.
Not long thereafter, on 17 March, Michael Marth wrote a blog at dev.day.com ("Introducing the CMS Vendor Meme") giving Day Software's answers to all 15 checklist items in my Reality Checklist. Not only that, Michael created a scoring system, assigned scores to Day's answers, and challenged ("tagged") several other vendors to respond in like manner.
This set off a flood of responses from vendors (including many vendors who weren't tagged by anyone), and the results are still coming in. The situation is well-captured by Jon Marks in his excellent series of blog posts here, where (incidentally) he calls it a Celebrity CMS Deathmatch.
As a result of all this, I've been accused of starting a meme, which makes me want to understand "meme" better. So I've done a little digging and found the internet definitions of meme rather unsatisfying. They seem sloppy, semantically speaking. Maybe that's just in the nature of memes.
Some definitions equate meme with slang. (But in that case, why not just stick with slang?) Other definitions point in the direction of slang with a pop-culture theme. Or anything on the internet that has a catchy phrase associated with it. It gets even sloppier: If you go to http://knowyourmeme.com, you find things like Yo Dawg, Advice Dog, and I Like Turtles. (But oddly, not WTF?)
Some people feel that meme gives memetics a bad name.
What I've decided is that it's easier for me to understand meme in terms of its characteristics rather than a declarative definition. From what I can tell, a meme has characteristics of:
1. Theme: A meme captures a theme...
2. Originality: ...in a new way, with new nuancing.
3. Compositionality: New nuance is achieved by combining other terms and themes.
4. Emergent lexical cohesion (tm): Through suitable juxtaposition of imagery, slang, conceptual archetypes, etc., it becomes apparent to the first-time listener that a familiar notion is encapsulated in the meme. That is, a person hearing it for the first time can synthesize the intended meaning, even if the meaning is unexpected.
5. Transmissibility: The meme is easily communicated from one person to another.
6. Contagion: A meme usually spreads. If it didn't, it wouldn't enter the common lexicon.
That's still not a satisfying definition of "meme," to me, but it captures a lot more of it than the definitions I've seen floating around on the Web.
So I guess maybe I am guilty of creating a meme, if "We Get It" combined with "checklist" combined with "CMS Vendors" produces a meme. But it seems weakly reachable somehow.
"10 Things About {X}" seems to qualify as a meme, though.
Tagging someone to get them to participate in a meme-off seems not a meme but a pattern. But then again, maybe patterns are memes.
And so, to finish off this post, I invite commenters to answer the following question: How many memes can you find in this blog post? I see quite a few. But I am interested in knowing what others see in terms of memes.
Also, a challenge (extra points, and attribution, to anyone who answers this correctly). Explain the following meme:
厚黑學 厚黑学
(It's one of my favorites.)
Friday, April 17, 2009
CMIS: a standard in search of scenarios?
I've been looking all over the place for use-cases and user stories that illustrate the key requirements for CMIS (Content Management Interoperability Services, soon to be an OASIS-blessed standard API for content management system interoperability). As far as I can tell, CMIS is being developed without a proper set of real-world use-cases. I prefer "user narratives" over "use cases" because the latter often is nothing more than a phrase or two, whereas a narrative is just what it sounds like: A sentence-by-sentence explanation of a chain of events. A user narrative captures intent, actors, actions, results, consequences.
I'm finding none of that in CMIS, except for four rather trivial use-case descriptions in http://xml.coverpages.org/CMIS-v05-Appendices.pdf.
I gather from reading some of the Technical Committee's minutes that people have taken "develop use cases" as action items. That's good.
Going ahead with a spec without first understanding the use-cases leads to things like the CMIS "policy object," which I mentioned once before as something that should be (and I think will be) dropped from CMIS.
"Policy" should be dropped for two reasons. One is that it slows things down. If you want to get a standard out fast, don't make it bigger than it needs to be. Second, it's not at all clear what "policy" means. Various people have said it is basically "access control," whereas at least one CMIS expert has said that the policy object can support retention policies. Those are two quite different things.
In any case, CMIS-Policy belongs in its own separate standards effort (if indeed it has any need to exist; and for that, we need user narratives). It's out-of-band here, IMHO. It's not core.
I'm sure people involved with CMIS are very busy drawing up scenarios and user stories, and we'll hear more about it very shortly. Personally, I'd like to see some detailed scenarios around the manipulation of compound documents. I have some concerns there, but that discussion will have to wait for another time.
It's exciting to watch CMIS come together, in any case. A year from now, we may be seeing some very interesting content-management (and search) mashups. I wish I knew what they're going to look like. It does set the imagination spinning, though. No question about that.
Thursday, April 16, 2009
Does workflow always have to suck?
I evaluate CMS and DAM systems for a living, and one thing I keep coming back to is the fact that so very few of these expensive systems "do workflow" well. I think part of this may be because there are no industry-accepted standards around the kind of workflow I'm talking about (thus, every vendor reinvents the wheel). The closest thing to a standard is BPEL4People, which is an extension of BPEL (and thus too heavy, IMHO). There needs to be a minimalist standard around this domain space, something dedicated to human-facing interactions, supporting process-facing tasks optionally, not the other way around.
I think the other reason so many Web CMS and DAM vendors fail to do a nice job with workflow is that it's just plain hard. Light-duty taskflow or workflow (or "pageflow," as we called it where I once worked) is deceptively difficult to implement, especially if there's a requirement for good UIs around administration, design, and (re)configuration of workflows. And especially if there's a requirement for hot failover (being able to deal with STONITH and other messinesses). And especially if you need to support cyclic (reentrant) flows. And especially if you want to offer good extensibility APIs. And, and, and.
Most systems that support approval workflows (of the type seen in web publishing scenarios) get the basics right, but not much beyond that. Typically, though, the customer hasn't really thought out his or her use-cases very well before buying a system. And so begins a long cycle of design, test, rollout, fail, back to the drawing board.
Setting up workflows typically means developers need to touch XML, code, properties files, templates, and/or miscellaneous artifacts, often editing them by hand (since it's unusual to get good tools for this). You may be able to draw a basic flow on a canvas (although even that isn't done well, by many vendors), but applying timeout and retry policies, and handlers for exceptional conditions, may involve a good bit of "dirty fingernails" work. When you're all done, the customer thinks "Okay, done. This should last us for all eternity. Glad we never have to do that again!" But very soon, it becomes clear that a number of corner-cases that were not anticipated at the design stage need to be handled better. So it's surgery time again. Back to messing with a bunch of artifacts and their cobweb of dependencies, then finding a way to test it all, etc.
Administration is often not well supported. How do you run a report for all workflows of Type X in the past month that either finished abnormally, didn't finish at all, or took too long? (Don't tell me "look in the logs.") What UI tools do you have for simply finding an orphaned workflow, or killing an in-progress workflow instance? How do you know if bouncing one machine (in a cluster) left one or more workflows (of potentially hundreds in progress) in an inconsistent state?
And then there's rights administration. When Sally goes on vacation, how does she give her subordinate, Bob, her "rights" in the system for a workflow of Type A but not for workflows of Type B?
The issues get sticky in a hurry. But I do occasionally see workflow systems that combine decent functionality with a usable graphic designer, or with good administrative tools. But it's hard to get a robust engine, a good feature set, good visual design tools, and good administrative tools all in one package. There are always warts and holes.
So I think the right way to look at all this, if you're a vendor, is that this presents a ripe opportunity. If you're a CMS or DAM vendor looking to differentiate, provide a superior workflow solution.
But there's also an opportunity in the market, right now (IMHO), for someone to come up with a fully productized lightweight workflow product with decent design, development, and admin tools, and easy extensibility (of course), that can be bolted onto a Web CMS or DAM system with little effort, so that customers who are using bespoke WF systems (of the kind that are so common in this industry) can move over to "real" workflow. And get real work done.
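To make that concrete, here's the flavor of minimal, embeddable API I have in mind. This is a hypothetical sketch (every name in it is invented), and it deliberately leaves out the hard parts that make real products valuable (persistence, timeouts, clustering, reporting, rights delegation):

import java.util.HashMap;
import java.util.Map;

interface Task {
    String name();
    // Returns the name of the next task to run, or null when the flow is done.
    String execute(Map<String, Object> context) throws Exception;
}

final class Workflow {
    private final Map<String, Task> tasks = new HashMap<String, Task>();
    private final Map<String, Integer> maxRetries = new HashMap<String, Integer>();

    Workflow add(Task task, int retries) {
        tasks.put(task.name(), task);
        maxRetries.put(task.name(), retries);
        return this;
    }

    // Walks the flow from 'start', honoring each task's retry budget.
    void run(String start, Map<String, Object> context) {
        String current = start;
        while (current != null) {
            current = runWithRetries(tasks.get(current), context);
        }
    }

    private String runWithRetries(Task task, Map<String, Object> context) {
        int budget = maxRetries.get(task.name());
        for (int attempt = 0; ; attempt++) {
            try {
                return task.execute(context);
            } catch (Exception e) {
                if (attempt >= budget) {
                    throw new RuntimeException("Task " + task.name() + " failed", e);
                }
            }
        }
    }
}

Something with roughly this surface area, plus the admin and reporting tools described above, is the whole product I'm describing.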
I wonder why products like this aren't more common? Again: Probably because it's hard. But again: This is an opportunity . . . for someone.
Tuesday, April 14, 2009
Should standards be copyrighted?
In the last few days I've begun to sink my teeth into the CMIS (Content Management Interoperability Services) standards documents a little bit. Digesting it all is going to take a while. The docs are not too big (yet), but I'm a slow reader.
One thing that's a little weird to me is that the drafts of the standard (available at the above link) carry a Copyright notice on behalf of EMC, IBM, and Microsoft.
I find this peculiar for a standards document that is supposed to be the collaborative work of numerous industry players (including Alfresco, Oracle, Open Text, and others). I'm sure it just means that the particular instance-documents comprising the draft of the standard were written by people from EMC, IBM, and Microsoft, and the companies in question decided (based on some sort of policy emanating from Legal) to assert ownership over the instance-docs.
Why have a copyright at all, though? This is going to be an industry standard, not an EMC standard, or an IBM or Microsoft standard. Copyright means you and I and others can't reproduce the document without permission. (It does say "All rights reserved.")
Someone will say "Well, this is the way IETF does it," or "This is the way [XYZ] does it," which of course is silly. That's not a defense. IETF shouldn't copyright anything either.
What does copyrighting a standards document achieve? Is it supposed to prevent bastardization of the standard by someone else who tries to publish a different version of it? That's not what copyright does. Copyright does not establish the "sole authoritative source-ness" of a document. It does not say "This is the Truth, this is the one true document defining the Standard." That's the job of the standards body. OASIS decides what the true CMIS standard consists of. And that "truth" can reside in an uncopyrighted work, just as easily as in a copyrighted work.
Putting copyrights on standards just does not make sense to me. It doesn't achieve anything except to inhibit reproduction and dissemination of the primary docs. Which is usually not a goal of the standards process (or shouldn't be). Standards should be widely disseminated. Copyright is designed to defeat that.
A nit, perhaps. But not to me.
Monday, April 13, 2009
Coming to grips with CMIS
I'm slowly but surely coming to grips with CMIS (Content Management Interoperability Services), which will soon be the lingua franca of CRUD in the content management world, and maybe some other worlds as well.
After reading some of the CMIS draft docs and watching a couple of EMC's CMIS videos at YouTube, I'm starting to grok the basic abstractions. Here are a few first impressions. I offer these impressions as constructive criticism, BTW, not pot-shots. I want to see CMIS succeed. Which also means I want to see it done right.
The v0.5 draft doc for the Domain Model says there are four top-level ("first class", root) object types: Document, Folder, Relationship, and Policy. (Support for the Policy type is optional. So there are basically three root types.)
Already I question whether there shouldn't perhaps be a top-level object type ("CMISObject") that everything inherits from, rather than four root objects, since presumably all four basic object types will share at least a few characteristics in common. But maybe not.
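For illustration only (these names are mine, not the spec's), the kind of common root I'm imagining would look something like this:

import java.io.InputStream;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a common root type, not the CMIS model itself.
interface CMISObject {
    String id();
    Map<String, Object> properties(); // the part all four types share
}

interface Document extends CMISObject {
    InputStream contentStream();
}

interface Relationship extends CMISObject {
    CMISObject source();
    CMISObject target();
}

interface Folder extends CMISObject {
    List<CMISObject> children();
}

Even if the shared surface turns out to be small, a common root gives client code one type to program against.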
Page 16 of the Part I doc says that Administration is out of scope for CMIS. But later on, we learn that "A policy object represents an administrative policy that can be enforced by a repository." We also find applyPolicy and removePolicy operations, which are clearly administrative in intent.
Remarkably, Policy objects can be manipulated through standard CMIS CRUD operations but do not have a content stream and are not versionable. However, they "may be" fileable, queryable, or controllable. Why are we treating this object as a file ("fileable") but not allowing it to be versionable? And why are we pretending it doesn't have a content stream? And why are we saying "may be"? This is too much fuzziness, it seems to me.
Right now, the way CMIS Part I is worded, a "policy" can be anything. One might as well call it Rules. Or Aspects. Or OtherStuff. The word Policy has a specific connotation, though. Where I come from, it implies things like compliance and governance, things that MAY intersect role constraints, separation of duties, RBAC, and possibly a lot more; and yes, these concepts do come up in content management, in the context of workflow. But it seems to me that policy, by any conventional definition, is rather far afield from where CMIS should be concentrating right now. If "policy" means something else here, let's have a good definition of it and let's hear the argument for why it should be exposed to client apps.
I say drop the Policy object type entirely. It's baggage. Keep the spec light.
I like the idea of having Relationships as a top-level object type. The notion here is that you can specify the designation of a source object and a target object that are related in some way that the two objects don't need to know about. I like it; it feels suitably abstract. And it models a construct that's used in all sorts of ways in content management systems today.
The Folder object type, OTOH, is too concrete for my tastes. We need to stop thinking in terms of "folder" (which is a playful non-geek term for "directory", designed to make file systems understandable by people who know about manila folders), and think more abstractly. What notion(s) are we really trying to encapsulate with the object type currently dubbed "Folder"? At first blush, it would seem as though navigability (navigational axes) constitute(s) the core notion, but the possible graphs allowed by Folder do not match popular navigational notions inherent in file-system folders (at least on Windows). In other words, the many-to-many parent-child mappings allowed by CMIS's Folders destroy the conventional "folder" metaphor, unless you're a computer science geek, in which case you don't think in terms of folders anyway.
I think what "Folder" should try to encapsulate is a Collection of Relationships. A navigation hierarchy (whether treelike or not) is just one possible subclass of such a collection. We cheat ourselves by trying to emulate, at the outset, some parochial notion of "folders" based on a particular type of graph. We need Folder to be more general. It is a Collection of Relationships. We already have Relationships, so why not take the opportunity to reuse them here?
I'd like to see more discussion about Folders, but I fear that the rush to get CMIS blessed by OASIS may have already precluded further discussion of this important issue. I hope I'm wrong.
Interesting stuff, though, this CMIS. And wow, do I still have a lot of grokking to do . . .
Sunday, April 12, 2009
10 things about me
Since today is Easter Sunday and I can be pretty sure no one in the western hemisphere will be reading this blog today, I thought maybe it's as good a day as any to write a near-content-free "off topic" blog. So, 10 things about me. Here goes:
1. I grew up in Los Angeles.
Things I remember from childhood: The ground shudders very slightly whenever a nuclear bomb goes off at the nearby Nevada test site (250 miles away). I also remember sonic booms happening practically daily. (Edwards AFB was 112 miles distant. The X-planes flew almost every day.) I once saw telephone lines whirl like jump-ropes during an earthquake. This was a long time ago. Think Jailhouse Rock.
2. Aviation has been a big part of my life.
I've been a pilot (ASMEL/Instrument) for a long time and made a living writing about it for many years. I've lost around 30% of the hearing in one ear due to so much time spent in noisy cockpits.
3. I have degrees in biology and microbiology that I've never used.
University of California, Irvine (B.S.), U.C. Davis (M.A.)
4. Money means little to me.
Which is why I have none.
5. I started a monthly publication in 1979 that is still in publication today.
Through that desktop publishing business and one other, I learned a lot about direct marketing (I've designed, written, produced, and tracked direct mail campaigns encompassing millions of pieces of "junk mail." That all stopped when the Internet came along, of course.) Thanks to DTP, I have been able to spend most of my career self-employed.
6. The most fun thing I've ever done as a big-company employee was serve on Novell's Inventions Committee.
I got to examine, and vote on, patent proposals submitted from Novell engineers (and some non-engineers) all over the world. The other committee members, mostly Distinguished Engineers, were a joy to work with. I learned a lot about software patents and how they figure into corporate strategy. I also picked up a lot of technology knowledge.
7. I'm a slow reader.
In short bursts, I can go as slow as 30 words per minute.
8. I'm a coffee slut.
I'll drink any coffee of any kind, anywhere, any time; the blacker the better.
9. I'm mildly agoraphobic and like to lock myself in hotel rooms.
It takes incredible energy for me to feel like coming out of a hotel room once I'm in it. Unless, of course, coffee is involved.
10. My secret passion is pastel portraiture.
I don't want anyone to feel obligated to do the "10 Things" thing if they don't want to, but if I could nominate (tag) people whom I'd like to see do this, it would be the wonderful people on my Blogroll. Especially Irina, Jon, Julian, Lee, and Pie. Anyone care to step forward? You're it.
Saturday, April 11, 2009
The principle of Last Responsible Moment
This post has moved to: http://asserttrue.blogspot.com/2014/08/last-responsible-moment.html. Please forgive the inconvenience.
A military officer who was about to retire once reportedly said: "The most important thing I did in my career was to teach young leaders that whenever they saw a threat, their first job was to determine the timebox for their response. Their second job was to hold off making a decision until the end of the timebox, so that they could make it based on the best possible data."
This is an illustration of a principle that I think is (sadly) underutilized not only in R&D circles but in project planning generally, namely the principle of delaying decisions until the "last responsible moment" (championed by the Poppendiecks and others). The key intuition is that crucial decisions are best made when as much information as possible has been taken into account.
This is a good tactic when the following criteria are met:
1. Not all requirements for success are known in advance
2. The decision has huge downstream consequences
3. The decision is essentially irreversible
If one or more of the conditions is not met, the tactic of deferring commitment might not gain you anything (and could actually be costly, if it holds up development).
Conventional project planning, as practiced in the enterprise today, tends to overemphasize the need for completeness in requirements-gathering. The completeness fetish leads to the Big Huge Requirements Document (or Big Huge RFP) Syndrome and can introduce unnecessary dependencies and brittleness into implementations.
There's a certain hubris associated with the notion that you can have a complete specification for something. You almost certainly can't. You almost certainly don't know your true needs ahead of rollout. True, some decisions have to be made in the absence of complete data (you don't always have the luxury of waiting for all the information to arrive), and there's the fact that you need to start somewhere even if you know that you "don't know what you're doing" yet. But that's not my real point. My real point is that too often we make decisions ahead of time (that we didn't really have to make, and later realize we shouldn't have made) based on the usually-false assumption that it's possible to know all requirements in advance.
What I'm suggesting, then, is that you reconsider whether it's always a good idea to strive for a complete specification before starting work on something. Accept the fact that you can't know everything in advance. Allow for "emergentcy." (Good decisions are often "emergent" in nature.) Reject the arrogant notion that with proper advance planning, you'll have a project that goes smoothly and results in a usable solution. Most of the time, from what I've seen, it doesn't work out that way. Not at all.
Friday, April 10, 2009
Most time spent in development is wasted
Yesterday, I was thinking about complexity in software systems and I had a kind of "Aha!" moment. It occurred to me that most of the programmer-hours spent in product development are wasted.
We know that something like 30% to 40% (some experts say 45%) of the features in a software system are typically never used, while another 20% are rarely used. That means over half the code written for a product seldom, if ever, actually executes.
The irony here, if you think about it, is mindblowing. Software companies that are asking employees to turn their PCs off at night to save a few dollars on electricity are wasting huge dumpster-loads of cash every day to create code that'll never execute.
Is it worth creating the excess code? One could argue that it is, because there's always the chance someone will need to execute the unused bits, at some point in time. In fact, if you think about it, there are many things in this life that follow the pattern of "you seldom, if ever, need it, but when you need it, you really need it." Insurance, for example. Should we go through life uninsured just because we think we'll never experience disaster?
Unused software features are not like health insurance, though. They're more like teacup and soda straw insurance. Insurance at the granularity level of teacups is ridiculous (and in the aggregate could get quite expensive). But that's kind of the situation we're in with a lot of large, expensive software systems -- and a fair number of popular desktop programs, too (Photoshop, Acrobat Professional, OpenOffice being just three). You pay for a huge excess of features you'll never use.
There's no magic answer to the problem of "How do you know which features not to write?", obviously. It's a judgment call. But I think it's critical (for vendors, who need to cut costs, and customers, who are looking for less-expensive solutions to problems) to try to address the problem in a meaningful fashion.
What can be done? At least two things.
We know that formal requirements tend (pretty much universally) to err on the side of feature-richness, rather than leanness. It's important to address the problem early in the chain. Don't overspecify requirements. In software companies, product managers and others who drive requirements need to learn to think in terms of core use cases, and stop catering to every customer request for a specialized feature. There's a cost associated with implementing even the smallest new feature. Strive to hit the 80% use-case. Those are the scenarios (and customer needs) you can't afford to ignore.
If you're a software buyer, stop writing gargantuan RFPs. Again, figure out what your core use-cases are. You won't know what your real-world needs are until you've been in production a year. Don't try to anticipate every possible need in advance or insist on full generality. Stick with hard-core business requirements, because your odds of getting that right are small enough as it is.
Another approach to take is to insist on modular design. Factor out minority functionalities in such a way that they can easily be "added back in" later through snap-ins or extra modules. Create a framework. Create services. Then compose an application.
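To illustrate the shape of the idea (hypothetical names here, not any particular product's API):

import java.util.ArrayList;
import java.util.List;

// The "snap-in" idea in miniature: edge-case features live behind an
// extension point instead of inside the core product.
interface Extension {
    String featureName();
    void install(Core core);
}

final class Core {
    private final List<Extension> extensions = new ArrayList<Extension>();

    // Core use-cases are built in; minority features arrive as Extensions.
    void register(Extension extension) {
        extensions.add(extension);
        extension.install(this);
    }
}

The point of the design is economic as much as technical: the core team ships only the 80% use-case, and the cost of every edge-case feature falls on whoever actually needs it.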
Product managers: Quit listening to every ridiculous feature request from the field. Don't drive needless features into a product because one customer asked for this or that edge-case to be supported. Make it easy for customers and SEs to build (and share) their own add-ons instead.
Infrequently executed "baggage code" is costly -- for everyone. Let's stop demanding it.
Thursday, April 09, 2009
Why is everything being declared Dead?
Why is everything in technology being declared dead these days?
The Burton Group got huge PR mileage last January when one of its 12 vice presidents smugly declared "SOA Is Dead." Bell-clangers throughout the blogosphere latched onto it immediately as if John Lennon had come back to life as an IT savant.
The only problem with the Burton VP's oh-so-keenly-insightful declaration is that it's not original. David Chappell made the same declaration in August 2008 at TechReady7, Microsoft's semi-annual internal technical conference in Seattle.
But it turns out Hurwitz & Associates made the claim in October 2007.
And Jeff Nolan of Venture Chronicles declared "SOA Is Dead" in a blog back in April 2006.
All of which led Robin Bloor to declare recently: "The People Who Think SOA is Dead, Are Dead."
Of course, SOA isn't the only thing that's dead. Other recent death sentences include:
Web Services are dead
SOAP is dead
Web Content Management is dead
Cloud computing is dead
JSR process is dead
Java itself is dead
IT is dead
It seems to me that declarations of this sort are the kind of thing a publicity-grabbing publicity grabber does to grab publicity.
I think the only thing that's dead is imagination and originality on the part of certain analysts, journalists, and industry figures who, unable to think of something more meaningful to talk about in speeches and blogs, take cheap shots at technologies and processes that are still useful, still used every day, and (ultimately) still quite able to fog a mirror.
What do you think?
Wednesday, April 08, 2009
Swing versus death by paper cut
The other night, I was looking at the JSR-296 Swing Application Framework prototype implementation, which is (according to the landing page) "a small set of Java classes that simplify building desktop applications." What made me smile is the statement (on that same landing page): "The intended audience for this snapshot is experienced Swing developers with a moderately high tolerance for pain. "
When I tweeted this, Gil Hova tweeted back: "Wait. There are Swing developers with low tolerances for pain?"
I laughed so hard I almost blew coffee out my nose. (Now that's taking Java seriously.)
Before going any further, I should tell you that the Swing Application Framework appears to be dead (the JSR is marked Inactive), with the most recent build carrying a date of 19 October 2007. It was supposed to go into Java SE 7. But it now seems to be in a kind of limbo.
But in case you were wondering what, exactly, the Swing App Framework is designed to let you do, here's the Hello World example cited by the creators.
// Imports added for readability. The Application base class shipped in the
// SAF snapshot jar; its package name (and the startup() signature) varied
// across builds, with later releases using org.jdesktop.application.
import java.awt.BorderLayout;
import java.awt.Font;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import javax.swing.JFrame;
import javax.swing.JLabel;

public class ApplicationExample1 extends Application {

    JFrame mainFrame = null;

    // Lifecycle hook: the framework calls startup() after launch().
    @Override protected void startup(String[] ignoreArgs) {
        JLabel label = new JLabel("Hello World", JLabel.CENTER);
        label.setFont(new Font("LucidaSans", Font.PLAIN, 32));
        mainFrame = new JFrame(" Hello World ");
        mainFrame.add(label, BorderLayout.CENTER);
        mainFrame.addWindowListener(new MainFrameListener());
        mainFrame.setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE);
        mainFrame.pack();
        mainFrame.setLocationRelativeTo(null); // center the window
        mainFrame.setVisible(true);
    }

    // Routes the window-close click into the framework's exit() machinery.
    private class MainFrameListener extends WindowAdapter {
        public void windowClosing(WindowEvent e) {
            exit(e);
        }
    }

    public static void main(String[] args) {
        launch(ApplicationExample1.class, args);
    }
}
If you run the foregoing code, you get an ugly large-type edition of browser-JavaScript's window.alert(). Except it takes 20 lines of code instead of one. I'm sure there's a lot of goodness packed() away somewhere in the bowels of the SAF API, but it sure isn't showing up in this Hello World code.
This snippet illustrates a scant handful of the many annoyances that make Swing programming feel so much like death by a thousand paper cuts. For example, it shows the repetitive boilerplate code Swing programmers are forced to write every time something as common as a JFrame is needed. The setLocationRelativeTo(null), the setVisible(true), the ever-ridiculous pack() -- all needless mumbo jumbo. Get rid of them! Roll them up out of view. Make them default behaviors. If I want to override these things, let me. But nine times out of ten, when I create a JFrame, I do, in fact, want it to be centered onscreen; I want it to be visible; I want it to go away when dismissed (and be garbage collected); and I don't want to have to recite pack() ever again in my lifetime.
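To make the point concrete, here's a minimal sketch of the kind of rolled-up defaults I have in mind. The Frames class and its show() method are hypothetical (nothing like them ships with Swing), but the sketch compiles against plain Java SE:

import java.awt.Component;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public final class Frames {

    private Frames() {}

    // The nine-times-out-of-ten defaults: packed, centered onscreen,
    // disposed (and garbage-collectable) when dismissed, and visible.
    public static JFrame show(String title, Component content) {
        JFrame frame = new JFrame(title);
        frame.add(content, java.awt.BorderLayout.CENTER);
        frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
        frame.pack();
        frame.setLocationRelativeTo(null); // center the window
        frame.setVisible(true);
        return frame; // callers who want something different can still override
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                Frames.show("Hello World", new JLabel("Hello World", JLabel.CENTER));
            }
        });
    }
}

One call replaces six lines of ritual, and nobody has to recite pack() again.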
A library that makes programmers write boilerplate is lame. It violates a basic principle of good API design, which is that any code that can be hidden from the programmer should be hidden. (See slide 28 of Joshua Bloch's excellent slideshow.) Not giving things reasonable default values is, likewise, a sin.
There's something else here that rubs me the wrong way, which is that if you're creating a new API (or framework, in this case) to supplement an existing API, it seems to me you shouldn't use that as an opportunity to introduce additional language syntax. In other words, don't introduce annotations if the underlying API doesn't use them. Keep it simple. Streamline. Simplify.
But enough ranting. On balance, I think the Swing App Framework is a good idea and adds value, and I think something like it should go into Java SE 7, because although it doesn't make writing JFrame code any less annoying, it does provide a host of application services that would otherwise require Swing programmers to write tons and tons of really tedious code. Anything that reduces that tonnage is good, I say.
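As a taste of those services: in later snapshots (under the org.jdesktop.application package), SingleFrameApplication handled EDT-safe launch, lifecycle callbacks, and session state -- window geometry saved and restored across runs -- without any code on your part. A sketch, assuming that later API:

import javax.swing.JLabel;
import org.jdesktop.application.SingleFrameApplication;

public class ServicesExample extends SingleFrameApplication {

    // Lifecycle callback; the framework invokes it on the event dispatch thread.
    @Override protected void startup() {
        // show() installs the component in the application's main frame and
        // restores the window's size and position from the previous session.
        show(new JLabel("Hello World", JLabel.CENTER));
    }

    public static void main(String[] args) {
        launch(ServicesExample.class, args);
    }
}

Session state gets written back out at shutdown, also for free.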
Tuesday, April 07, 2009
Turn off your step-thru debugger
Years ago, when I was first learning to program, I ran into a problem with some code I was writing, and I asked my mentor (an extraordinarily gifted coder) for some help. He listened as I described the problem. I told him all the things I had tried so far. At that time, I was quite enamored of the Think C development environment for the Mac. It had a fine step-thru debugger, which I was quite reliant on.
My mentor suggested a couple more approaches to try (and when I tried them, they worked, of course). Then he made a remark that has stayed with me ever since.
"I try to stay away from debuggers," he said. "A debugger is a crutch. You're better off without it."
I was speechless with astonishment. Here was someone who wrote massive quantities of Pascal and assembly for a wide variety of platforms -- and he never used a debugger! I couldn't have been more shocked if he told me he had perfected cold fusion.
"If you get in the habit of using a debugger," my mentor pointed out, "you'll get lazy. A certain part of your brain shuts off, because you expect the debugger to help you find the bug. But in reality, you wrote the bug, and you should be able to find it."
Still stunned, I asked: "What do you do when you have a really nasty bug?"
He said something I'll never forget. "I make the machine tell me where it is."
Make the machine tell you where the bug is. What a wonderful piece of advice. It's the essence of troubleshooting, whether you're trying to fix a car that won't start, trace an electrical fault, or debug a piece of software.
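In code, "making the machine tell you" usually means instrumenting the program with assertions and checkpoint traces, then letting a failing run name the spot. Here's a minimal sketch in Java; the sumOfSquares() method and its planted overflow bug are mine, purely for illustration:

import java.util.logging.Logger;

public class TraceExample {

    private static final Logger LOG = Logger.getLogger(TraceExample.class.getName());

    static int sumOfSquares(int[] values) {
        int total = 0;
        for (int i = 0; i < values.length; i++) {
            total += values[i] * values[i];
            // Checkpoint: an invariant the machine verifies on every pass.
            assert total >= 0 : "invariant broken at index " + i + ", total=" + total;
        }
        // Enable FINE logging to see checkpoint traces in healthy runs.
        LOG.fine("sumOfSquares checkpoint: total=" + total);
        return total;
    }

    public static void main(String[] args) {
        // Run with 'java -ea TraceExample' so assertions are enabled.
        // 46341 squared overflows int, so the assertion trips and reports
        // exactly where things went wrong -- no stepping required.
        int[] data = { 3, 4, 46341 };
        System.out.println(sumOfSquares(data));
    }
}

When the invariant breaks, the machine reports the failing index and the bad value, and the stack trace does the rest.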
My friend (who did a lot of "realtime" programming in assembly, among other things) pointed out to me that there are many programming scenarios in which it's impossible to run a debugger anyway.
I took my mentor's advice and stopped using a step-through debugger. The only real debugger I continued to use (at that time) was Macsbug, which I occasionally invoked in order to inspect the heap or see what was going on in a stack frame.
Sure enough, I found that once I stopped using a step-thru debugger, my coding (and troubleshooting) skills improved rapidly. I spent less time in "endless loops" (fruitless troubleshooting sessions) and got to the source of problems quicker. I learned a lot about my own bad coding habits and developed a renewed appreciation for the importance of understanding a language at a level deeper than surface-syntax.
To this day, I avoid step-thru debugging, and find myself better off for it.
If you do a lot of step-thru debugging, try this as an exercise. For the next month, don't use a debugger. See if you can walk without crutches for a change. I'm betting you'll lose the limp in no time.
Monday, April 06, 2009
Should you cater to younger workers?
At the recent AIIM show in Philadelphia, there was a session called "Stump the Consultant" in which audience members got to put their toughest questions to a panel of three experts (Jesse Wilkins of Access Sciences, Lisa Welchman of WelchmanPierpoint, and my esteemed colleague Alan Pelz-Sharpe of CMS Watch). There were approximately 30 questions from 80 audience members (a very high rate of participation).
One of the questions was quite interesting, and it drew an interesting response.
The question came from someone working for an organization with two sizable constituencies of highly educated domain experts. (I'm being a bit vague, deliberately.) The organization's content-management infrastructure, the questioner said, was practically nonexistent, with many users still accessing content via very old-fashioned tools. There's an urgent need to overhaul the system and put some semblance of a "real" ECM solution in place. But there are two groups of users to satisfy: Senior domain specialists (older workers) who are comfortable with the old-fashioned tools and don't want to change; and younger workers with a strong preference for modern, browser-based apps. The question is, which group do you try to please? Which group can you least afford to alienate?
If you cater to the younger group, you risk alienating your most senior people (talented, expensive, hard-to-replace experts; people you don't want to lose to the competition; people with great political capital in the organization, who can perhaps defeat an IT initiative by pushing back hard). On the other hand, if you cater to the older group, you risk alienating the younger workers; and you risk keeping obsolete systems in place far longer than you should, making future replacement that much more difficult while also impeding business objectives, etc.
Lisa Welchman gave what I thought was a poignant and insightful answer. I'll try to paraphrase: She said, in essence, that if you're wise, you'll put a new system in place that serves the needs of all, but serves the wants of the younger generation of workers. And yes, you do this even though you know it will bring pushback from the more senior workers.
Lisa explained (in a much more articulate way than I can manage here) that older workers are less likely to quit their jobs than younger workers. They may grouse and grumble over a new system, but most will stay in their jobs rather than leave.
Younger workers, on the other hand, are more mobile and more inclined to go off on their own and find another job (or start a company) when conditions become frustrating. The older workers will retire; you'll eventually lose them anyway, no matter what system you put in place (or don't put in place). But if you fail to attract and nurture a talented, motivated corps of younger workers, the future of the company is put at risk.
So you do the right thing for the business. You put in a new system. One that will (hopefully) meet your current and future business needs while also satisfying as many users as possible. And if you have to choose between satisfying senior personnel versus generation-next, again you do the right thing for the business: You go with generation-next.
Lisa's answer resonated with me. It seemed to resonate, also, with the audience of 80 or so people. From my seat near the front of the room, I turned around and surveyed the tableau of faces. The majority of people looked to be over the age of 40. Everyone seemed to get it. Everybody seemed to understand that a company's best investment is not in its IT, but in its people; and not just in its older, more experienced workers, but in its older-workers-to-be. One thing you can't do is cater to workers who want to cling to the ways of the past, no matter how senior or how influential they may be.
As it turns out, I was only able to attend one session at this year's AIIM Expo (because I was working the CMS Watch booth the rest of the time). I'm glad it was this one.
Sunday, April 05, 2009
What Sun means to IBM
Sun Microsystems profit centers (from SEC filings)
Like a lot of my friends, I've been trying to figure out why the heck IBM would want to buy a burnt-out fail-whale like Sun Microsystems. Yes yes, Sun has some remarkably good technology, and I'm not putting it down. Sun's problem has never really been a lack of good technology. The company's problem has been a failure to monetize the technology. Big difference.
Sun's biggest problem at the moment (arguably) is brand deterioration. There's an odor of failure about the company, and it's a difficult odor to get rid of. It eventually taints the brand itself. I fear that's happened already with Sun.
I spent a lunch hour on the phone the other day with a friend of mine who works for a very large company that competes with Sun in a number of important markets. We tried to think of reasons for IBM to buy Sun, and couldn't come up with many.
- Storage + cloud-computing story: IBM doesn't need one.
- Servers and chipsets ("Computer Systems Products"): IBM doesn't need more of those.
- Operating system (Solaris): IBM has shown that it doesn't want to be in the OS business.
- Java: The platform itself doesn't make huge money for Sun (if it did, Sun wouldn't be for sale), and IBM would probably throw it over the wall to the community (for real, and in toto) rather than try to maintain and advance it internally. If IBM didn't give Java to the community, there could be antitrust implications (since so many of IBM's competitors rely so heavily on Java).
- Software: Sun middleware is so profitable it's not even a line item in the Annual Report. (Okay, that was unnecessarily sarcastic.) Sun middleware is not category-leading in any category I'm aware of. MySQL is interesting, but does IBM need a database? More to the point, is the income MySQL produces important to IBM? Is it important to the overall Sun deal?
So what does IBM actually get by buying Sun? Three things, I think. First, a customer list to sell into (for servers, storage, cloud services). That's the obvious one.
The second thing IBM gets by buying Sun (something I don't see many people talking about) is that nobody else gets to buy Sun. Certain IBM competitors who really do stand to benefit from a Sun purchase (e.g., Cisco) are denied easy entry into some of IBM's markets, if Big Blue takes Sun out.
A third thing IBM gets is 7000 patents. Not all of those patents are still active, and around 1600 were donated to open source a few years ago. But it's still a sizable portfolio. And we do know that IBM likes patents an awful lot.
Sadly, one thing IBM does not need, that Sun has way too many of, is employees. I see lots of unemployment coming out of this acquisition (if indeed it comes to pass).
A prediction: I think IBM will buy Sun, but people may be surprised at the low valuation of Sun. I also think Google will buy Twitter, and people will be surprised at the high valuation of Twitter. Sun, I fear, may turn out to be worth only a few Twitters.
And wouldn't that be something to tweet.
UPDATE: Late Sunday, the New York Times reported that talks between IBM and Sun had broken off. The deal is officially dead (for now). Neither party has indicated a willingness to continue negotiations. Where Sun goes from here is anyone's guess.
Saturday, April 04, 2009
Hell freezes over as big ECM vendors suddenly embrace interoperability
Jeff Potts at ecmarchitect.com has written an interesting post on the flurry of interest around the Content Management Interoperability Services (CMIS) standard, which was very much in evidence at the recent AIIM show. I was at the show, and I too detected a huge amount of interest around the new standard.
But it's not a standard yet (and won't be, until the end of this calendar year at the very earliest), which makes the sudden interest in it rather unusual, to say the least. I have seen a lot of industry standards come and go over the past 20 years. But I have seldom seen as much interest in a not-yet-released standard as is happening now with CMIS.
What strikes me as particularly odd is the huge interest in CMIS on the part of big ECM vendors like Open Text, EMC (Documentum), Microsoft, IBM, and Oracle, to name a few. Actually, IBM and Oracle don't surprise me very much, since they're pro-standards in general. But some of the other big players built their businesses on proprietary, standards-averse lock-in-ware. To go from a lock-in model to a posture of "let's stand up in public and salute the interoperability flag" seems downright weird to me.
I have it on good authority that Microsoft is a particularly enthusiastic proponent of CMIS, which is stranger still. This is a company that has done more (over the years) to oppose interoperability than any software company in existence. For them to be the out-front cheerleader on CMIS blows my mind (or what's left of it at this point).
What's super-weird, also, is the fact that almost all of the big companies pushing CMIS are involved in the JSR-283 (JCR 2) effort, which produced a final draft spec the other day. If you look at the Expert Group members on the project page for JSR-283 (scroll down to see the names), you'll see EMC, IBM, and most of the CMIS cheerleaders listed (except Microsoft).
The big CMIS supporters have "supported" JSR-170 and JSR-283 all along, but never once showed the kind of enthusiasm for those JSRs that they are now showing for CMIS. Those companies could have issued press releases, given seminars at AIIM, etc., in support of JCR, but never did. Somehow, interoperability (which is what JCR was and is about) wasn't important to these big ECM companies when JSR-170 was ratified. But now it is. And CMIS is a long way from ratified.
Does anyone else see anything strange in this picture, or is it just me? Mind you, I'm all for interoperability and I'm all for CMIS. I'm just struggling to understand why the sudden interest in interoperability on the part of companies who didn't give a damn 5 years ago.
Friday, April 03, 2009
Dot-NET to benefit from Sun sale?
At the AIIM show this week, I talked to a number of consultants and others who told tales of a recent uptick in .NET-based CMS business. One potential buyer wanted to know who the top .NET players in ECM are. There seemed to be a lot of interest, generally, in .NET-based content management. I can confirm that one software vendor whose .NET CMS has been around for years has been experiencing strong business in the recession.
It occurred to me that the much-talked-about (but slow to happen) acquisition of Sun by IBM, combined with the increasing entropy level around Java 7, may be giving IT decisionmakers a bit of stomach acid right now. Smaller shops with a significant existing investment in Microsoft infrastructure seem to see this as a good time to stop "thinking in Java." One consultant told me that a recent customer went with a .NET system based on the ability to get a usable deployment up and running quickly. The unspoken sentiment seemed to be "Who has time for Java EE?"
Bottom line: Acquisitions are disruptive. They put fence-sitters back on the fence, and they make others jump in unexpected directions. With Java, you also have uncertainty around the next edition. (Will there be a Java 7 any time soon? Doubtful.) These are not good things if you're Sun. But it's not a bad environment for Microsoft. Not bad at all.