At last month's OOPSLA 2008, there was an interesting presentation by Michael D. Bond on a technology called Melt, which aims to prevent out-of-memory errors in Java programs that harbor memory leaks (which is to say, 99 percent of large Java programs). The Intel-funded research paper, Tolerating Memory Leaks (by Bond and his thesis advisor, Kathryn S. McKinley, U. Texas at Austin), is well worth reading.
The key intuition is that reachability is an over-approximation of liveness, and thus if you can identify objects that are (by dint of infrequent use) putative orphans, you can move those orphan objects to disk and stop trying to garbage-collect them, thereby freeing up heap space and relieving the collector of unnecessary work. If the running program later tries to access the orphaned object, you bring it back to life. All of this is done at a very low level so that neither the garbage collector nor the running program knows that anything special is going on.
Melt's staleness-tracking logic and read barriers aren't activated until the running application approaches memory exhaustion, defined (arbitrarily) as 80-percent heap fullness. Rather than letting the program creep all the way to memory exhaustion (at which point garbage collection becomes so frequent that the program seems to grind to a halt), Melt moves stale objects to disk so that the running app doesn't slow down.
Purists will complain that sweeping memory leaks under the carpet like this is no substitute for actually fixing the leaks. In very large programs, however, it can be impractical to find and fix all memory leaks. (I question whether it's even provably possible to do so.) And even if you could find and fix all potential leaks in your program, what about the JRE? (Does it never leak?) What about external libraries? Are you going to go on a quest to fix other people's leaks? How will you know when you've found them all?
I believe in fixing memory leaks. But I'm also a pragmatist, and I think if your app is mission-critical, it can't hurt to have a safety net under it; and Melt is that safety net.
Good work, Michael.
Wednesday, November 05, 2008
Tuesday, November 04, 2008
Garbage-collection bug causes car crash

A few days ago I speculated that you could lose an expensive piece of hardware (such as a $300 million spacecraft) if a non-deterministic garbage-collection event were to happen at the wrong time.
It turns out there has indeed been a GC-related calamity: one in which $2 million was on the line. (To be fair, this particular calamity wasn't actually caused by garbage collection; it was caused by programmer insanity. But it makes for an interesting story nevertheless. Read on.)
The event in question involved a driverless vehicle (shown above) powered by 10K lines of C# code.
At codeproject.com, you'll find the in-depth post-mortem discussion of how a GC-related bug caused a driverless DARPA Grand Challenge vehicle to crash in the middle of a contest, eliminating the Princeton team from competition and dashing their hopes of winning a $2 million cash prize.
The vehicle had been behaving erratically on trial runs. A member of the team recalls: "Sitting in a McDonald's the night before the competition, we still didn't know why the computer kept dying a slow death. Because we didn't know why this problem kept appearing at 40 minutes, we decided to set a timer. After 40 minutes, we would stop the car and reboot the computer to restore the performance."
The team member described the computer-vision logic: "As the car moves, we call an update function on each of the obstacles that we know about, to update their position in relation to the car. Obviously, once we pass an obstacle, we don't need to keep it in memory, so everything 10 feet behind the car got deleted."
"On race day, we set the timer and off she went for a brilliant 9.8 mile drive. Unfortunately, our system was seeing and cataloging every bit of tumbleweed and scrub that it could find along the side of the road. Seeing far more obstacles than we'd ever seen in our controlled tests, the list blew up faster than expected and the computers died only 28 minutes in, ending our run."
The vehicle ran off the road and crashed.
The problem? Heap exhaustion. Objects that should have been garbage-collected weren't. Even though delete was being called on all "rear-view mirror" objects, those objects were still registered as subscribers to a particular kind of event. Hence they were never released, and the garbage collector passed them by.
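The failure mode translates directly to Java. A toy sketch (all names are illustrative, not the team's actual C# code): an obstacle subscribes to an event source and is never unsubscribed, so dropping your own reference to it does not make it collectible.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative reconstruction of the bug: the listener list keeps a strong
// reference to every subscribed obstacle, so "deleting" an obstacle without
// unsubscribing it leaves it reachable -- and the collector passes it by.
public class ListenerLeak {
    interface SensorListener { void onSensorUpdate(); }

    static class EventBus {
        private final List<SensorListener> listeners = new ArrayList<SensorListener>();
        void subscribe(SensorListener l)   { listeners.add(l); }
        void unsubscribe(SensorListener l) { listeners.remove(l); }
        int listenerCount()                { return listeners.size(); }
    }

    static class Obstacle implements SensorListener {
        public void onSensorUpdate() { /* update position relative to the car */ }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        Obstacle o = new Obstacle();
        bus.subscribe(o);
        o = null;      // "deleted," as far as the application is concerned...
        System.gc();   // ...but the bus still strongly references the obstacle
        System.out.println("listeners still registered: " + bus.listenerCount());
        // The fix is simply to unsubscribe before dropping the last reference.
    }
}
```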
In Java, you could try the tactic of making rear-view-mirror objects weakly reachable, but eventually you're bound to drive the car onto a shiny, pebble-covered beach or some other kind of terrain that causes new objects to be created faster than they can possibly be garbage-collected, and then you're back to the same problem as before. (There are lots of ways out of this dilemma. Obviously, the students were trying a naive approach for simplicity's sake. Even so, had they not made the mistake of keeping objects bound to event listeners, their naive approach no doubt would have been good enough.)
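A minimal sketch of that weak-reachability tactic (Obstacle is an illustrative stand-in), with the caveat baked in:

```java
import java.lang.ref.WeakReference;

// Sketch: hold "rear-view mirror" objects only weakly, so the collector is
// free to reclaim them once no strong reference remains. The catch: it
// reclaims them at its own convenience, not on a schedule -- which is why
// the tactic still fails once allocation outruns collection.
public class WeakObstacles {
    static class Obstacle { double x, y; }

    public static void main(String[] args) {
        Obstacle strong = new Obstacle();
        WeakReference<Obstacle> weak = new WeakReference<Obstacle>(strong);
        System.out.println(weak.get() != null);  // true: a strong ref still exists
        strong = null;
        System.gc();  // a hint only; the weak ref *may* now be cleared
        System.out.println("cleared: " + (weak.get() == null));
    }
}
```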
As I said, this wasn't really a GC-caused accident. It was caused by programmer error. Nevertheless, it's the kind of thing that makes you stop and think.
Monday, November 03, 2008
Why 64-bit Java is slow
In an interesting post at the WebSphere Community Blog, Andrew Spyker explains why, when you switch from 32-bit Java to a 64-bit runtime environment, you typically see speed go down 15 percent and memory consumption go up by around 50 percent. The latter is explained by the fact that addresses are simply bigger in 64-bit-land, and complex data structures use a lot of 64-bit values even if they only need 32-bit values. Performance drops because although addresses have gotten wider, processor memory caches have not grown in overall kilobytes. Thus you are bound to see things drop out of L1 and L2 cache more often. Hence cache misses go up and speed goes down.
Why, then, would anyone invest in 64-bit machines if the 64-bit JVM is going to give you an immediate performance hit? The answer is simple. The main reason you go with 64-bit architecture is to address a larger memory space (and flow more bytes through the data bus). In other words, if you're running heap-intensive apps, you have a lot to gain by going 64-bit. If you have an app that needs more than around 1.5 GB of RAM, you have no choice.
Why 1.5GB? It might actually be less than that. On a 4GB Win machine, the OS hogs 2GB of RAM and will only let applications have 2GB. The JVM, of course, needs its own RAM. And then there's the heap space within the JVM; that's what your app uses. It turns out that the JVM heap has to be contiguous (for reasons related to garbage collection). The largest piece of contiguous heap you can get, after the JVM loads (and taking into account all the garbage that has to run in the background in order to make Windows work), is between 1.2GB and 1.8 GB (roughly) depending on the circumstances.
To get more heap than that means either moving to a 64-bit JVM or using Terracotta. The latter (if you haven't heard of it) is a shared-memory JVM clustering technology that essentially gives you unlimited heap space. Or should I say, heap space is limited only by the amount of disk space. Terracotta pages out to disk as necessary. A good explanation of how that works is given here.
But getting back to the 64-bit-memory consumption issue: This issue (of RAM requirements for ordinary Java apps increasing dramatically when you run them on a 64-bit machine) is a huge problem, potentially, for hosting services that run many instances of Java apps for SaaS customers, because it means your scale-out costs rise much faster than they should. But it turns out there are things you can do. IBM, in its JVM, uses a clever pointer-compression scheme to (in essence) make good use of unused high-order bits in a 64-bit machine. The result? Performance is within 5 percent of 32-bit and RAM growth is only 3 percent. Graphs here.
Oracle has a similar trick for BEA's JRockit JVM, and Sun is just now testing a new feature called Compressed oops (ordinary object pointers). The latter is supposedly included in a special JDK 6 "performance release" (survey required). You have to use special command-line options to get the new features to work, however.
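For the record, the switches look roughly like this (the jar name is hypothetical, and exact spellings vary by vendor and release):

```shell
# HotSpot: the JDK 6 "performance release" enables compressed ordinary
# object pointers with an -XX switch (heaps up to roughly 32 GB benefit):
java -XX:+UseCompressedOops -Xmx2g -jar yourApp.jar

# JRockit's equivalent switch is -XXcompressedRefs (spelling varies by release).
```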
Anyway, now you know why 64-bit Java can be slow and piggish. Everything's fatter in 64-bit-land.
For information about large-memory support in Windows, see this article at support.microsoft.com. Also consult this post at sinewalker.
Sunday, November 02, 2008
Java 1.4.2 joins the undead
Java 1.4.2 died last week. According to Sun's "End of Service Life" page, Java 1.4.2 went EOSL last Thursday. The only trouble is, it's still moving.
Java 5 (SE) was released in 2004 and Java 6 has been out since 2006. Java 5 will, in fact, also be at EOSL in less than a year. (You might call it the Java "Dead Man Walking" Edition.) And yet, if you do a Google search on any of the following, guess what you get?
java.lang.Object
java.lang.Class
java.lang.Exception
java.lang.Throwable
java.lang.Runtime
java.awt.Image
java.io.File
java.net.URL
JComponent
JFrame
If you do a Google search on any one of these, the very first hit (in every case) is a link to Sun's Javadoc for the 1.4.2 version of the object in question.
A year from now (when Java 5 hits the dirt) I wonder how many of these 10 searches will still take you to 1.4.2 Javadoc? (Remember, Java 5 has been out for more than four years and still doesn't outrank 1.4.2 in Google searches.) I'm guessing half of them. What do you think?
Thursday, October 30, 2008
What's the strangest thing in Java?
There's an interesting discussion going on at TheServerSide.com right now. Someone asked "What’s the strangest thing about the Java platform?"
I can think of a lot of strange things about Java (space precludes a full enumeration here). Offhand, I'd say one of the more disturbing aspects of Java is its ill-behaved (unpredictable) System.gc( ) method.
According to Sun, System.gc( ) is not 100% reliable: "When control returns from the method call, the virtual machine has made its best effort to recycle all discarded objects." Notice the wording ("best effort"). There is absolutely no guarantee that gc() will actually force a garbage collection. This is well known to anybody who has actually tried to use it in anger.
The problem is, in the rare case when you actually do need to use gc(), you really do need it to work (or at least behave in a well-understood, deterministic way). Otherwise you can't make any serious use of it in a mission-critical application. Not to put too fine a point on it, but: If a method is not guaranteed to do what you expect it to do, then it seems to me the method becomes quite dangerous. I don't know about you, but I rely on System calls to work. If you can't rely on a System call, what can you rely on?
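A small probe makes the point (numbers and behavior will vary by JVM; nothing here is guaranteed):

```java
// Probe of System.gc()'s "best effort" semantics. The JVM may reclaim the
// unreachable garbage immediately, later, or not at all. The printed numbers
// vary from run to run and JVM to JVM -- which is exactly the point.
public class GcHint {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        byte[][] junk = new byte[10][];
        for (int i = 0; i < 10; i++) junk[i] = new byte[1024 * 1024]; // ~10 MB
        long before = rt.freeMemory();
        junk = null;                 // the 10 MB is now unreachable...
        System.gc();                 // ...and this *requests* its collection
        long after = rt.freeMemory();
        System.out.println("free before gc(): " + before + ", after: " + after);
    }
}
```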
Suppose you've written a reentry program for a spacecraft, and you have an absolute need for a particular routine (e.g., to fire retro-rockets) to execute, without interruption, starting at a particular point in time. The spacecraft will be lost and the mission will fail (at a cost to taxpayers of $300 million) if the retro-rockets don't fire on time or don't shut off on time.
Now imagine that just as your program's fireRetroRockets() method is entered, the JVM decides to "stop the world" and do a garbage-collect.
Houston, we have a . . . well, you know.
The point is, if you could call System.gc( ) ahead of time, and count on it doing exactly what you want it to do (collect garbage immediately, so that an uncommanded GC won't happen at the wrong moment), you could save the mission. (Arguably.)
Obviously, this example is somewhat academic. No one in his right mind would actually use Java to program a spacecraft, in real life.
And that, I think, says a great deal about the Java platform.
Wednesday, October 29, 2008
Chaos in query-land
I wrote a micro-rant the other day at CMSWatch.com on the need for an industry-standard syntax for plain-language keyword search. I, for one, am tired of learning a different search syntax for every site I go to. I find myself naively assuming (like an idiot) that every search engine obeys Google syntax. Not true, of course. It's a free-for-all out there. For example, not every search engine "ANDs" keywords together by default. Even at this simple level (a two-keyword search!) users are blindsided by products that behave unpredictably.
At any rate, Lars Trieloff pointed out to me yesterday that Apache Jackrabbit (the Java Content Repository reference implementation, which underpins Apache Sling) implements something called GQL, which is colloquially understood to mean Google Query Language, although in fact it means GQL. It does not implement Google's actual search syntax in comprehensive detail. It merely allows Jackrabbit to support plaintext queries in a Google-like way, so that if you are one of those people (like me) who automatically assumes that any given search widget will honor Google grammar, you won't be disappointed.
It turns out, the source code for GQL.java is remarkably compact, because really it's just a thin linguistic facade over an underlying XPath query facility. GQL.java does nothing more than transliterate your query into XPath. It's pretty neat, though.
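A drastically simplified sketch of that transliteration idea (not the actual GQL.java source, which also handles quoted phrases, OR, and field prefixes like "title:"): AND the keywords together, Google-style, and emit XPath.

```java
// Toy version of the GQL idea: turn a Google-style keyword query into a
// JCR XPath query by ANDing a jcr:contains() predicate per keyword.
public class ToyGql {
    public static String toXPath(String query) {
        StringBuilder sb = new StringBuilder("//*[");
        String[] terms = query.trim().split("\\s+");
        for (int i = 0; i < terms.length; i++) {
            if (i > 0) sb.append(" and ");
            sb.append("jcr:contains(., '").append(terms[i]).append("')");
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        System.out.println(toXPath("apache jackrabbit"));
        // //*[jcr:contains(., 'apache') and jcr:contains(., 'jackrabbit')]
    }
}
```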
I'm all for something like GQL becoming, say, an IETF RFC, so that vendors and web sites can begin implementing (and advertising) support for Google-like syntax. First there will need to be a name change, though. Google already uses "GQL" to describe a SQL-like language used in the Google App Engine. There's also a Graphical Query Language that has nothing to do with either Jackrabbit or Google.
See what I mean? It's chaos out there in query-land.
Tuesday, October 28, 2008
Pixel Bender plug-in for Photoshop

When I first heard about Adobe's Pixel Bender technology, I became very excited. An ActionScript-based pixel shader API? What could be more fun than that? (By now you know what my social life must be like.)
When I saw that PB was a Flash-only technology, my enthusiasm got tamped down a bit. Later, I learned that PB would be supported in After Effects, which had me scratching my chin again. (I've written AE plug-ins before. It's much less punishing than writing Photoshop plug-ins.)
Now it turns out there will be a Pixel Bender plug-in for the next version of Photoshop. According to Adobe's John Nack, "Pixel Bender won't be supported in the box in the next version of Photoshop, but we plan to offer a PB plug-in as a free download when CS4 ships. Therefore it's effectively part of the release."
This is great news for those of us who like to peek and poke pixels but can't be bothered to use the Byzantine C++ based Photoshop SDK.
In case you're wondering what you can do with Pixel Bender, some nice sample images and scripts can be found here. The image shown above was created with this 60-line script.
Nice.
Monday, October 27, 2008
Java 7 gets "New" New I/O package
I've always hated Java I/O with all its convoluted, Rube-Goldbergish special classes with special knowledge of special systems, and the legacy readLine( ) type of garbage that brings back so many bad memories of the Carter years.
With JSR 203 (to be implemented in Java SE 7), we get a new set of future legacy methods. This is Sun's third major attempt in 13 years to get I/O right. And from what I've seen, it doesn't look good. (Examples here.) My main question at this point is where they got that much lipstick.
The main innovation is the new Path object, which seems to be a very slightly more abstract version of File. (This is progress?) You would think any new I/O library these days would make heavy use of URIs, URLs, and schemes (file:, http:, etc.) and lessons learned in the realization of concepts like REST, AJAX, and dependency injection. No such luck. Instead we have exotic new calls like FileSystem.getRootDirectories() and DirectoryEntry.newSeekableByteChannel(). It's like we've learned nothing at all in the last 20 years.
When I want to do I/O, I want to be able to do something like
dataSrc = new DataGetter( );
dataSrc.setPref( DataGetter.EIGHTBITBYTES );
dataSrc.setPref( DataGetter.SLURPALL );
data = dataSrc.getData( uri );
and be done with it. (And by the way, let me pass a string for the URI, if I want to. Don't make me create a special object.)
I don't want to have to know about newlines, buffering, or file-system obscurata, unless those things are terribly important to me, in which case I want to be able to inject dependencies at will. But don't make me instantiate totally different object types for buffered vs. non-buffered streams, and all the rest. Don't give me a million flavors of special objects. Just let me pass hints into the DataGetter, and let the DataGetter magically grok what I'm trying to do (by making educated guesses, if need be). If I want a special kind of buffering, filtering, encoding, error-handling, etc., let me craft the right cruftball of flags and constants, and I'll pass them to the DataGetter. Otherwise, there should be reasonable defaults for every situation.
I would like a file I/O library that is abstract enough to let me read one bit at a time, if I want; or 6 bits at a time; or 1024 bits, etc. To me, bits are bits. I should be able to hand parse them if I want, in the exact quantities that I want. If I'm doing some special type of data compression and I need to write 13 bits to output, then 3 bits, then 12, then 10, and so on, I should be able to do that with ease and elegance. I shouldn't have to stand on my head or instantiate exotic objects for reading, buffering, filtering, or anything else.
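For what it's worth, the bit-granular writer I'm describing is maybe twenty lines. A sketch (hypothetical API; nothing like it exists in java.io):

```java
import java.io.ByteArrayOutputStream;

// Sketch of bit-granular output: write values of arbitrary widths
// (13 bits, then 3, then 12...) into a byte stream, MSB first.
public class BitWriter {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();
    private int buffer = 0;   // bits accumulated but not yet flushed
    private int count = 0;    // how many bits are in the buffer

    public void writeBits(int value, int width) {
        for (int i = width - 1; i >= 0; i--) {
            buffer = (buffer << 1) | ((value >> i) & 1);
            if (++count == 8) { out.write(buffer); buffer = 0; count = 0; }
        }
    }

    public byte[] toByteArray() {   // pad the final partial byte with zeros
        if (count > 0) { out.write(buffer << (8 - count)); buffer = 0; count = 0; }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        BitWriter w = new BitWriter();
        w.writeBits(0x15, 5);   // five bits:  10101
        w.writeBits(0x3, 3);    // three bits: 011
        System.out.println(Integer.toBinaryString(w.toByteArray()[0] & 0xFF));
        // prints 10101011
    }
}
```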
I could write a long series of articles on what's wrong with Java I/O. But I don't look forward to revising that article every few years as each "new" I/O package comes out. Like GUI libraries and 2D graphics, this is something Sun's probably never going to get right. It's an area that begs for intervention by fresh talent, young programmers who are self-taught (not infected by orthodoxies acquired in college courses) and have no understanding at all of legacy file systems, kids whose idea of I/O is HTTP GET. Until people with "beginner's mind" get involved, there's no hope of making Java I/O right.
Friday, October 24, 2008
Enterprise Software Feared Overpriced
I'm being sardonic with that headline, obviously, but I have to agree with Tim Bray, who said in passing the other day: "I just don’t believe that Enterprise Software, as currently priced, has much future, in the near term anyhow."
I take this to mean that the days of the seven-figure software deal (involving IBM, Oracle, EMC, Open Text, etc.) may not exactly be over, but certainly those kinds of sales are going to be vanishingly rare, going forward.
I would take Bray's statement a step further, though. He's speaking to the high cost of enterprise software itself (or at least that's how I interpret his statement). Enterprise systems take a lot of manpower to build and maintain. The budget for a new system rollout tends to break out in such a way that the software itself represents only 10 to 50 percent of the overall cost. In other words, software cost is a relatively minor factor.
Therefore I would extend Bray's comment to say that old-school big-budget Enterprise Software projects involving a cast of thousands, 12 months of development and testing, seven-figure software+services deals, etc., are on the way out. In its place? Existing systems! Legacy systems will be maintained, modified, built out as necessary (and only as necessary) using agile methodologies, high-productivity tools and languages (i.e., scripting), RESTful APIs, and things that make economic sense.
There's no room any more for technologies and systems that aren't provably (and majorly) cost-effective. IBM, Oracle, EMC, listen up: Million-dollar white elephants are on the endangered species list.
Wednesday, October 22, 2008
Flash-drive RAID
I stumbled upon the floppy-drive RAID story (see previous blog) as part of a Google search to see if any such thing as a memory stick (Flash-drive) RAID array is available for Vista. No such luck, of course. But there are quite a few blogs and articles on the Web by Linux users who have successfully created ad-hoc Flash RAIDs from commodity USB hubs and memory sticks. (I recommend this June 2008 article from the Linux Gazette and this even more entertaining, not to mention better-illustrated, piece by Daddy Kewl. Definitely do not fail to read the latter!) Linux supports this kind of madness natively.
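The Linux recipe in those articles boils down to a few commands (device names assumed; check dmesg to see where your sticks actually landed):

```shell
# Stripe two USB sticks into one block device with the md driver:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0                 # any filesystem will do
mount /dev/md0 /mnt/flashraid
```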
MacOS is even better for this. Evidently you can plug two sticks into a PowerBook's USB ports and configure them as a RAID array with native MacOS dialogs. (Details here.) How I envy Mac users!
Tuesday, October 21, 2008
Floppy-disk RAID array

This has got to be the funniest thing I've seen all year. And trust me, this has been a funny year.
Daniel Blade Olson, a man after my own heart (even if that phrase doesn't translate well into foreign languages...), has rigged a bunch of floppy drives to form a RAID array. His disturbing writeup is here.
Saturday, October 18, 2008
Fast pixel-averaging
I don't know why it took me so long to realize that there's an easy, fast way to obtain the average of two RGB pixel values. (An RGB pixel is commonly represented as a 32-bit integer, with 8 bits per channel. Let's assume the top 8 bits aren't used.)
Properly averaging the red, green, and blue components of two pixels requires parsing those 8-bit values out of each pixel, adding them channel by channel, dividing each sum by two, and crafting a new pixel out of the new red, green, and blue values. Or at least that's the naive way of doing things. In code (I'll show it in JavaScript, but it looks much the same in C or Java):
// The horribly inefficient naive way:
function average( a,b ) {
var REDMASK = 0x00ff0000;
var GREENMASK = 0x0000ff00;
var BLUEMASK = 0x000000ff;
var aRed = a & REDMASK;
var aGreen = a & GREENMASK;
var aBlue = a & BLUEMASK;
var bRed = b & REDMASK;
var bGreen = b & GREENMASK;
var bBlue = b & BLUEMASK;
var aveRed = (aRed + bRed) >> 1;
var aveGreen = (aGreen + bGreen) >> 1;
var aveBlue = (aBlue + bBlue) >> 1;
return aveRed | aveGreen | aveBlue;
}
That's a lot of code to average two 32-bit values, but remember that red, green, and blue values (8 bits each) have to live in their own swim lanes. You can't allow overflow.
Here's the much cleaner, less obvious, hugely faster way:
// the fast way:
var MASK7BITS = 0x00fefeff;
function ave( a,b ) {
a &= MASK7BITS;
b &= MASK7BITS;
return (a+b)>>1;
}
The key intuition here is that you want to clear the bottom bit of the red and green channels in order to make room for overflow from the green and blue "adds."
Of course, in the real world, you would inline this code rather than use it as a function. (In a loop that's processing 800 x 600 pixels you surely don't want to call a function hundreds of thousands of times.)
Similar mask-based techniques can be used for adding and subtracting pixel values. Overflow is handled differently, though (left as an exercise for the reader).
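For the impatient, here's one way the saturating-add case can go. This is a sketch using the classic carry-detection masks (clamping each channel at 255 instead of wrapping), and as before it assumes the top 8 bits are unused:

```javascript
// Saturating per-channel add: clamps red, green, and blue at 255.
function addSaturate( a, b ) {
  var LOW7 = 0x7f7f7f;                      // low 7 bits of each channel
  var low  = (a & LOW7) + (b & LOW7);       // adds can't cross channels
  var sum  = low ^ ((a ^ b) & 0x808080);    // per-channel sum, mod 256
  // A channel overflowed iff both high bits were set, or one was set
  // and the low-7-bit add carried into bit 7:
  var carry = ((a & b) | ((a | b) & ~sum)) & 0x808080;
  var clamp = (carry << 1) - (carry >>> 7); // 0xff where carry, else 0x00
  return sum | clamp;
}
```

Subtraction works the same way, except you detect borrows instead of carries and clamp at zero.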
Friday, October 17, 2008
Loading an iframe programmatically
This is a nasty hack. It's so useful, though. So useful.
Suppose you want to insert a new page (a new window object and DOM document) into your existing page. Not a new XML fragment or subtree on your current page; I'm talking about a whole new page within a page. An iframe, in other words.
The usual drill is to create an <iframe> node using document.createElement( ) and attach it to the current page somewhere. But suppose you want to populate the iframe programmatically. The usual technique is to start building DOM nodes off the iframe's contentDocument node using DOM methods. Okay, that's fine, but it's a lot of drudgery. (I'm sweating already.) At some point you're probably going to start assigning string values to body.innerHTML (or whatever). But then you're into markup-stringification hell. (Is there a JavaScript programmer among us who hasn't frittered away major portions of his or her waking life escaping quotation marks and dealing with line-continuation-after-line-continuation in order to stringify some hellish construction, whether it's a piece of markup or an argument to RegExp( ) or whatever?)
Well. All of that is best left to Internet Explorer programmers. If you're a Mozilla user, you can use E4X as your "get out of stringification-jail FREE" card, and you can use a data URL to load your iframe without passing through DOM hell.
Suppose you want your iframe to contain a small form. First, declare it as an XML literal (which you can do as follows, using E4X):
myPage = <html>
<body>
<form action="">
... a bunch of markup here
</form>
</body>
</html>;
Now create an iframe to hold it:
iframe = top.document.createElement( "iframe" );
Now (the fun part...) you just need to populate the iframe, which you can do in one of two ways. You can attach the iframe node to the top.document, then assign myPage.toXMLString() to iframe.contentDocument.body, or (much more fun) you can convert myPage to a data URL and then set the iframe's src attribute to that URL:
// convert XML object to data URL
function xmlToDataURL( theXML ) {
var preamble = "data:text/html;charset=utf-8,";
var octetString = escape( theXML.toXMLString( ) );
return preamble + octetString;
}
dataURL = xmlToDataURL( myPage );
iframe.setAttribute( "src", dataURL ); // load frame
// attach the iframe to your current page
top.document.body.insertBefore( iframe ,
top.document.body.firstChild );
A shameless hack, as I say. It works fine in Firefox, though, even with very large data URLs. I don't recall the exact size limit on data URLs in Mozilla, but I seem to remember that it's megabytes. MSIE, of course, has some wimpy limit like 4096 characters (maybe it's changed in IE8?).
In my opinion, all browsers SHOULD support unlimited-length data URLs, just like they SHOULD support E4X and MUST support JavaScript. Notwithstanding any of this, Microsoft MAY go to hell.
Saturday, October 11, 2008
Russians use graphics card to break WiFi encryption
The same Russians who got in a lot of trouble a few years ago for selling a small program that removes password protection from locked PDF files (I'm talking about the guys at Elcomsoft) are at it again. It seems this time they've used an NVidia graphics card GPU to crack WiFi WPA2 encryption.
They used the graphics card, of course, for sheer number-crunching horsepower. The GeForce 8800 GTX delivers something like 300 gigaflops of crunch, which I find astonishing (yet believable). Until now, I had thought that the most powerful chipset in common household use was the Cell 8-core unit used in the Sony Playstation 3 (which weighs in at 50 to 100 gigaflops). Only 6 of the PS/3's processing units are available to programmers, though, and the Cell architecture is meant for floating-point operations, so for all I know the GeForce 8800 (or its relatives) might be the way to go if you need blazing-fast integer math.
Even so, it would be interesting to know what you could do with, say, an 8-box cluster of overclocked PS/3s. Simulate protein-ribosome interactions on an atom-by-atom basis, perhaps?
Decimal to Hex in JavaScript
There's an easy way to get from decimal to hexadecimal in JavaScript:
function toHex( n ) { return n.toString( 16 ); }
The string you get back may not look the way you want, though. For example, toHex(256) gives "100", when you're probably wanting "0x0100" or "0x00000100". What you need is front-padding. Just the right amount of front-padding.
// add just the right number of 'ch' characters
// to the front of string to give a new string of
// the desired final length 'dfl'
function frontPad( string, ch, dfl ) {
var array = new Array( ++dfl - string.length );
return array.join( ch ) + string;
}
Of course, you should ensure that 'dfl' is not smaller than string.length, to prevent a RangeError when allocating the array.
If you're wondering why "++dfl" instead of plain "dfl", stop now to meditate. Or run the code until enlightenment occurs.
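To make both points concrete, here's a guarded variant (call it frontPadSafe; the name is mine) that spells out the off-by-one and refuses to underflow:

```javascript
// frontPadSafe: like frontPad, but guards against dfl < string.length.
// Array( n + 1 ).join( ch ) yields exactly n copies of ch, which is why
// the original bumps dfl with ++ before subtracting.
function frontPadSafe( string, ch, dfl ) {
  var padCount = dfl - string.length;
  if ( padCount <= 0 )
    return string;                  // nothing to pad; avoids a RangeError
  return new Array( padCount + 1 ).join( ch ) + string;
}
```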
At this point you can do:
function toHex( n ) {
return "0x" + frontPad( n.toString( 16 ), "0", 8 );
}
toHex( 256 ) // gives "0x00000100"
If you later need to use this value as a number, no problem. You can apply any numeric operation except addition on it with perfect safety. Addition will be treated as string concatenation whenever either operand is a string (that's the standard JS interpreter behavior), so if you need to do "0x00000100" + 4, you have to cast the hex-string to a number.
n = toHex( 256 ); // "0x00000100"
typeof n // "string"
isNaN( n ) // false
x = n * n; // 65536
x = n + 256 // "0x00000100256"
x = Number( n ) + 256 // 512
Wednesday, October 08, 2008
$20 touchscreen, anyone?
Touchless is one of those ideas that's so obvious, yet so cool, that after you hear it, you wonder why someone (such as yourself) didn't think of it ages ago. Aim a webcam at your screen; have software that follows your fingers around; move things around in screen space in response to your finger movements. Voila! Instant touch-screen on the cheap.
Mike Wasserman came up with Touchless as a college project while attending Columbia University. He's now with Microsoft. The source code is free.
Awesome.
Saturday, October 04, 2008
Accidental assignment
People sometimes look at my JavaScript and wonder why there is so much "backwards" notation:
if ( null == arguments[ 0 ] )
return "Nothing to do";
if ( 0 == array.length )
break;
And so on, instead of putting the null or the zero on the right side of the '==' the way everyone else does.
The answer is, I'm a very fast typist and it's not uncommon for me to type "s" when I meant to type "ss," or "4" when I meant to type "44," or "=" when I meant to type "==".
In JavaScript, if I write the if-clause in the normal (not backwards) way, and I mistakenly type "=" for "==", like so...
if ( array.length = 0 )
break;
... then of course I'm going to destroy the contents of the array (because in JavaScript, you can wipe out an array by setting its length to zero) and my application is going to behave strangely or throw an exception somewhere down the line.
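A minimal demonstration (safe to run; it only sacrifices a throwaway array):

```javascript
var array = [ 1, 2, 3 ];
if ( array.length = 0 ) {      // accidental assignment, not a comparison
  // never reached: the assignment expression evaluates to 0, which is falsy
}
// The damage is done whether or not the branch ran:
// array.length is now 0 -- the array has been silently emptied.
```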
This general type of programmer error is what I call "accidental assignment." Note that I refer to it as a programmer error. It is not a syntactical error. The interpreter will be only too happy to assign a value to a variable inside an if-clause, if you tell it to. And it may be quite some time before you are able to locate the "bug" in your program, because at runtime the interpreter will dutifully execute your code without putting messages in the console. If an exception is eventually thrown, it could be in an operation that's a thousand lines of code away from your syntactical blunder.
So the answer is quite simple. If you write the if-clause "backwards," with zero on the left, an accidental assignment will be caught right away by the interpreter, and the resulting console message will tell you the exact line number of the offending code, because you can't assign a value to zero (or to null, or to any other baked-in constant).
In an expression like "null == x" we say that null is not Lvaluable. The terms "l-value" and "r-value" originally meant left-hand value and right-hand value. But when Kernighan and Ritchie created C, the meaning changed, to become more precise. Today an Lvalue is understood to be a locatable value, something that has an address in memory. A compiler will allocate an address for each named variable at compile-time. The value stored in this address (its r-value) is generally not known until runtime. It's impossible, in any case, to refer to an r-value by its address if it hasn't been assigned to an l-value, hence the compiler won't even try to do so and you'll get an error if you try to compile "null = x".
On the other hand, "x = null" is perfectly legal, and in K&R days a C compiler would obediently compile such a statement whether it was in an if-clause or not. This actually resulted in some horrendously costly errors in the real world, and as a result, today virtually every modern compiler will at least warn about a bare assignment inside an if-clause. (Actually I can think of an exception. But let's save that for another time.) If you really mean to do an assignment inside an if, you silence the warning by wrapping the assignment in an extra set of parentheses.
Not so with JavaScript, a language that (like K&R C) assumes that the programmer knows what he or she is doing. People unwittingly create accidental assignments inside if-clauses all the time. It's not a syntactical error, so the interpreter doesn't complain. Meanwhile you've got a very difficult situation to debug, and the language itself gets blamed. (A poor craftsman always blames his tools.)
As a defensive programming technique, I always put the non-Lvaluable operand on the left side of an equality operator, and that way if I make a typing mistake, the interpreter slaps me in the face at the earliest opportunity rather than spitting in my general direction some time later. It's a defensive programming tactic that has served me well. I'm surprised more people don't do it.
Thursday, October 02, 2008
Wednesday, October 01, 2008
Serialize any POJO to XML
Ever since Java 1.4.2 came out, I've been a big fan of java.beans.XMLEncoder, which lets you serialize runtime objects (including the values of instance variables, etc.) as XML, using just a few lines of code:
This is an extraordinarily useful capability. You can create an elaborate Swing dialog (for example) containing dozens of nested widgets, then serialize the whole thing as a single XML file, capturing its state, using XMLEncoder (then deserialize it later, in another time and place, perhaps).
XMLEncoder e = new XMLEncoder(
new BufferedOutputStream(
new FileOutputStream("Test.xml")));
e.writeObject(new JButton("Hello, world"));
e.close();
A favorite trick of mine is to serialize an application's key objects ahead of time, then JAR them up and instantiate them at runtime using XMLDecoder. With a Swing dialog, this eliminates a ton of repetitive container.add( someWidget) code, and similar Swing incantations (you know what I'm talking about). So it cleans up your code incredibly. It also makes Swing dialogs (and other objects) declarative in nature; they become static XML that you can edit separately from code, using XML tools. At runtime, of course, you can use DOM and other XML-manipulation technologies to tweak serialized objects before instantiating them. (Let your imagination run.)
As an aside: I am constantly shocked at how many of my Java-programming friends have never heard of this class.
If there's a down side to XMLEncoder, it's that it will only serialize Java beans, or so the documentation says, but actually the documentation is not quite right. (More on that in a moment.) With Swing objects, for example, XMLEncoder will serialize widgets but not any event handlers you've set on them. At runtime, you end up deserializing the Swing object, only to have to hand-decorate it with event handlers before it's usable in your application.
There's a solution for this, and again it's something relatively few Java programmers seem to know anything about. In a nutshell, the answer is to create your own custom persistence delegates. XMLEncoder will call the appropriate persistence delegate when it encounters an object in the XML graph that has a corresponding custom delegate.
This is (need I say?) exceptionally handy, because it provides a transparent, interception-based approach to controlling XMLEncoder's behavior, at a very fine level of control. If you have a Swing dialog that contains 8 different widget classes (some of them possibly containing multiple nested objects), many of which need special treatment at deserialization time, you can configure an XMLEncoder instance to serialize the whole dialog in just the fashion you need.
The nuts and bolts of this are explained in detail in this excellent article by Philip Milne. The article shows how to use custom persistence delegates to make XMLEncoder serialize almost any Java object, not just beans. Suffice it to say, you should read that article if you're as excited about XMLEncoder as I am.
Monday, September 29, 2008
A number that's not equal to itself
All this time, I've been thinking NaN is not a number. What an idiot I've been.
In JavaScript:
typeof NaN == 'number' // true
And yet of course, NaN == NaN is false.
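This quirk is actually useful. Because NaN is the only value that's not equal to itself, self-inequality is a watertight NaN test, whereas isNaN( ) coerces its argument and happily reports true for non-numbers:

```javascript
// The only JavaScript value for which x !== x is NaN itself.
function isReallyNaN( x ) {
  return x !== x;
}

isReallyNaN( NaN );    // true
isReallyNaN( "foo" );  // false (but isNaN( "foo" ) is true!)
```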
There you go. Amaze your friends.
Wednesday, September 24, 2008
Great hack: PNG-compressed text

I only recently stumbled across what's got to be the most outlandish scripting hack I've seen in a long time. Jacob Seidelin tells of how he managed to stuff text into a PNG image, then get it back out with the <canvas> getImageData( ) method. What's neat about it? Mainly the free compression you get with the PNG format. For example, when Jacob put the 124kb Prototype library into PNG format, it shrank to 30kb. Of course, it makes for an awful-looking image (see above), which one might think of as a degenerate case of steganography, i.e. embedded data in an image, minus the image.
The trick doesn't work for all browsers, since you need canvas for it to work. And it's kind of pointless given that you can use gzip instead. But it's kind of neat in that it opens the door to browser steganography, embedding of private metadata, and potentially lots of other cool things.
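The read-back half of the trick is easy to sketch apart from the browser. getImageData( ) hands you a flat array of RGBA bytes; recovering the text is just a walk over the color channels. (I'm assuming a zero byte as terminator here; Jacob's actual encoding may differ.)

```javascript
// Decode text from an RGBA byte array shaped like ImageData.data:
// one character code per color channel, alpha skipped, 0 terminates.
function pixelsToText( data ) {
  var out = "";
  for ( var i = 0; i < data.length; i += 4 ) {   // step one RGBA pixel
    for ( var j = 0; j < 3; j++ ) {              // R, G, B only
      var code = data[ i + j ];
      if ( code === 0 )
        return out;                              // hit the terminator
      out += String.fromCharCode( code );
    }
  }
  return out;
}
```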
Tuesday, September 23, 2008
JavaScript beautifiers suck
I keep looking for an online code beautifier that will convert my distinctly simian-looking Greasemonkey scripts to properly indented, formatted source code. My current favorite code editor (Notepad) doesn't provide proper code formatting. I know what you're thinking: Why aren't you using a proper IDE in the first place? Then you wouldn't have this problem! Well, first of all, I am thinking of upgrading to Wordpad. But it doesn't do formatting either. Second of all, I haven't found a JavaScript IDE worthy of the name, which is why I use Notepad. More on that in a minute.
I spent an hour the other day looking for an online beautifier that would do a makeover on my ugly JavaScript. What I found is that most people point either to this one or this one. (I tried others as well.) They either don't keep my existing newlines, or don't indent "if" blocks properly (or at all), and/or just plain don't indent consistently. Quite unacceptable.
Finally I gave up on the online schlockware and went straight to Flexbuilder (which has been sitting unused on my desktop), and I thought "Surely this will do the trick."
Imagine the look of abject horror on my face when I found that the ActionScript editor could not do the equivalent of Control-Shift-F (for Java in Eclipse). In fact, the formatter built into Flexbuilder's ActionScript editor won't even do auto-indenting: You have to manually grab blocks of code and do the old shift-right/shift-left indent/outdent thing by hand, over and over and over again, throughout your code, until the little beads of blood begin to form on your forehead.
I'm left, alas, with half-solutions. But unfortunately, two or three or ten half-solutions don't add up to a solution. (How fortunate we would all be if it did.)
Monday, September 22, 2008
Firebug on Vista giving problems
Is it just me or does anyone else find Firebug+FF3 on Vista to be flaky? It loses my console code if I switch tabs (not windows, just going to another tab and coming back). Sometimes the FB console stops working or won't execute "console.log( )". And it seems as though weird bugs show up in the Firefox console that don't show up in the Firebug log pane, and vice versa.
Also, I don't appreciate having to manually turn on the console for every web domain I go to. What a PITA. I wonder if that behavior can be disabled somehow? Right now, I'm feeling disabled.
Thursday, September 18, 2008
JavaScript runs at C++ speed, if you let it
The common perception (ignorance of the crowd) is that JavaScript is slow. What I'm constantly finding, however, is that people will hand-craft a JavaScript loop to do, say, string parsing, when they could and should be using the language's built-in String methods (which always run fast).
Example: You need a "trim" function to remove leading and trailing whitespaces from user-entered text in a form. If you go out on the web and look at what people are doing in their scripts, you see a lot of things like:
function trim10 (str) {
    var whitespace = ' \n\r\t\f\x0b\xa0\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200a\u200b\u2028\u2029\u3000';
    for (var i = 0; i < str.length; i++) {
        if (whitespace.indexOf(str.charAt(i)) === -1) {
            str = str.substring(i);
            break;
        }
    }
    for (i = str.length - 1; i >= 0; i--) {
        if (whitespace.indexOf(str.charAt(i)) === -1) {
            str = str.substring(0, i + 1);
            break;
        }
    }
    return whitespace.indexOf(str.charAt(0)) === -1 ? str : '';
}
I took this code verbatim from a web page in which the author of it claims (ironically) that it's an incredibly fast routine!
Compare with:
function trim(a) {
    return a.replace(/^ +/,"").replace(/ +$/,"");
}
In testing, I found the shorter routine faster by 50% on very small strings with very few leading or trailing spaces, and faster by 300% or more on strings of length ~150 with ten to twenty leading or trailing spaces.
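If you want to reproduce this kind of comparison yourself, a minimal timing harness is easy to write. This is a sketch (the two trim functions here are simplified stand-ins, not the exact routines above, and absolute numbers will vary wildly by engine and input):

```javascript
// Hand-rolled trim: character-by-character scanning, the loop-based style discussed above
function trimLoop(str) {
    var ws = " \n\r\t\f";
    var i = 0, j = str.length - 1;
    while (i <= j && ws.indexOf(str.charAt(i)) !== -1) i++;
    while (j >= i && ws.indexOf(str.charAt(j)) !== -1) j--;
    return str.substring(i, j + 1);
}

// Regex-based trim: leans on the native replace() implementation
function trimRegex(str) {
    return str.replace(/^\s+/, "").replace(/\s+$/, "");
}

// Crude benchmark: run each function N times over the same input
function time(fn, input, n) {
    var start = Date.now();
    for (var k = 0; k < n; k++) fn(input);
    return Date.now() - start;
}

var sample = "          some user-entered text          ";
console.log("loop:  " + time(trimLoop, sample, 100000) + " ms");
console.log("regex: " + time(trimRegex, sample, 100000) + " ms");
```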
The better performance of the shorter function has nothing to do with it being shorter, of course. It has everything to do with the fact that the built-in JavaScript "replace( )" method (on the String pseudoclass) is implemented in C++ and runs at compiled-C speed.
This is an important point. Interpreters are written in C++ (Spidermonkey) or Java (Rhino). The built-in functions of the ECMAScript language are implemented in C++ in your browser. Harness that power! Use the built-in functions of the language. Never hand-parse strings with "indexOf" inside for-loops (etc.) when you can use native methods that run at compiled speed. Why walk if you can ride the bullet train?
The implications here for client/server web-app design are quite far-reaching. If you are using server-side JavaScript, and your server runtimes are Java-based, it means your server-side scripts are running (asymptotically, at least) at Java speed. Well-written client-side JavaScript runs (asymptotically) at C++ speed. Therefore, any script logic you can move to the client should be moved there. It's madness to waste precious server cycles.
Madness, I say.
Wednesday, September 17, 2008
Getting Greasemonkey to work in Firefox3 on Vista
Wasn't happening for me until I started with a fresh (empty) FF3 user profile. Vista seems to be the problem in all of this. GM on FF3 on WinXP works fine, but with Vista, GM doesn't install properly unless you zero out your FF3 profile first. At least, that's the state of things today as I write this (17 Sept 2008). Hopefully it will get fixed soon. Until then ...
The procedure is:
1. In FF3, go to Organize Bookmarks and export your bookmarks as HTML so you don't foolishly lose them.
2. In the Vista "Start" panel, choose Run...
3. Launch Firefox with a command line of "firefox -profilemanager".
4. When the profile manager dialog appears, create a new profile.
5. When FF launches, install Greasemonkey.
6. Import your bookmarks.
7. Exit Firefox. Return to step 3. When the profile manager dialog appears, delete your old profile. (Or leave it, and contend with choosing between profiles every time Firefox launches.)
Whew + sheesh.
Wednesday, September 10, 2008
How to use JS 1.7 in Greasemonkey?
Problem: I need to be able to use the 'yield' keyword in a Greasemonkey script. This is a JavaScript 1.7 language feature available in Firefox 2 and later. You must explicitly "turn on" support for this feature, however, by specifying
<script type="application/javascript;version=1.7"/>
in the HTML page.
That's not what I need. I need to turn it on in Greasemonkey's execution context.
Others have run into this problem. It appears, however, that the Greasemonkey guys won't do anything about it.
I was hoping there'd be some clever back-door way to do this, but that seems unlikely. There appears, alas, to be no workaround, short of the usual (for Greasemonkey) expedient of vulturing the unsafeWindow, which is of course repulsive and unacceptable.
If anyone knows of a non-ugly solution to this problem (the problem of how to use 'yield' in Greasemonkey scripts), please advise.
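For reference, here is what yield buys you. The example below uses the later standardized function* syntax (JavaScript 1.7 generators used a bare function keyword, which only Firefox understood), but the idea is identical: a function that can suspend and resume:

```javascript
// A generator that lazily produces Fibonacci numbers.
// JS 1.7 wrote this as "function fib() { ... yield ... }";
// the standardized ES2015 form requires the asterisk.
function* fib() {
    var a = 0, b = 1;
    while (true) {
        yield a; // suspend here until next() is called again
        var t = a + b;
        a = b;
        b = t;
    }
}

// Pull values on demand; nothing runs until next() is called.
var gen = fib();
var firstFive = [];
for (var i = 0; i < 5; i++) {
    firstFive.push(gen.next().value);
}
console.log(firstFive.join(",")); // 0,1,1,2,3
```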
Tuesday, September 09, 2008
Selection object in Firefox
I've learned some interesting things about the way selections work in Mozilla.
Every window has a singleton selection object, even when the user has selected no items on the rendered page. Therefore, window.getSelection( ) always succeeds.
If you simply want user-selected text as a string, getSelection( ).toString( ) will work. But if you really intend to walk the selected DOM nodes, or process the selection in any non-trivial way, you will need to access its Range objects with
window.getSelection( ).getRangeAt( i );
There is a "rangeCount" property on the selection object, so that you can know how many Ranges were selected by the user. In Firefox 2.0 and prior, the rangeCount was never more than one. But in Firefox 3, the user can do multi-selection of page contents. (Try it: Hold the Control key down as you swipe across various pieces of a page.) That means the range count can be more than one.
If you need to process a Range's contents, be sure to use the cloneContents( ) method, not the extractContents( ) method. The latter will actually remove nodes from the DOM tree, affecting the rendered page's appearance. (That is to say, content suddenly disappears!)
This is all spelled out at the Moz Developer Center page on Ranges.
Friday, September 05, 2008
XPath Query in Sling
I've been playing with Sling lately, and I was pleasantly surprised to find that Sling comes with a JSON query servlet that exposes SQL and XPath query capability through a RESTful HTTP GET syntax. (Thanks to Moritz Havelock for pointing this out.)
But I quickly ran into a small problem. (And just as quickly, the solution.) Allow me to explain.
The problem: I want to search for nodes in the repository that have a (multivalued) "pets" attribute containing the value "dog." Note that the "pets" attribute might have multiple values. I want to filter against just one. Therefore I can't do an equality test. I must use the XPath contains() function.
My test query was:
http://localhost:7402/content.query.json?queryType=xpath&statement=//*[contains(@pets,'dog')]
This produced an InvalidQueryException, with a message of "Unsupported function: contains (500)".
I was a bit surprised that the servlet seemed to know nothing about any contains() function. A true "WTF moment."
Taking my hint from the stack trace, I quickly ran a Google Code Search on org.apache.jackrabbit.core.query.xpath, and immediately found the answer in XPathQueryBuilder.java: It turns out you have to use the function's qualified name, jcr:contains(). Like so:
http://localhost:7402/content.query.json?queryType=xpath&statement=//*[jcr:contains(@pets,'dog')]
I'm so much of an XPath newb that I don't even know if I should have been surprised by this, but it did stymie me briefly. Anyway, it works now and I'm thrilled to be able to do XPath queries right from the GET-go.
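One practical note: the XPath statement really ought to be URL-encoded when it rides in a query string. Building the URL programmatically avoids escaping mistakes. A sketch (the host, port, and .query.json path are taken from the example above):

```javascript
// Build the Sling query URL from its parts, encoding the XPath statement.
// The base URL mirrors the example above; adjust for your own instance.
function buildQueryUrl(base, queryType, statement) {
    return base + "?queryType=" + encodeURIComponent(queryType) +
        "&statement=" + encodeURIComponent(statement);
}

var url = buildQueryUrl(
    "http://localhost:7402/content.query.json",
    "xpath",
    "//*[jcr:contains(@pets,'dog')]"
);
console.log(url);
```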
Tuesday, September 02, 2008
Google Chrome: nice console, ugly browser

I downloaded Chrome today and immediately started using the JavaScript console. It's pretty nice, but if you're already accustomed to Firebug in Firefox, it's no substitute. Also, what good is Chrome if you can't use Greasemonkey scripts with it?
The JS engine is presumably based on Spidermonkey (since the Chrome guys apparently used a lot of Mozilla code to slap this thing together). But they forgot to include E4X. And so help me, I haven't figured out how to enter a newline in the console without triggering an eval( ). In other words, I can only enter one line of code at a time, and then I have to execute it. As soon as I hit Enter, CR, Control-Enter, etc., the code on the current line executes. Oh well...
As a browser, this thing is not terribly impressive, from what I can tell.
In any case, Chrome itself strikes me as too fugly to deal with. I'm not sure which I'd rather do: spend a work-day using Chrome as my main browser, or jam prickly-pears into both my eyes at once.
I think I'll stay with Firefox until Chrome gets out of beta. Which (if it's like Gmail) it never will.
Friday, August 29, 2008
Pretty-print serialized DOM
Another great Mozilla feature: pretty-format a serialized DOM tree. The following code will serialize an entire web page and pretty-format the markup:
As mentioned in my earlier post about XMLSerializer, the XML you get isn't perfect: element names come out ALL CAPS for some weird reason. And you get a bunch of automatic entity substitutions, most of which you probably want, others of which will simply break things if you try to deserialize the text back into a DOM later. (Forget about easy roundtripping.) But overall, it's a really useful trick.
I was hoping maybe this trick would also (as a free bonus) pretty-format any embedded scripts inside CDATA sections, but of course no such luck. In fact, due to automatic entity substitution, every < in script code comes out as &lt;, which breaks the script entirely.
Google Charts is a simple REST-style API for creating graphs and charts on the fly, such as this one:
Details here:
http://code.google.com/apis/chart/#encoding_data
Monday, July 23, 2007
Menus as Non-Modal Dialogs
I was thinking the other day about how best to keep the details of application logic hidden from Swing widgets (in the spirit of Martin Fowler's Presentation Model), the main intuition being that a user app can/should (arguably) be modeled as a set of nonvisual capabilities to which utterly dumb GUI widgets can later be mapped. Achieving this in a clean way is incredibly difficult. (Or at least for me it is.)
I had an epiphany of sorts. When you design a standalone user app (a menu-driven desktop app), what's the first piece of UI you design? The menu system. And what is a menu? In Swing (Java), it's a series of nested buttons. (JMenu and JMenuItem inherit from javax.swing.AbstractButton.)
The menubar never goes away. Some apps let you hide it, in which case it's merely made invisible (it doesn't actually get released from memory). There's a name, of course, for collections of buttons that never go away: a non-modal dialog. My epiphany was/is that a menu system is a collection of non-modal dialogs. (And I hate non-modal dialogs, both as a user and as a programmer.)
In the typical menu-driven app, menus are non-modal dialogs in which each button "knows too much" about deep application internals. The ever-changing state of the entire app is controlled through this collage of interdependent buttons, and managing the underlying ill-formed dependency graph is difficult, and this is why menu apps are a pain in the ass to write.
Wednesday, March 28, 2007
Friday, March 09, 2007
Fractal-Dimensional Transforms
I was on the back porch thinking about image transforms the other morning, and it occurred to me that we just assume that many types of data are either one-dimensional, two-dimensional, or three-dimensional, etc. (with nothing in between), despite the fact that fractals are everywhere in nature. And we apply transformations and convolutions (2-dimensional DCT, in the case of JPEG) to the data without regard for the data's true dimensionality.
So I'm left wondering: how do you do, say, a 2.2D DCT or DFT? What if I want to convolve the fractal residue of a time series?
Wednesday, January 24, 2007
Privacy Leakage Patent
Identity data-mining disturbs me. What disturbs me even more is that you can patent a technique for, say, guessing someone's age based on their purchasing habits (which is what Amazon has succeeded in doing).
Evil, evil, evil.
Thursday, January 11, 2007
jrunscript
It turns out JDK 6 comes with a JavaScript console facility so that you can play with Rhino interactively from a command line. Look for a file called jrunscript.exe in your JDK's /bin directory.
A pretty good article on Java/JavaScript integration in Java 6 can be found on the Sun Developer Network site right here.
Wednesday, January 03, 2007
OpenOffice.org Dev Hurdles
Over the holidays I decided to wade into the murky waters of OpenOffice development. I was quickly up to my neck in mud.
It turns out I'm not the only one. Key OOo insiders are acutely aware that the barriers to participation in OOo development are way too high (keeping community participation in OOo development way too low).
It's not just that finding all the code is hard or that the C++ codebase is around 7 million lines of code. It's that a full compile-and-build of OOo takes 15 hours on a typical desktop PC. If you can get it to build at all.
Some of the entry-barrier issues are more fully discussed in Jens Heiner Rechtien's 31 Dec 2006 blog.
Friday, December 22, 2006
JRuby for OpenOffice Development
Juergen Schmidt (who gave a talk at last week's Javapolis conference on why Java programmers should get more involved with OpenOffice) blogged yesterday about the prospect of using JRuby for OOo development:
I also met two Sun colleagues Thomas Enebo and Charles Oliver Nutter, two of the JRuby core developers, and brainstormed a little bit with them about the support of JRuby in OpenOffice.org. JRuby comes directly with Java in the future and the integration work into NetBeans is ongoing. So it would be great to have a good support for JRuby from UNO as well. JRuby as one of the main scripting languages for OpenOffice.org with a smart integration in NetBeans is a really cool idea and I hope that we can deliver something in this direction. We will see what's possible and when!
Some interesting podcasts from Javapolis (including quite a few on agile development) are here.
Wednesday, November 08, 2006
Project Tamarin
By now everyone has heard the news that Adobe will donate code for its ActionScript VM to the Mozilla Foundation for use in Firefox. For a quick snapshot of what's going on, see:
- Tamarin project page
- Mozilla foundation press release
- Executive summary and analysis by Frank Hecker of the Mozilla Foundation
- Benchmark comparisons of Tamarin versus JavaScript performance (awesome graph)
The ability to run JIT-compiled JavaScript on a VM is killer, because it knocks down all complaints of JS being slow. And it also opens the door to ultra-fast JS on the server (and pure-JS doublesided AJAX).
The VM architecture looks like this:

But again, it's not really about .swf, it's about compiling JS2 into bytecode, which is an incredibly important advancement.
Brendan Eich held an IRC chat yesterday in which he and Kevin Lynch of Adobe fielded questions about Tamarin. A few interesting factoids came to light:
- Acrobat's JS engine will move from Spidermonkey to Tamarin.
- The expansion factor for jitting bytecode to x86 is roughly from 5X for strongly typed, early-bindable code, to 20X for loosely typed, unbindable code. Thus, you pay a price in memory hunger for the ability to JIT-compile JS, but JS2's new typing system mitigates it somewhat.
- The Tamarin codebase comprises 135,000 lines of C++ (smaller than I would have thought). This is sure to grow but Brendan Eich indicated very strongly that Firefox needs to shrink, not grow, hence there will be pressure to keep Tamarin as lean and efficient as possible.
- Tamarin is not 64-bit-ready. But if the project gets the kind of (huge) traction that it appears it will get in the community, the "64-bit Flash" question may finally get solved. And maybe ES4/JS2 will get a "long" data type in addition to int/uint/double. ;^)
Thursday, November 02, 2006
New ECMA Draft
ECMA's 262 revision-4 working group just published a draft spec of what will hopefully become (by next summer) JavaScript 2.0. This is the first major upgrade to the JavaScript language in almost a decade. Guaranteed to take Ajax to the next level.
Tuesday, October 24, 2006
Fuzzing
I learned about fuzzing today. Think of it as fault discovery by random input. The underlying assumption: If unexpected input makes an app produce unexpected behavior, you're hosed. Hackers rely on fault-injection to find vulnerabilities. QA can use it to find bugs.
There's a list of open-source fuzzers here.
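The core idea fits in a few lines. Here's a toy illustration (purely mine, not any of the tools linked above) that hammers a deliberately fragile target function with random strings and records the inputs that make it throw:

```javascript
// Target under test: a fragile little parser (hypothetical example)
// that chokes on any input it never expected.
function parsePair(s) {
    var parts = s.split(":");
    if (parts.length !== 2) throw new Error("expected key:value");
    return { key: parts[0], value: parts[1] };
}

// Generate a random printable-ASCII string of random length < maxLen.
function randomInput(maxLen) {
    var len = Math.floor(Math.random() * maxLen);
    var out = "";
    for (var i = 0; i < len; i++) {
        out += String.fromCharCode(32 + Math.floor(Math.random() * 95));
    }
    return out;
}

// Fuzz loop: feed random inputs, collect the ones that crash the target.
function fuzz(target, iterations) {
    var failures = [];
    for (var i = 0; i < iterations; i++) {
        var input = randomInput(20);
        try {
            target(input);
        } catch (e) {
            failures.push(input);
        }
    }
    return failures;
}

var crashes = fuzz(parsePair, 1000);
console.log(crashes.length + " of 1000 random inputs crashed the parser");
```

Real fuzzers are far smarter about input generation (mutation, grammar awareness, coverage feedback), but the feedback loop is exactly this.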
Friday, October 06, 2006
Adobe Ditches SVG Viewer
Friend and colleague Pascal Barbier pointed out to me the other day that Adobe will soon stop supporting/developing its free SVG Viewer plug-in for web browsers. As of January 2007, Adobe will simply abandon the SVG Viewer.
Although this move is certainly consistent with Adobe's longterm Flash strategy, I don't think it's motivated by anything Flashy. (Call me naïve.) Adobe already supports SVG in most of its products and will soon leverage SVG in Acrobat via PxDF. Support for SVG goes on. Just not in the browser.
The move mostly affects Internet Explorer users, since SVG support is native in Firefox. But let's face it, how many IE users even have the Adobe plug-in? How many IE users have ever tried to view an SVG page? (How many can even spell SVG?)
I don't blame Adobe (or any company) for abandoning a development-intensive non-product that requires huge gobs of time and money to support. But that raises the question: Why doesn't Adobe donate its Viewer code to the open-source community? This is a great opportunity, after all, for Adobe to win badly needed points in the F/OSS world. From a P.R. standpoint, it's Something Very Good.
Surely they'll figure it out.
Wednesday, September 27, 2006
Adobe PxDF
Word is slowly leaking out about Adobe's planned XML grammar for PDF (code name Mars, so think SVG-in-a-space-suit).
The new XML-based PDF format ("PxDF") is basically SVG with some extensions to allow for various kinds of embedded resources and references thereto. Recall that PDF can contain form widgets, annotations, JavaScript, and other flotsam. You can specify some of these items as reusable resources, refer to them using XLink, ball everything up into a zip archive, and expect Acrobat 8.x to deal with it (possibly as early as November).
Tuesday, September 26, 2006
Concurrent JavaScript
There is no such thing, I just made that phrase up. But it seems inevitable. The really ancient concept of futures (from concurrent programming languages) has interesting pointcuts in AJAX development, so I'm forced to give renewed attention to things like Narrative JavaScript, jwacs, and Chris Double's admirable forays into JavaScript future and promise support. All really awesome stuff. It's always interesting to see the JS community outrunning Eich and ECMA on occasion.
If you're still scratching your head, I recommend spending some time with Alice.
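For the unfamiliar: a future (or promise) is just a placeholder for a value that hasn't been computed yet, with callbacks that fire when it arrives. A bare-bones sketch of the concept (my own illustration, not the API of any library mentioned above):

```javascript
// A minimal future: consumers register callbacks with then(),
// and a producer supplies the value exactly once with resolve().
function Future() {
    this.resolved = false;
    this.value = undefined;
    this.callbacks = [];
}

Future.prototype.then = function (cb) {
    if (this.resolved) cb(this.value); // value already here: fire now
    else this.callbacks.push(cb);      // otherwise wait for resolve()
    return this;
};

Future.prototype.resolve = function (value) {
    if (this.resolved) return; // resolve at most once
    this.resolved = true;
    this.value = value;
    for (var i = 0; i < this.callbacks.length; i++) {
        this.callbacks[i](value);
    }
};

// Usage: code can register interest before the value exists.
var f = new Future();
f.then(function (v) { console.log("got " + v); });
f.resolve(42); // prints "got 42"
```

The interesting part for AJAX is that callers never block: they describe what to do with the eventual value and move on.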
Monday, September 18, 2006
How to Make SVG Slower
It's called Dojo2D.
Try this test page. On my machine, Firefox 1.5.0.7 will load that page in 8 seconds, which is about 7.99 billion clock cycles too many, for my taste. But Internet Explorer locks up for a full 30 seconds (consistently, every time) when trying to load the page.
I'm happy to see IE users brutally punished in this fashion, of course. But honestly, this has to be some kind of sick, sick joke, right?
Wednesday, September 13, 2006
Dynamic Languages on the JVM
In certain parts of the world it is said that there are three things that can never be known to any man: The hour of one's death, the true name of Allah, and the current status of JSR-292.
Nevertheless, it seems clear that the fruits of JSR-292 will be folded into Dolphin (Java 7, to be released in 2008).
Let's see, that's (how many is it?) thirteen years that it took Sun to realize some people may actually want to do serious programming in something other than you-know-what.
Tuesday, September 05, 2006
JavaScript 1.7 is in Firefox Beta
True to Brendan Eich's earlier timeline predictions, Firefox 2b2 now implements JavaScript 1.7, with such new features as:
- Array comprehensions
- Fine-tuned scoping via "let" expressions
- Multivalued function returns
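Two of these survived essentially intact into the later ECMAScript standards, so they can be demonstrated with today's syntax (array comprehensions, alas, never made it into the standard): block scoping via let, and multivalued returns via destructuring:

```javascript
// Block scoping: a let binding is confined to its block,
// unlike var, which is hoisted to the whole function.
function scopes() {
    var results = [];
    for (let i = 0; i < 3; i++) {
        results.push(i); // each iteration sees its own i
    }
    return results;
}

// Multivalued return: return an array, destructure at the call site.
function minMax(nums) {
    var lo = Math.min.apply(null, nums);
    var hi = Math.max.apply(null, nums);
    return [lo, hi];
}

var [lo, hi] = minMax([3, 1, 4, 1, 5]);
console.log(lo + ".." + hi); // 1..5
```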
Of course, none of this will be in Internet Explorer any time soon. Once again, the Firefox folks have made a bold move into unexplored territory, leaving the safe, comfortable, Web 1.0 weenie-world of Ballmer & Co. ever further behind.
Thursday, August 31, 2006
Free Security Book
I owe this one to my colleague Stephen Holmes in Dublin, who today pointed me at the freely downloadable version of Ross Anderson's superb Security Engineering. This is without a doubt one of the finest free online books (of any kind) that I've ever seen, beyond being a celebrated classic in security circles for several years now. The author is a Professor of Security Engineering at the University of Cambridge's Computer Laboratory. Even so, he writes entertainingly. ;^)
The chapters are individually downloadable, or you can shag the whole book. For a quick look, I recommend Chapter 11 (which had me utterly spellbound).
Tuesday, August 29, 2006
Dojo 2D
In the ever-widening quest for richer web widgets, the Dojo guys, it turns out, are considering implementing their own 2D graphics API. It would actually be a bunch of wrappers around SVG, VML, and Canvas methods, of course. The primary target is SVG.
Implementing this for even a small subset of SVG will be arduous. (The Flash ninjas must be laughing themselves sick right about now.) I'm tempted to dismiss Dojo 2D as a quixotic quest. But I also know AJAX developers are clamoring for just this sort of thing, and I'm sure Dojo 2D will be a scandalous success.
Performance is apt to be underwhelming (SVG is already sluggish enough without wrapper layers), but that's never stopped a market disruptor before, and anyway, Dr. Moore can't be far behind with the cure.
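The wrapper idea itself is simple enough to sketch in a few lines (names invented; this is not Dojo's actual API): one drawing call, pluggable backends that each emit their own flavor of markup.

```javascript
// Each backend knows how to render one primitive in its own dialect.
const svgBackend = {
  circle: (cx, cy, r) => `<circle cx="${cx}" cy="${cy}" r="${r}"/>`,
};
const vmlBackend = {
  circle: (cx, cy, r) =>
    `<v:oval style="left:${cx - r}px;top:${cy - r}px;width:${2 * r}px;height:${2 * r}px"/>`,
};

// The surface records drawing calls and defers to whichever backend
// it was constructed with -- the caller never sees SVG vs. VML.
function makeSurface(backend) {
  const shapes = [];
  return {
    circle(cx, cy, r) {
      shapes.push(backend.circle(cx, cy, r));
      return this; // allow chained calls
    },
    render: () => shapes.join("\n"),
  };
}
```

The hard part, of course, is not this dispatch layer but papering over the real behavioral differences among SVG, VML, and Canvas.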
Monday, August 28, 2006
Lightweight 3D in Java
I finally found the ultimate no-frills super-lightweight 3D library written in Java: Peter Walser's idx3d framework. (Freeware, of course.)
After playing with idx3d for a month, I'm still astonished at how much functionality Peter crammed into just 29 (count 'em) .java files. The code is streamlined and easy to follow (a rarity in 3D engines). No frills, no baroque overfactoring, no "let's be fully general so as to handle the occasional weird-ass edge-case even if it means slowing everything else down."
I've found the idx3d code to be extremely stable, reasonably fast (again, a rarity in Java 3D engines), and after 30 hours of flogging it mercilessly in Eclipse (on Novell SUSE Linux Enterprise Desktop), I have yet to see an OutOfMemoryError.
The most wonderful thing about Peter Walser's code is that it was written in Y2K (back when Java was lean and mean) and has very few JRE dependencies: you'll see an occasional java.util class, but for the most part, Walser's code files contain no imports. Which is astonishing.
If you're interested in 3D programming, check this thing out.
Wednesday, July 26, 2006
The Zen of Hashing
Hashing and hash algorithms are a pet interest of mine. Understanding hashing at a low level takes a fair amount of meditation. Most programmers are too busy for that. Thus hashing is not well understood outside of, say, cryptography circles.
As it turns out, the guy who did the amusing boredom-graph cartoon (see yesterday's blog) also has written one of the best overviews of hashing I've seen in a long time. Be sure to see his excellent Hash Functions and Block Ciphers page as well.
Study the material on Bob's site. Save yourself years of meditation.
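As a taste of what's on that page: Bob's "one-at-a-time" hash, transcribed here into JavaScript (the `>>> 0` operations keep the arithmetic in unsigned 32-bit range, standing in for C's uint32_t).

```javascript
// Bob Jenkins' one-at-a-time hash: mix in one byte at a time, then
// run a final avalanche so late bytes affect every output bit.
function oneAtATime(str) {
  let h = 0;
  for (let i = 0; i < str.length; i++) {
    h = (h + str.charCodeAt(i)) >>> 0;
    h = (h + (h << 10)) >>> 0;
    h = (h ^ (h >>> 6)) >>> 0;
  }
  h = (h + (h << 3)) >>> 0;
  h = (h ^ (h >>> 11)) >>> 0;
  h = (h + (h << 15)) >>> 0;
  return h;
}
```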
Tuesday, July 25, 2006
A Timeless Graph

Bob Jensen created this wonderful graph, which confirms what I've long thought: boredom tends to be continuous over its range.
Monday, July 17, 2006
Making XML Smaller
In all the hand-wringing discussions about XML's verbosity that I've read over the years, I have yet to hear anyone suggest simply truncating all closing tags to </>. In other words, if you've got
<data>
<item>something</item>
</data>
why not just shorten it to
<data>
<item>something</>
</>
Verbose closing tags are a pure waste of space (albeit required by XML spec). Abbreviated closing tags don't make the file any less parsable. When the parser encounters </> it knows that the closure is at the nesting level of the previous opening tag. If not, the XML was not well-formed to begin with.
Verbose closing tags are just that. Unneeded verbosity.
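To see why the parser loses nothing, here's a toy well-formedness checker (invented for illustration; it ignores attributes that contain ">") that accepts abbreviated closers by popping the tag stack:

```javascript
// Returns true if the document is well-formed. "</>" closes whatever
// tag is on top of the stack -- no name comparison needed; a named
// closer must still match, and anything left open at the end fails.
function checkAbbreviated(xml) {
  const stack = [];
  const tokens = xml.match(/<[^>]*>/g) || [];
  for (const t of tokens) {
    if (t === "</>") {
      if (stack.length === 0) return false; // closer with nothing open
      stack.pop();
    } else if (t.startsWith("</")) {
      if (stack.pop() !== t.slice(2, -1)) return false; // mismatched closer
    } else if (!t.endsWith("/>")) {
      stack.push(t.slice(1, -1).split(/\s/)[0]); // opening tag name
    }
  }
  return stack.length === 0;
}
```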
Wednesday, June 28, 2006
When Identity Theft is not Theft
Two years from now, it will not be necessary to steal anyone's identity. Web surfers will have given away more personal info to the world than even the greediest thief would ever want to rip off by illegal means.
I'm not so much talking about static identity info, like your Social Security number (which will be worthless anyway in a year or two). I'm talking about the really interesting dirt. Your shopping habits, reading habits, movie-watching habits, hobbies, favorite travel destinations, where you went to school, who you've worked for and how long you stayed at each job, and (let's not mince words) sexual preferences, who your friends are, the names and ages of your children. Most of this info can be scraped, right now today, from blog bios, online resumes, MySpace profiles, tag-sharing sites, social networking sites (like linkedin.com), and photo-sharing sites. Your info is out there. You put it there yourself.
And the bad part is, there's no taking it back. Google archives old pages. So does the Wayback Machine.
You're leaking personal info to the world every time you use an online service of any kind. Particularly the spate of Web 2.0 applications offering free online word processing, spreadsheets, chats, etc. Those are hosted apps. Most of the hosts are trustworthy (arguably), but the hosts tend to archive chatlogs and other interaction records, which means the storage media on which that material is archived can be stolen or lost just like the Veterans Administration guy's laptop.
Or it can be inadvertently indexed by Google and exposed to searchers (as has happened with supposedly private test scores).
The outflux of identity info onto the Web is massive, and it's accelerating daily, driven largely by the explosion in popularity of "Web 2.0" apps.
All of which is great news to the National Security Agency, who by some accounts are sifting through your data right now.
Tuesday, June 20, 2006
RoR Gaining on Atkins Diet
It's official: Ruby on Rails is about to edge past the Atkins diet in popularity.
RoR has also overtaken West Nile virus.
It has to be true. I saw it on Google Trends.
Thursday, June 08, 2006
Spring Framework Backlash
It's refreshing (and healthy, I think) to see open, honest debate erupt over the usefulness of IoC frameworks, in particular the certifiably trendy Spring framework. I refer to Bob Lee's gratifyingly blunt I Don't Get Spring.
Surprisingly, most of the comments at the end of Lee's blog are dispassionate, logical, and in full agreement with Lee's premise, which (to oversimplify) is that Spring is cryptic, over-architected, and malodorous at a code level (among other felonies), raising the question of why anyone would use it.
I can understand why Lee would feel that way. He's right on most counts. Spring is indeed byzantine and heavy (as most things surrounding J2EE are), and buries too many dependencies in XML. But that doesn't mean Spring doesn't have its legitimate uses.
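For readers who've never seen what the IoC fuss is about, here it is in miniature (an invented example, nothing like Spring's actual API): the class declares what it needs, the wiring decides what it gets, and a test can hand it a fake with no container in sight.

```javascript
// The component depends on "a store", not on any concrete store class.
class ReportService {
  constructor(store) { this.store = store; } // injected, never new'd inside
  summary() { return `count=${this.store.count()}`; }
}

// Production wiring would pass a real data store; a test passes a stub.
const fakeStore = { count: () => 3 };
const svc = new ReportService(fakeStore);
```

Whether that one level of indirection merits a full-blown container plus a pile of XML is, of course, exactly what Lee is disputing.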
Monday, June 05, 2006
JVM as Web-Service Endpoint
Imagine if you could ping a running JVM over HTTP to obtain realtime diagnostic info. That seems to be what Sun has in mind with U.S. Patent 7,039,691, "Java Virtual Machine Configurable to Perform as a Web Server," granted to Sun Microsystems last month.
Abstract: A virtual machine, such as a Java(tm) virtual machine, is configured to operate as a web server so that users, using a browser, can make general-purpose inquiries into the state of the virtual machine or, in some cases, mutate the state of the VM. A "browsable" VM contains a network traffic worker, such as an HTTP thread, a services library, and a VM operations thread, which is an existing component in most virtual machines. The network traffic worker and the VM operations thread communicate through a request data structure. The VM operations thread generates a reply to the request upon receiving a request data structure from the traffic worker. Such a reply can be in the form of an HTTP response containing HTML or XML pages. These pages are transmitted back to the browser/user by the network traffic worker.
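The abstract's request/reply flow condenses to something like this (handler names and paths invented for illustration): a traffic worker maps an HTTP path onto a request record, the "VM operations" side computes a reply, and the worker wraps it as a response.

```javascript
// Stand-ins for the VM operations thread's queryable state.
const vmOps = {
  "/heap": () => ({ usedBytes: process.memoryUsage().heapUsed }),
  "/uptime": () => ({ seconds: process.uptime() }),
};

// Stand-in for the network traffic worker: path in, response out.
function handle(path) {
  const op = vmOps[path];
  if (!op) return { status: 404, body: "unknown query" };
  return { status: 200, body: JSON.stringify(op()) };
}
```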
Thursday, June 01, 2006
Metacompilers and Checkers
Imagine if your favorite compiler were extensible in such a way that you could add your own custom static checks, to find bugs of a special kind that you need to be able to find but that your compiler is too stupid to know about out-of-the-box. That's the intuition behind metacompiler (MC) technology. You write a checker, which is a snap-in that knows how to check for whatever kind of syntactic or other blunder you care about, and add it to the compiler. Then the compiler knows how to emit new warnings or error messages.
A checker can be as simple or as sophisticated as you want it to be. Maybe you want to be sure that every call to foo( ) is eventually followed by a corresponding call to bar( ). Or you may have application-specific security concerns (in the context of export laws, perhaps). Or you may have company policy around certain syntactical idiosyncrasies that would only be of specific concern to your department or your company.
Interestingly, the Stanford MC guys did a pass against the Linux kernel using their own custom checkers plugged into their own MC-aware gcc and found almost 600 potentially serious bugs, most of which have not been looked into yet (if you believe Coverity's latest findings).
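The foo()/bar() check mentioned above, reduced to a toy text scanner (a real checker plugs into the compiler's syntax tree; this invented version just greps lines):

```javascript
// Warn about any line calling foo() that is never followed, anywhere
// later in the file, by a call to bar().
function checkFooBar(source) {
  const warnings = [];
  const lines = source.split("\n");
  lines.forEach((line, i) => {
    if (line.includes("foo(")) {
      const rest = lines.slice(i + 1).join("\n");
      if (!rest.includes("bar(")) {
        warnings.push(`line ${i + 1}: foo() with no subsequent bar()`);
      }
    }
  });
  return warnings;
}
```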
Wednesday, May 31, 2006
Brendan Eich JS-Futures Update
If you're a serious user of Javascript, you must stop reading now and immediately go to:
http://developer.mozilla.org/presentations/xtech2006/javascript/
Tuesday, May 23, 2006
Continuations Thought Harmful
In late March, I blogged a couple times about continuations. Suddenly, Sun's Tim Bray and Gilad Bracha have broached the subject, stimulating much heated discussion in the blogosphere. Much heat, little useful work at the crankshaft.
Of all the recent posts on this surprisingly controversial subject, I find Curtis Poe's the most clueful.
Friday, May 19, 2006
Putting a Face on AJAX
This online facial-compositing app is the weirdest thing ever. It lets you merge facial features (from actual photos) together to create your own police composite sketches, kind of.
I spent 30 minutes fooling with it. Everything came out looking like Pia Zadora.
I spent 30 minutes fooling with it. Everything came out looking like Pia Zadora.
Wednesday, May 10, 2006
AJAX as a Man-in-the-Middle Architecture
A friend at work showed me Gabbly, which is an AJAX IM-chat pushlet that gives the appearance of putting a chat window over the top of any web page you choose (kind of like gmail-chat).
Odd thing is, it even worked for us when we set the URL to a secure wiki page inside the company firewall.
We promptly exited our Gabbly session and began chatting about it on Groupwise Messenger (our company standard). The whole experience was freaky and left us with serious security worries. Especially when Firefox crashed on me within minutes of leaving the Gabbly-iframed page.
According to a discussion at Ajaxian, Gabbly is indeed vulnerable to cross-site scripting attacks. But I'm equally worried about things like Gabbly JS code being able to walk up to the _top frame and read a supposedly secure container page (not to mention issues around Gabbly.com slurping our plaintext conversation in real time). Likewise, there's nothing stopping the Gabbly server from stomping on any Javascript code that's already in-scope in your page.
The thought of people using a 3rd-party-hosted chat app like this at work scares the hell out of me.
But that's the trouble with things like shorttext.com, ajaxwrite.com, and other free-neato-trendy AJAX "services": They require you to rely on the trustworthiness of the host. That's putting it too delicately. These are man-in-the-middle applications.
User beware.
Thursday, May 04, 2006
Stallman on MSWord Attachments
Recently a friend reminded me of this discussion (old but still relevant) by Richard Stallman of Word attachments and why they're basically the work of Satan.
Friday, April 28, 2006
YAMSWK (Yet Another M$Word-Killer)
My nomination in the category of "best AJAX-based Word workalike" for this week is Zoho Write, one of a suite of impressive Zoho apps. It took a while (30sec) for Firefox to pull down all 51 external .js scripts, but when the app opened, it was a thing of beauty. Imagine my abject stupefaction upon using the Import button to suck in a complex (many tables, many fonts) .sxw file, and seeing it open without errors, looking just the way it should! Yes, Zoho Write handles OpenOffice files. Just as Nature intended.
Unlike a lot of Web2.0 apps, Zoho is not the product of a teenager locked in a closet. Behind the Z-suite is a ten-year-old company, AdventNet, with offices around the world.
This is starting to get exciting.
Thursday, April 27, 2006
10^9 Laughs
Assignment: Write a 3-dozen-line XML file that will lock up any modern browser.
Answer: See The Billion Laughs attack.
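The arithmetic behind the name is easy to sketch: each entity expands to ten copies of the one below it, so nine nested levels turn three bytes of "lol" into 3 × 10^9 bytes. A scaled-down simulation (stopping well short of a gigabyte):

```javascript
// Simulate n levels of tenfold entity expansion. The real attack uses
// n = 9; even n = 4 already produces 30,000 bytes from a 3-byte seed.
function laughs(levels) {
  let s = "lol";
  for (let i = 0; i < levels; i++) s = Array(10).fill(s).join("");
  return s;
}
```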
Tuesday, April 25, 2006
Big Blue: Leaders in Teleportation?
No one will ever accuse Big Blue of clairvoyance. But they just may have a handle on teleportation.
Just for fun, go to IBM's site and do a search on "teleportation."
You'll get 19 hits.
IBM Game Research
The IBM Systems Journal is one of those rare publications that you wish would come out more frequently (just the opposite of drain-clogs like eWeek, which I wish would come out half as often). The journal's content is uniformly excellent, and the subject matter frequently delights. Such is the case with Volume 45, Number 1, 2006, devoted entirely to (of all things) Online Game Technology.
Wednesday, April 19, 2006
Stacklessness
I blogged a while ago about continuations, which may play a role in making AJAX scale well. Today I learned that continuations have been implemented (on an experimental basis) in Mono's virtual machine.
I'm not a Python person so I didn't realize (until after Googling around a bit) that the so-called microthreads of Stackless Python are a way of achieving the same thing.
The key intuition behind stacklessness is that you move everything that would normally be kept on "the stack" out to a data structure on the heap. Therefore one thread can jump between potentially tens of thousands of execution frames.
The ability to run huge numbers of processes concurrently is obviously important in many kinds of applications. If AJAX becomes another driver of this technology, it'll be interesting to see who'll be first to implement a stackless-Java virtual machine.
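The heap-allocated-frames idea can be faked in miniature with JavaScript generators (a sketch of the concept, not how Stackless Python actually implements it): each generator object is an execution frame living on the heap, and a single scheduler loop round-robins among thousands of them without growing the native stack.

```javascript
// A "microthread": its frame (name, i) lives in the generator object
// on the heap, not on the call stack between steps.
function* worker(name, steps) {
  for (let i = 0; i < steps; i++) yield `${name}:${i}`;
}

// One ordinary loop plays scheduler: resume a frame, record its output,
// and requeue it until it finishes.
function runAll(threads) {
  const trace = [];
  while (threads.length) {
    const t = threads.shift();
    const { value, done } = t.next();
    if (!done) { trace.push(value); threads.push(t); }
  }
  return trace;
}
```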
Thursday, April 13, 2006
XQuery Engines Compared
While digging around for news/views on JRockit, I happened to stumble onto an XQuery-engine comparative evaluation by (of all people) the Washington Publishing Company, a seller of EDI and HIPAA publications. In case you don't have time to wade through the full study (which is a good read, incidentally), the bottom line is, for maximum performance, robustness, and flexibility, you want the Saxon engine running atop the BEA JRockit JVM.
Wednesday, April 05, 2006
How to Comment AJAX Code
Lately I've been perusing some of Oracle's Javascript code from its ADF Faces. I see that it's extraordinarily well commented.
I'm looking at it in OpenOffice, so just for fun, I tell OOo to do a regex-search on
//.*$
and globally replace that with zilch, thereby wiping out all comment lines.
The result? With comments, Oracle's Core.js file is 140 KB. Without comments: 95 KB. Imagine: almost 50K of comments in a 140K file.
I don't think I've ever seen such well-commented code in any language, ever.
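For the curious, the experiment condenses to a couple of lines, with the same caveat that applies to the OOo regex: this naive pattern also eats "//" inside string literals and URLs (http:// becomes http:), so don't run it on code you intend to keep.

```javascript
// Strip //-style line comments, then measure what fraction of the
// file's bytes they accounted for.
function stripLineComments(js) {
  return js.replace(/\/\/.*$/gm, "");
}
function commentOverhead(js) {
  const stripped = stripLineComments(js);
  return (js.length - stripped.length) / js.length;
}
```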
Kas
Oracle AJAX Best Practices from 2002
AJAX-the-acronym has been around only since 2005, but (as many observers have pointed out) the underlying techniques have been around much longer.
It turns out Oracle has been publishing its own best-practices advice on "Partial Page Rendering" since 2002.
For the very latest Oracle thoughts on AJAX, I suggest reading the comments in their ADF Javascript code.