Sunday, May 31, 2009

Fractal Imaging


The other day, a friend of mine made me very happy. He returned my only copy of my favorite book, Fractal Imaging, by Ning Lu. This book has long been my favorite technical book, bar none, but I am beginning to think it might just be my favorite book of any kind.

"Virtue, like a river, flows soundlessly in the deepest place." You don't expect to encounter this sort of statement in an übergeek tome of this magnitude, and yet Lu scatters such proverbs (as well as quotations from Nietzsche, Joseph Conrad, Ansel Adams, Shakespeare, Jim Morrison, and others) throughout the text. This alone, of course, makes it quite an unusual computer-science book.

But the best part may be Lu's ability to blend the often sophisticated concepts of measure theory (and the math behind iterated function systems) with beautiful graphs, line drawings, transformed images (some in color; many downright spectacular), and the occasional page or two of C code. The overall effect is mesmerizing. Fortunately, potentially intimidating math is made tractable through Lu's clear, often inspiring elucidations of difficult concepts. Ultimately, the patient reader is rewarded with numerous "Aha!" moments, culminating, at last, in an understanding of (and appreciation for) the surpassing beauty of fractal image transformation theory.

What is fractal imaging? Well, it's more than just the algorithmic generation of ferns (like the generated image above; a short sketch of that idea follows the list below) from non-linear equation systems. It's a way of looking at ordinary (bitmap) images of all kinds. The hypothesis is that any given image is the end result of iterating some particular (unknown) system of non-linear equations, and that if one only knew what those equations were, one could regenerate the image algorithmically, on demand, from the equations alone. The implications are far-reaching. This means:

1. Instead of storing a bitmap of the image, you can just store the equations from which it can be generated. (This is often a 100-to-1 storage reduction.)
2. The image is now scale-free. That is, you can generate it at any scale -- enlarge it as much as you wish -- without losing fidelity. (Imagine being able to blow up an image onscreen without it becoming all blocky and pixelated.)
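
To make the fern business concrete, here is a minimal C sketch of the "chaos game" that renders Barnsley's fern (my own toy version, using the widely published coefficients -- not code from the book): repeatedly pick one of four affine maps at random, apply it to the current point, and plot where the orbit lands. The whole picture is encoded in the 28 numbers sitting in the four branches of the if/else.

/* Chaos-game rendering of Barnsley's fern: pick one of four affine
 * maps at random (weighted by probability) and plot the orbit.
 * Writes a binary PGM image to stdout. */
#include <stdio.h>
#include <stdlib.h>

#define W     300
#define H     600
#define ITERS 500000

int main(void)
{
    static unsigned char img[H][W];   /* zero-initialized canvas        */
    double x = 0.0, y = 0.0;          /* current point of the orbit     */

    for (long i = 0; i < ITERS; i++) {
        double r = (double)rand() / RAND_MAX;
        double nx, ny;

        if (r < 0.01) {               /* stem                           */
            nx = 0.0;
            ny = 0.16 * y;
        } else if (r < 0.86) {        /* main body, chosen 85% of time  */
            nx =  0.85 * x + 0.04 * y;
            ny = -0.04 * x + 0.85 * y + 1.6;
        } else if (r < 0.93) {        /* largest left-hand leaflet      */
            nx = 0.20 * x - 0.26 * y;
            ny = 0.23 * x + 0.22 * y + 1.6;
        } else {                      /* largest right-hand leaflet     */
            nx = -0.15 * x + 0.28 * y;
            ny =  0.26 * x + 0.24 * y + 0.44;
        }
        x = nx;
        y = ny;

        /* Map the attractor's bounding box (roughly [-2.2,2.7] x [0,10])
         * onto the pixel grid and mark the point. */
        int px = (int)((x + 2.2) / 4.9 * (W - 1));
        int py = (int)((1.0 - y / 10.0) * (H - 1));
        if (px >= 0 && px < W && py >= 0 && py < H)
            img[py][px] = 255;
    }

    printf("P5\n%d %d\n255\n", W, H);
    fwrite(img, 1, sizeof img, stdout);
    return 0;
}

Compile it, redirect stdout to a .pgm file, and a recognizable fern emerges from nothing but those few equations -- which is exactly the point of implication number 1 above.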

Georgia Tech professor Michael Barnsley originated the theory of fractal decomposition of images in the 1980s. He eventually formed a company, Iterated Systems (of which Ning Lu was Principal Scientist), to monetize the technology, and for a while it looked very much as if the small Georgia company would become the quadruple-platinum tech startup of the Eighties. Despite much excitement over the technology, however, it failed to draw much commercial interest -- in part because of its computationally intensive nature. Decompression was fast, but compressing an image was extremely slow (especially with computers of the era), for reasons that are quite apparent when you read Ning Lu's book.
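To give a feel for where the encoding time goes, here is a rough C sketch of the kind of search a fractal encoder performs (a generic, textbook-style partitioned-IFS search with brute force and a least-squares fit -- not necessarily the exact algorithm Lu describes): for every small "range" block of the image, scan every larger "domain" block, shrink it to range size, and solve for the contrast and brightness that best map one onto the other.

/* Illustrative sketch of why fractal *encoding* is expensive: each
 * range block must be matched against a huge pool of candidate domain
 * blocks.  Block sizes, the brute-force scan, and the least-squares
 * fit below are generic textbook choices. */
#include <math.h>

#define N     256                  /* image is N x N grayscale          */
#define RSIZE 8                    /* range blocks are 8 x 8            */
#define DSIZE 16                   /* domain blocks are 16 x 16         */

typedef struct {
    int dx, dy;                    /* best-matching domain position     */
    double s, o;                   /* contrast (s) and brightness (o)   */
} map_t;

/* Average a 2x2 cell of the domain block so it shrinks to range size. */
static double dom_pixel(const unsigned char img[N][N], int dx, int dy,
                        int i, int j)
{
    return (img[dy + 2*i][dx + 2*j]     + img[dy + 2*i][dx + 2*j + 1] +
            img[dy + 2*i + 1][dx + 2*j] + img[dy + 2*i + 1][dx + 2*j + 1]) / 4.0;
}

/* Least-squares fit of r ~ s*d + o for one range/domain pair; returns
 * the squared error of the best fit. */
static double fit(const unsigned char img[N][N], int rx, int ry,
                  int dx, int dy, double *s, double *o)
{
    const int n = RSIZE * RSIZE;
    double sd = 0, sr = 0, sdd = 0, sdr = 0, err = 0;

    for (int i = 0; i < RSIZE; i++)
        for (int j = 0; j < RSIZE; j++) {
            double d = dom_pixel(img, dx, dy, i, j);
            double r = img[ry + i][rx + j];
            sd += d;  sr += r;  sdd += d * d;  sdr += d * r;
        }
    double den = n * sdd - sd * sd;
    *s = (den != 0.0) ? (n * sdr - sd * sr) / den : 0.0;
    *o = (sr - *s * sd) / n;

    for (int i = 0; i < RSIZE; i++)
        for (int j = 0; j < RSIZE; j++) {
            double e = *s * dom_pixel(img, dx, dy, i, j) + *o
                       - img[ry + i][rx + j];
            err += e * e;
        }
    return err;
}

/* Encode one range block: exhaustively try every domain position.
 * This inner search, repeated for every range block in the image,
 * is where the hours go. */
static map_t encode_range(const unsigned char img[N][N], int rx, int ry)
{
    map_t best = {0, 0, 0.0, 0.0};
    double best_err = INFINITY;

    for (int dy = 0; dy + DSIZE <= N; dy++)
        for (int dx = 0; dx + DSIZE <= N; dx++) {
            double s, o;
            double err = fit(img, rx, ry, dx, dy, &s, &o);
            if (err < best_err) {
                best_err = err;
                best.dx = dx; best.dy = dy; best.s = s; best.o = o;
            }
        }
    return best;
}

Decoding, by comparison, just applies the stored maps to an arbitrary starting image a handful of times, which is why it was fast even on hardware of the era.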

Iterated Systems eventually became a company called MediaBin, Inc., which was ultimately acquired by Interwoven (which, in turn, was recently acquired by Autonomy). The fractal imaging technology is still used in Autonomy's MediaBin Digital Asset Management product, for example in the "similarity search" feature, where you specify (through the DAM client's GUI) a source image and tell the system to find all assets that look like it. Images in the repository have already been decomposed into their fractal primitives. Once the source image's primitives (which, remember, are scale-free) are known, they can be compared against the shard sets of the stored images. When the similarity between shard sets is good enough, an "alike" image is presumed to have been found.
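I don't know the details of how MediaBin actually scores shard-set similarity, but the idea in the paragraph above can be sketched in a few lines of C: treat each shard as a small vector of transform coefficients, measure how close each shard in one image comes to its nearest counterpart in the other, and call two images alike when the averaged distance falls below a threshold. The shard_t layout, the COEFFS count, and the distance measure here are all assumptions made purely for illustration.

/* Purely illustrative sketch (not Autonomy's actual algorithm) of a
 * thresholded comparison between two sets of fractal "shards", each
 * modeled here as a small coefficient vector. */
#include <math.h>
#include <stddef.h>

#define COEFFS 6                       /* coefficients per shard (assumed) */

typedef struct {
    double c[COEFFS];                  /* e.g. affine + contrast/brightness */
} shard_t;

/* Squared Euclidean distance between two shards' coefficient vectors. */
static double shard_dist(const shard_t *a, const shard_t *b)
{
    double d = 0.0;
    for (int k = 0; k < COEFFS; k++) {
        double diff = a->c[k] - b->c[k];
        d += diff * diff;
    }
    return d;
}

/* For every shard in set A, find its nearest shard in set B and
 * average those distances: a crude, one-directional set distance. */
static double set_dist(const shard_t *a, size_t na,
                       const shard_t *b, size_t nb)
{
    double total = 0.0;
    for (size_t i = 0; i < na; i++) {
        double best = INFINITY;
        for (size_t j = 0; j < nb; j++) {
            double d = shard_dist(&a[i], &b[j]);
            if (d < best)
                best = d;
        }
        total += best;
    }
    return total / (double)na;
}

/* Symmetric "similar enough?" test against a tunable threshold. */
static int images_alike(const shard_t *a, size_t na,
                        const shard_t *b, size_t nb, double threshold)
{
    return set_dist(a, na, b, nb) < threshold &&
           set_dist(b, nb, a, na) < threshold;
}

A real system would obviously index the shards rather than compare every pair, but a thresholded set-to-set comparison of this sort is the essence of the "good enough" test described above.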

It's a fascinating technology and one of the great computer-imaging developments of the 20th century, IMHO. If you're into digital imaging, you might want to track down Ning Lu's book. A 40-page sample of it is online here. I say drop what you're doing and check it out.