Friday, November 26, 2010

Why Microsoft Wants Novell's Patents

On Monday, Novell let it be known that it would be acquired by Attachmate Corporation in a deal worth $2.2 billion. Meanwhile, in a Form 8-K filing with the SEC, Novell stated that it "will sell to CPTN all of Novell's right, title and interest in 882 patents ... for $450 million in cash." CPTN Holdings LLC is a consortium of technology companies organized by Microsoft.

Immediately, people began to speculate that the reason Microsoft would bid such an enormous amount of money to obtain Novell's patent portfolio (which, by the way, comes to only 462 issued U.S. Patents; the 882 figure represents applied-for patents as well as issued patents) is to get its hands on the intellectual property around UNIX. (Novell acquired UNIX from AT&T in the 1990s.)

But it now appears that Novell will not be selling UNIX patents as part of the CPTN deal. So the $450 million question is: What, exactly, is Microsoft (via CPTN) paying all that money for?

I'll offer my own speculation. (Disclosure: In 2006 and 2007, I was a member of Novell's Inventions Committee -- the company's internal patent-oversight board. I don't maintain "special connections" with the Committee, however, nor do I pretend to speak for Novell.) If you look at Novell's patent portfolio as a whole -- and in particular, if you look at the bulk of the work done in the past five years -- you can't help but notice that the single largest category of inventions has to do with security.

If you go to the USPTO website and do a search on patents with "security," "trust," or "authentication" in the Abstract, where Novell is the Assignee, you'll come up with 60 hits. The search query I used was:

(((ABST/security OR ABST/authentication) OR ABST/trust) AND AN/Novell)

If you do a search on ABST/encryption, you'll get another 12 hits. That's 72 hits out of 462 granted patents (roughly 16% of the total) having to do with encryption or security.
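
(The corresponding query, in the same syntax, would presumably be (ABST/encryption AND AN/Novell).)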

Microsoft is well aware of its lagging reputation in matters involving security. And the company well knows that the success of its initiatives in cloud computing, collaboration, and social networking will depend, in large measure, on whether it can present a credible security story to customers. There's a lot at stake (to put it mildly). Compared to the size of the cloud computing, collab, and social markets, $450 million is a pittance.

How good are Novell's security patents? That's another question. Many (not all) of them are genuinely clever. Exactly which ones Microsoft has its eye on, though, is a secret probably only a few people in Redmond know.

Sunday, November 21, 2010

Getting Started with Adobe AIR

It seems I'm always late to a good party. Yesterday, I finally did something I've been meaning to do for, oh, at least two years: I compiled and ran my first Adobe AIR application. And in typical masochistic fashion, I decided to do it with Notepad as my code editor and command-line tools for compilation. It's not that I can't afford Dreamweaver or Flash Builder, mind you (I have both products and recommend them highly); it was more a matter of wanting to get dirt under my fingernails, so to speak. That's just how I am.

The whole process of downloading the AIR SDK, reading online code examples, and getting my first example up and running took a little less than an hour from start to finish. There were only a couple of rough spots (both easily resolved). The first was creating my own self-signed security certificate. I did this with the ADT tool that comes with the AIR SDK. The magic command-line incantation that worked for me was:

adt -certificate -cn SelfSign -ou KT -o "Kas Thomas" -c US 2048-RSA cert.p12 password1234

Naturally, you'll want to change some of the parameters (e.g., the ones with my name and initials, and the password) when you do this yourself. But running this command should produce a certificate named cert.p12 on your local drive, assuming adt.bat (Windows) is in your path.

For example code, I turned to the text editor example described here. I compiled the code with:

..\bin\adt -package -storetype pkcs12 -keystore ..\cert.p12 TextEditorHTML.air application.xml .

(running a command console from a location of C:\AIR\TextEditorHTML, with my certificate stored under C:\AIR). The first time I did this, I got an error of "File C:\AIR\TextEditorHTML\application.xml is not a valid AIRI or AIR file." If you get the "is not a valid AIRI or AIR file" error, it means you left the trailing period off the foregoing command line. (Note carefully the period after "application.xml" at the very end.)
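
Incidentally, the application.xml referenced above is the AIR application descriptor: a short XML file giving the app's ID, version, and initial window content. A minimal sketch (the element values here are placeholders, and the namespace version should match the SDK you downloaded) looks something like this:

<?xml version="1.0" encoding="utf-8"?>
<application xmlns="http://ns.adobe.com/air/application/2.0">
    <id>com.example.TextEditorHTML</id>
    <filename>TextEditorHTML</filename>
    <version>1.0</version>
    <initialWindow>
        <content>TextEditor.html</content>
        <visible>true</visible>
    </initialWindow>
</application>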

And that was basically it. My first AIR app: done in under an hour. Now, as Shantanu Narayen says, "let the games begin!"

Thursday, November 18, 2010

The Strength of Weak Ties


Hydrogen bonds (dotted lines) are only about 5% as strong as covalent bonds (solid lines).


Last Saturday, there was a fascinating discussion on Twitter about the power of weak connections. It was a real-time Tweetup held under the banner of #ideachat, the latter being a monthly Twitter Chat focused on the process of ideation, held every second Saturday of the month at 9:00 a.m. EST. (Ideachat bills itself as "a Salon for Twitter Thinkers About Ideas." It was founded by Angela Dunn, Idea Designer and Digital Consultant, aka @blogbrevity.)

The discussion was loosely grounded in the work of Mark S. Granovetter, whose 1973 paper "The Strength of Weak Ties" (American Journal of Sociology, May 1973, pp. 1360-1380) is one of the most widely cited papers in sociology. (See also Granovetter's 1983 followup paper in Sociological Theory, "The Strength of Weak Ties: A Network Theory Revisited.")

I won't try to recap the whole discussion here, since you can read the full transcript online elsewhere. Suffice it to say that in little more than an hour, 92 people contributed 695 tweets on the subject of how weak ties contribute to the spread of ideas in social networks. The discussion seemed particularly apropos given that almost none of the discussants knew each other except through the casual, transient contact afforded by Twitter and TweetChat (the tool used by most participants in the discussion).

My main contribution to the discussion was to draw a parallel between weak social ties and the physical chemistry of hydrogen bonding. I pointed out that in chemistry, weak links (viz., hydrogen bonds) are responsible for much of what makes biomolecule behavior interesting. It's a hard point to try to make in 140 characters or less. But it's worth spending a minute thinking about.

In chemistry, there are several types of chemical bond. The strongest type is the covalent bond: This is the kind of bond that connects the various atoms in a molecule (such as the hydrogens to the oxygen in water). About 5% as strong as the covalent bond is the hydrogen bond, which represents the weak electrostatic pull between electron-rich atoms and electron-poor atoms of different molecules. About an order of magnitude weaker still is the van der Waals force between atoms. Hydrogen bonds and van der Waals interactions are transient in nature, whereas covalent bonds are (for all intents) permanent, or at least long-lasting.

It turns out that a lot of interesting chemical behavior arises from the short-lived weak interactions that go under the name of hydrogen bonding. Surface tension arises from it. Protein folding happens the way it does because of hydrogen bonding. The stickiness of adhesives is due to hydrogen bonding. (Epoxy, on the other hand, owes its strength to covalent bonds.)

At one point in the #ideachat session, I asked (rhetorically) which is more useful, Scotch tape or Krazy-Glue? Someone later suggested a better analogy would have been duct tape, or even PostIt notes (which famously rely on an adhesive that is almost -- but not quite -- ineffective). You can do a lot of useful things with Krazy-Glue (which relies on covalent bonds to get the job done), but I can think of at least 100 times more things you can do with duct tape. Tape is vastly more versatile, even though the mechanism by which its adhesive works is fundamentally at least 20 times weaker than the mechanism behind Krazy-Glue.

In the same way, I tend to think that the weak ties engendered by things like Twitter produce, in the aggregate, effects that are surprisingly far-reaching -- causing many tipping points to be reached long before they otherwise would be.

Whether you agree with my physical-chemistry analogies or not, I encourage you to take part in the next #ideachat, which is scheduled to happen on the eleventh of December at 9:00 a.m. Eastern U.S. time. Mark your calendar. I'll see you there.

Monday, October 18, 2010

First impressions of Acrobat X

For the past several months, I've been privileged to test Acrobat 10, which has now been released as the Acrobat X family. Now that I'm finally at liberty to discuss actual features in detail, I can give some impressions of the software. Overall, the news is good. Very good.

The biggest news is that by virtue of a serious UI makeover, Acrobat has gotten much easier to use; it no longer feels quite so heavy and monolithic. Adobe has done an excellent job of moving little-used commands out of view while putting more-frequently-used tools and commands in logical places (and letting the user configure toolbars as needed). There are now only 5 main menus instead of 10, for example. The product has scored a gigantic (and much needed) usability win, as far as I'm concerned.

The Save as Word functionality has undergone a significant, long-overdue improvement in quality.

Forms creation/editing is easier, thanks to the aforementioned UI overhaul. I'm getting things done in fewer clicks now. For heavier-duty form-design tasks, Acrobat Pro and higher (on Windows) will ship with LiveCycle Designer ES2. I'm of two minds about that. On the plus side, LiveCycle Designer offers superior forms-creation tools and comes with a nice assortment of prebuilt templates. As form designers go, LiveCycle's tooling is right up there with the best of the best. On the down side, forms you create with LiveCycle are (as before) not editable using the standard form-design tools of Acrobat. So you're stuck either in LiveCycle Designer mode or Acrobat-native form-design mode. And LiveCycle Designer makes it very hard to add scripts to form elements. I haven't tested the most recent Designer, but the version that shipped with Acrobat 9 has not proven (in my experience, at least) to be very stable, and on the whole, I remain somewhat disappointed with the relatively primitive integration between Acrobat and LiveCycle Designer. The sooner Adobe can make LiveCycle forms compatible with Acrobat, the better.

Acrobat X introduces the notion of Actions: a way to standardize processes in an organization or department by combining multiple tasks into a single Action that can run on one or many files and be invoked with a single click. Users can author a new Action through File > PDF Actions > Create.

Enterprise customers of Acrobat X will no doubt laud the product's integration with SharePoint:

  • You can open files hosted on SharePoint from Acrobat or Reader's Open dialog by browsing to a mapped drive or a WebFolder under "My Network Places".
  • When a PDF is opened from SharePoint, you can check that PDF in and out, as in Office, via an option in the File menu.
  • SharePoint is accessible from all of Acrobat or Reader's Open and Save dialogs: e.g., if there’s a dialog that prompts you to browse for a file, you can browse to a SharePoint hosted file just like a local file. And if there’s a dialog that prompts you to save a file, you can save to SharePoint just like you can save to your local drive.
  • If the SharePoint system requires that version information be specified when the user checks a PDF into SharePoint, Acrobat/Reader will prompt the user to provide that information.
The ability to save search results in PDF and CSV file formats is a nice plus, as is the new .xlsx export functionality.

Adobe Reader has been enhanced with the ability to create sticky notes and highlight text on PDF documents. Also, the Adobe Reader browser plug-in is now a 32/64-bit universal plug-in which supports Safari running either 64-bit (default) or 32-bit.

What's missing from Acrobat X? The JavaScript API still offers no Selection API. (I blogged about this before.) Also, the Net.HTTP API remains a disappointment: It's possible to do AJAX-like (asynchronous) POSTs programmatically, in JavaScript, but only from an application-scoped script (a so-called "folder-level" script), not a document-level script. And I couldn't get HTTP GET operations to work at all.

But overall, my quibbles with Acro X are few. On the whole, I think it's the best major new release of Acrobat to happen in many years, and customers should be quite happy with it.

Saturday, October 02, 2010

Google's WebP image format




Google has announced a new image format for the web, called WebP. Its advantage over JPEG? Better compression, of course. The above graph shows results obtained when compressing approximately "1 million images randomly sampled from a repository of images crawled from the web." Google's comparative study of WebP, JPEG 2000, and Re-JPEG can be found here. An image gallery is here.

According to Google, "WebP typically achieves an average of 39% more compression than JPEG and JPEG 2000, without loss of image quality."

Converter code is available on the downloads section of the WebP open-source project page.

The WebP team is reportedly developing a patch to WebKit to provide native support for WebP in an upcoming release of Google Chrome. It's anyone's guess as to when (or whether) the format will be supported by other browsers, but it seems likely that Firefox, Opera, and Safari will follow suit. The question, of course, is what happens if Internet Explorer ignores the format altogether (as seems likely)? The answer, I think, is that IE continues on its trajectory of becoming less relevant by the day.

Wednesday, September 22, 2010

PaintbrushJS: A lightweight image-processing library

Developer Dave Shea has released PaintbrushJS, a lightweight image processing library that can apply a variety of filters to images on a web page.

Under the covers, PaintbrushJS uses the HTML5 canvas tag to implement its effects, automatically inserting canvas tags based on class names. You can choose effects and control their parameters by adding attributes to various tags. For example:

<img src="jordan.jpg"
width="200" height="133"
class="filter-blur"
data-pb-blur-amount="5">

PaintbrushJS works in any modern browser — which means IE 8 and below won’t see the effects.

For a full list of effects available, check out the documentation or head over to the demo page.

Tuesday, September 14, 2010

Adobe Developer Connection revamped

The Adobe Developer Connection (ADC) website is now live on Day Communiqué 5.

For additional details on the ADC launch, see the informative blog post by Adobe's Craig Goodman.

Wednesday, August 25, 2010

Common User-Agent Strings

Today's post is more of a note-to-self than anything else. I'm always trying to remember how the various browsers identify themselves to servers. The attendant user-agent strings are impossible to carry around in one's head, so I'm setting them down here for future reference.

To get the user-agent strings for five popular browsers (plus Acrobat), I created a script (an EcmaScript server page) in my Sling repository that contains the line:

<%= sling.getRequest().getHeader("User-agent") %>

This line simply spits back the user-agent string for the requesting browser (obviously). The results for the browsers I happen to have on my local machine are as follows:

Firefox 3.6:
Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6

Chrome 5.0.375:
Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.38 Safari/533.4

IE7.0.6:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.04506)

Safari 5.0.1:
Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.17.8 (KHTML, like Gecko) Version/5.0.1 Safari/533.17.8

Opera 10.61:
Opera/9.80 (Windows NT 6.0; U; en) Presto/2.6.30 Version/10.61

Acrobat 9.0.0 Pro Extended:
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/523.15 (KHTML, like Gecko) Version/3.0 Safari/523.15

Interestingly, Acrobat seems to spoof a Safari signature.

If you want to perform this test yourself right now, using your present browser, simply aim your browser at http://whatsmyuseragent.com/, and you'll get a complete report on the full header sent by your browser to the server.

Tuesday, August 24, 2010

User-Agent Strings

This post has moved to: http://asserttrue.blogspot.com/2010/08/common-user-agent-strings_25.html. Please forgive the inconvenience.

Sunday, August 22, 2010

Looking back on the Aldus-Adobe deal

There's a terrific interview with Paul Brainerd about the history of Aldus Corporation over at computerhistory.org. In it, Brainerd comments on the why and how of Aldus's eventual acquisition by Adobe (a subject of considerable interest to me, since the company I work for -- Day Software -- has just been acquired by Adobe). Of the acquisition, Brainerd says:
At a 30,000 foot level, we had similar approaches to running a company. But at a working level, there were some very definite philosophical differences.

There was a definite difference in the customer orientation. We spent a lot more time talking to customers. Adobe's philosophy was more of an engineering-based one: if we make a great product, like PostScript, sooner or later people will want it.

But the reason I even considered Adobe was their underlying ethical standard of running a high-quality company that was fair to their customers and their employees. Unfortunately, that couldn't be said of all the companies in the industry.

A lot of thought went into the merger, and I think it was one of the best.

Hopefully, we'll all be saying much the same thing about the Day-Adobe deal years from now.

Friday, August 20, 2010

Day Software Developer Training: Days Two and Three

I made it through Days Two and Three of developer training here at Day's Boston office. Under the expert tutelage of Kathy Nelson, the eight of us in the class got a solid grounding in:
  • Apache Sling and how it carries out script resolution. (For this, we used my August 16 blog post as a handout.)
  • Modularizing components and allowing for their reuse.
  • Enabling various WCM tools, such as CQ's Sidekick, that help web authors create and edit web pages.
  • Creating a Designer to provide a consistent look and feel to a website, and using a common CSS file.
  • Creating a navigation component to provide dynamic navigation to all pages as they are added or removed by authors.
  • Adding log messages to .jsp scripts, and using the CRXDE debugger.
That was Day Two. On Day Three we focused on:
  • Creating components to display a customizable page title, logo, breadcrumbs, and configurable paragraph.
  • Creating and adding a complex component (containing text and images) to implement bespoke functionality.
  • Adding a Search component. (We saw 3 different ways to do this.)
  • Internationalization, so that dialogs displayed to web authors can be displayed in one of the 7 languages supported out-of-the-box by Day Communiqué.
By the end of the third day, we had written hundreds of lines of JSP and manually created scores upon scores of custom nodes and properties in the repository.

Still to come: Creating and consuming custom OSGi bundles; workflow; and performance optimization tools.

I can't wait!

Wednesday, August 18, 2010

Day Software Developer Training: Day One

Yesterday, I made it through Day One of developer training at Day Software's Boston office. It was an interesting experience.

There are eight of us in the class. Interestingly, two of the eight enrollees have little or no Java experience (one is not a developer); most of the rest have varied J2EE backgrounds. All are (as you'd expect) relatively new Day customers. One is from an organization that is trying to migrate away from Serena Collage. The organization in question chose Day over Ektron partly on the basis of the flexibility afforded by Day's Java Content Repository architecture, which is relatively forgiving when it comes to making ad hoc changes to the content model over time. (We spent a fair amount of time discussing David Nüscheler's Seven Rules for Content Modeling.)

We spent much of the morning talking about architecture, standards, and the Day technology stack, which is built on OSGi, JCR (JSR-283), Apache Jackrabbit, and Apache Sling. Surprisingly (to me), OSGi was an unfamiliar topic to a number of people. The fact that bundles could be started and stopped without taking the server down was, for example, a new concept for some.

All of us were given USB memory sticks containing the Day Communiqué distribution (and a training license), and we were asked to install the product locally from the flash drive. A couple of people had trouble getting the product to launch (they received the dreaded "Server not ready, browser not launched" message). In one case, it was a firewall issue that was easily resolved. In another case, someone was using Java 1.3 (the product requires 1.5, minimum). A third person had trouble getting WebDAV to work on Windows 7. I noticed, in general, that the people with the fewest problems (all the way through the class) were using Macs.

We were shown how to access the CQ Servlet Engine administration console, the CRX launchpad UI and Content Explorer, and the Apache Felix (OSGi) console, as well as the CRXDE Lite integrated development environment -- a very nice browser-based IDE for doing repository administration and JSP development, among other tasks.

We were also shown how to (and in fact we did) set up author and publish instances of CQ on our local drives, and replicate content back and forth between them.

In the afternoon, we did a variety of hands-on exercises designed to show how to create and manipulate nodes and properties in the repository; how to create folder structures; how to create templates; and finally, how to create components and Pages. (At last, we got our hands dirty with JSPs.)

Some students had trouble getting used to the fact that in JCR, everything is either a node or a property. "Folders" in the repository, for example, are actually nodes of type nt:folder. If you use WebDAV to drag and drop a file into a folder, the file becomes a node of type nt:file and the content of the file is now under a jcr:content node with a jcr:data property holding the actual content. It requires a new way of thinking. But once you get the hang of it, it's not hard at all.
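
To make that concrete, here's a minimal sketch in Java, using the standard JCR 2.0 API (the path and file name are hypothetical), of how you'd read back the content of a file that was dropped into the repository over WebDAV:

import java.io.InputStream;
import javax.jcr.Node;
import javax.jcr.Session;

public class FileNodeReader {

    // Reads the binary content of an nt:file node -- e.g., one created
    // by dragging report.txt into /myfolder via WebDAV.
    public static InputStream readFileContent( Session session, String path )
            throws Exception {
        Node file = session.getNode( path );            // the nt:file node
        Node content = file.getNode( "jcr:content" );   // its jcr:content child
        return content.getProperty( "jcr:data" )        // the actual bytes
                      .getBinary().getStream();
    }
}

You'd call it as FileNodeReader.readFileContent( session, "/myfolder/report.txt" ).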

Day Two promises to be interesting as we take a closer look at Sling, URL decomposition and script resolution, and component hierarchies. Hopefully, we'll get even more JSP under our fingernails!

Monday, August 16, 2010

Understanding how script URLs are resolved in Sling

One of the things that gives Apache Sling a great deal of power and flexibility is the way it resolves script URLs. Consider a request for the URL

/content/corporate/jobs/developer.html

First, Sling will look in the repository for a file at exactly this location. If such a file is found, it will be streamed out as is. But if there is no file to be found, Sling will look for a repository node located at:

/content/corporate/jobs/developer

(and will return 404 if no such node exists). If the node is found, Sling then looks for a special property on that node named "sling:resourceType," which (if present) determines the resource type for that node. Sling will look under /apps (then /lib) to find a script that applies to the resource type. Let's consider a very simple example. Suppose that the resource type for the above node is "hr/job." In that case, Sling will look for a script called /apps/hr/job/job.jsp or /apps/hr/job/job.esp. (The .esp extension is for ECMAScript server pages.) However, if such a script doesn't exist, Sling will then look for /apps/hr/job/GET.jsp (or .esp) to service the GET request. Sling will also count /apps/hr/job/html.jsp (or .esp) as a match, if it finds it.
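
To make this concrete, a minimal /apps/hr/job/job.jsp might look like the sketch below (the jobTitle property is hypothetical; the sling:defineObjects tag is what exposes currentNode to the script):

<%@taglib prefix="sling" uri="http://sling.apache.org/taglibs/sling/1.0" %>
<sling:defineObjects/>
<html><body>
<h1><%= currentNode.getProperty("jobTitle").getString() %></h1>
</body></html>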

Where things get interesting is when selectors are used in the target path. In content-centric applications, the same content (the same JCR nodes, in Sling) must often be displayed in different variants (e.g., as a teaser view versus a detail view). This can be accomplished through extra name steps called "selectors." For example:

/content/corporate/jobs/developer.detail.html

In this case, .detail is a selector. Sling will look for a script at /apps/hr/job/job.detail.esp. But /apps/hr/job/job.detail.html.esp will also work.

It's possible to use multiple selectors in a resource URL. For example, consider:

/content/corporate/jobs/developer.print.a4.html

In this case, there are two selectors (.print and .a4) as well as a file extension (html). How does Sling know where to start looking for a matching script? Well, it turns out that if a file called a4.html.jsp exists under a path of /apps/hr/job/print/, it will be chosen before any other scripts that might match. If such a file doesn't exist but there happens to be a file, html.jsp, under /apps/hr/job/print/a4/, that file would be chosen next.

Assuming all of the following scripts exist in the proper locations, they would be accessed in the order of preference shown:

/apps/hr/job/print/a4.html.jsp
/apps/hr/job/print/a4/html.jsp
/apps/hr/job/print/a4.jsp
/apps/hr/job/print.html.jsp
/apps/hr/job/print.jsp
/apps/hr/job/html.jsp
/apps/hr/job/job.jsp
/apps/hr/job/GET.jsp

This precedence order is somewhat at odds with the example given in SLING-387. In particular, a script named print.a4.GET.html.jsp never gets chosen (nor does print.a4.html.jsp). Whether this is by design or constitutes a bug has yet to be determined. But in any case, the above precedence behavior has been verified.

For more information on Sling script resolution, be sure to consult the (excellent) Sling Cheat Sheet as well as Michael Marth's previous post on this topic. (Many thanks to Robin Bussell at Day Software for pointing out the correct script precedence order.)



Thursday, August 12, 2010

JSOP: An idea whose time has come

The w3c-dist-auth@w3.org list today received an interesting proposal for a new protocol, tentatively dubbed JSOP by its authors (David Nüscheler and Julian Reschke of Day Software). As the name hints, JSOP would be based on JSON and would be a RESTful protocol designed to facilitate the exchange of fine-grained information between browsers and (repository-based) server apps. As such, it's one of the first proposals (maybe the first?) to make extensive use of HTTP's new PATCH verb.

Why does the world need JSOP? "For the past number of years I always found myself in the situations where I wanted to exchange fine-grained information between a typical current browser and a server that persists the information," explains David Nüscheler. "In most cases for me the server obviously was a Content Repository, but I think the problem set is more general and applies to any web application that manages and displays data or information. It seemed that every developer would come up with an ad-hoc solution to that very same problem of reading or writing fine-grained data at a more granular level than a resource."

For example, what if you want to modify not just a resource but certain properties of the resource? WebDAV is often an answer in such situations (or you might be thinking AtomPub in the case of CMIS), but the fact is, it can take a lot of effort -- too much effort, some would say -- to achieve your goals using WebDAV, and in the end, HTML forms have no native understanding of property-based operations. As Nüscheler puts it, WebDAV and AtomPub "are not very browser-friendly, meaning that it takes a modern browser and a lot of patience with JavaScript to get to a point where one can interact with a server using either of the two."

So in other words, something as simple as setting or getting attributes on a folder shouldn't take a lot of hoop-jumping. You should be able to do things like:

Request:
GET /myfolder.json HTTP/1.1

Response:
{
    "createdBy" : "uncled",
    "name" : "myfolder",
    "id" : "50d9317a-3a95-401a-9638-333a0dbf04bb",
    "type" : "folder"
}

or:

Request:
GET /myfolder.4.json HTTP/1.1

Response:
{
    "createdBy" : "uncled",
    "name" : "myfolder",
    "id" : "50d9317a-3a95-401a-9638-333a0dbf04bb",
    "type" : "folder",
    "child1" :
    {
        "grandchild11" :
        {
            "depth3" :
            {
                "depth4" : { ... }
            }
        }
    }
}

In the above example (with nested folders), the GET is on a URL of /myfolder.4.json; the '.4.json' suffix tells the server to return children four levels deep.

Suppose you want to create a new document under /myfolder, delete an old document, move a doc, and update an attribute on the folder -- all in one operation. With JSOP, you could do something like:

PATCH /myfolder HTTP/1.1

+newdoc : { "type" : "document", "createdBy" : "me" }
-olddoc
>movingdoc : /otherfolder/mydocument
^lastModifiedBy : "me"

where + means to create a node/property/resource, - means delete, > means move, and ^ means update.

JSOP proposes not only to be JavaScript-friendly but forms-friendly. So for example, imagine that you want to upload a .gif image and update its metadata at the same time, using an HTML form. Under the Reschke/Nüscheler proposal, you could accomplish this with a form POST:

POST /myfolder/my.gif HTTP/1.1
Content-Type: multipart/form-data;
boundary=---------21447684891610979728262467120
Content-Length: 123
---------21447684891610979728262467120
Content-Disposition: form-data; name="data"
Content-Type: image/gif
GIF89a...................!.......,............s...f.;
---------21447684891610979728262467120
Content-Disposition: form-data; name="jsop:diff"
Content-Type: text/plain
^lastModifiedBy : "me"
+exif { cameraMake : "Apple", cameraModel : "Apple" }
---------21447684891610979728262467120--

Bottom line, JSOP promises to provide an easy, RESTful, forms-friendly, JavaScript-friendly way to do things that are possible (but not necessarily easy) right now with WebDAV or AtomPub. It should make working with repositories a snap for mere mortals who don't have time to master the vagaries of things like CMIS or WebDAV. In my opinion, it's a much-needed proposal. Here's hoping it becomes a full-fledged IETF RFC soon.

Tuesday, August 10, 2010

Skype heads for IPO of the century

Skype has made its filing with the SEC, ahead of what will no doubt be the biggest IPO of the century. Interesting tidbits from the filing:
  • Skype's (top-line) run rate is $812 million per year
  • 28 percent of total Internet users have signed up with Skype (506 million people)
  • 40 percent of calls are video-chat
  • 6 percent of users pay
  • Adjusted EBITDA for the first half of 2010 was $115.7 million, up 54 percent from a year ago
  • The company has $85 million in cash
Add it all up and what do you get? Nothing less than the dial tone of the 21st century, I'd say.

Saturday, August 07, 2010

A "Smart Sobel" image filter


The original image ("Lena"), left, and the same image transformed via Smart Sobel (right).

Last time, I talked about how to implement Smart Blur. The latter gets its "smartness" from the fact that the blur effect is applied preferentially to less-noisy parts of the image. The same tactic can be used with other filter effects as well. Take the Sobel kernel, for example:

float [] kernel = {
    2,  1,  0,
    1,  0, -1,
    0, -1, -2
};

Convolving an image with this kernel tends to produce an image in which edges (only) have been preserved, in rather harsh fashion, as seen here:


Ordinary Sobel transformation produces a rather harsh result.

This is an effect whose harshness begs to be tamed by the "smart" approach. With a "smart Sobel" filter, we would apply maximum Sobel effect to the least-noisy parts of the image and no Sobel filtering to the "busiest" parts of the image, and interpolate between the two extremes for other parts of the image.

That's easy to do with just some trivial modifications to the Smart Blur code I gave last time. Without further ado, here is the code for the Smart Sobel filter:

import java.awt.image.Kernel;
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.Graphics;

public class SmartSobelFilter {

    double SENSITIVITY = 21;
    int REGION_SIZE = 5;

    float [] kernelArray = {
        2,  1,  0,
        1,  0, -1,
        0, -1, -2
    };

    Kernel kernel = new Kernel( 3, 3, kernelArray );

    float [] normalizeKernel( float [] ar ) {
        int n = 0;
        for ( int i = 0; i < ar.length; i++ )
            n += ar[i];
        for ( int i = 0; i < ar.length; i++ )
            ar[i] /= n;

        return ar;
    }

    public double lerp( double a, double b, double amt ) {
        return a + amt * ( b - a );
    }

    public double getLerpAmount( double a, double cutoff ) {

        if ( a > cutoff )
            return 1.0;

        return a / cutoff;
    }

    public double rmsError( int [] pixels ) {

        double ave = 0;

        for ( int i = 0; i < pixels.length; i++ )
            ave += ( pixels[ i ] >> 8 ) & 255;

        ave /= pixels.length;

        double diff = 0;
        double accumulator = 0;

        for ( int i = 0; i < pixels.length; i++ ) {
            diff = ( ( pixels[ i ] >> 8 ) & 255 ) - ave;
            diff *= diff;
            accumulator += diff;
        }

        double rms = accumulator / pixels.length;

        rms = Math.sqrt( rms );

        return rms;
    }

    int [] getSample( BufferedImage image, int x, int y, int size ) {

        int [] pixels = {};

        try {
            BufferedImage subimage = image.getSubimage( x, y, size, size );
            pixels = subimage.getRGB( 0, 0, size, size, null, 0, size );
        }
        catch ( Exception e ) {
            // will arrive here if we requested
            // pixels outside the image bounds
        }
        return pixels;
    }

    int lerpPixel( int oldpixel, int newpixel, double amt ) {

        int oldRed = ( oldpixel >> 16 ) & 255;
        int newRed = ( newpixel >> 16 ) & 255;
        int red = (int) lerp( (double) oldRed, (double) newRed, amt ) & 255;

        int oldGreen = ( oldpixel >> 8 ) & 255;
        int newGreen = ( newpixel >> 8 ) & 255;
        int green = (int) lerp( (double) oldGreen, (double) newGreen, amt ) & 255;

        int oldBlue = oldpixel & 255;
        int newBlue = newpixel & 255;
        int blue = (int) lerp( (double) oldBlue, (double) newBlue, amt ) & 255;

        return ( red << 16 ) | ( green << 8 ) | blue;
    }

    int [] blurImage( BufferedImage image,
                      int [] orig, int [] blur, double sensitivity ) {

        int newPixel = 0;
        double amt = 0;
        int size = REGION_SIZE;

        for ( int i = 0; i < orig.length; i++ ) {
            int w = image.getWidth();
            int [] pix = getSample( image, i % w, i / w, size );
            if ( pix.length == 0 )
                continue;

            amt = getLerpAmount( rmsError( pix ), sensitivity );
            newPixel = lerpPixel( blur[ i ], orig[ i ], amt );
            orig[ i ] = newPixel;
        }

        return orig;
    }

    public void invert( int [] pixels ) {
        for ( int i = 0; i < pixels.length; i++ )
            pixels[i] = ~pixels[i];
    }

    public BufferedImage filter( BufferedImage image ) {

        ConvolveOp convolver = new ConvolveOp( kernel,
            ConvolveOp.EDGE_NO_OP, null );

        // clone image into target
        BufferedImage target = new BufferedImage( image.getWidth(),
            image.getHeight(), image.getType() );
        Graphics g = target.createGraphics();
        g.drawImage( image, 0, 0, null );
        g.dispose();

        int w = target.getWidth();
        int h = target.getHeight();

        // get source pixels
        int [] pixels = image.getRGB( 0, 0, w, h, null, 0, w );

        // convolve the cloned image with the Sobel kernel
        target = convolver.filter( target, image );

        // get the convolved pixels and invert them
        int [] blurryPixels = target.getRGB( 0, 0, w, h, null, 0, w );
        invert( blurryPixels );

        // go thru the image and interpolate values
        pixels = blurImage( image, pixels, blurryPixels, SENSITIVITY );

        // replace original pixels with new ones
        image.setRGB( 0, 0, w, h, pixels, 0, w );
        return image;
    }
}

To use the filter, instantiate it and then call the filter() method, passing a java.awt.image.BufferedImage. The method returns a transformed BufferedImage.
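
For example, a minimal sketch (the file names are placeholders, and the usual javax.imageio.ImageIO and java.io.File imports apply):

BufferedImage image = ImageIO.read( new File( "lena.png" ) );
SmartSobelFilter sobel = new SmartSobelFilter( );
BufferedImage result = sobel.filter( image );
ImageIO.write( result, "png", new File( "lena-sobel.png" ) );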

There are two knobs to tweak: SENSITIVITY and REGION_SIZE. The former affects how much interpolation happens between native pixels and transformed pixels; a larger value means a more extreme Sobel effect. The latter is the size of the "neighboring region" that will be analyzed for noisiness as we step through the image pixel by pixel. This parameter affects how "blocky" the final image looks.

Ideas for further development:
  • Develop a "Smart Sharpen" filter
  • Combine with a displacement filter for paintbrush effects
  • Overlay (combine) the same image with copies of itself, transformed with various values for SENSITIVITY and REGION_SIZE, to reduce "blockiness"

Tuesday, August 03, 2010

Implementing Smart Blur in Java


Original image.


Image with Smart Blur applied. Notice that outlines are preserved, even where the oranges overlap.


One of my favorite Photoshop effects is Smart Blur, which provides a seemingly effortless way to smooth out JPEG artifacts, remove blemishes from skin in photographs of people, etc. Its utility lies in the fact that despite the considerable blurriness it imparts to many regions of an image, it preserves outlines and fine details (the more important parts of an image, usually). Thus it gives the effect of magically blurring only those parts of the image that you want to be blurred.

The key to how Smart Blur works is that it preferentially blurs parts of an image that are sparse in detail (rich in low-frequency information) while leaving untouched the parts of the image that are comparatively rich in detail (rich in high-frequency information). Abrupt transitions in tone are ignored; areas of subtle change are smoothed (and thus made even more subtle).

The algorithm is quite straightforward:

1. March through the image pixel by pixel.
2. For each pixel, analyze an adjacent region (say, the adjoining 5 pixel by 5 pixel square).
3. Calculate some metric of pixel variance for that region.
4. Compare the variance to some predetermined threshold value.
5. If the variance exceeds the threshold, do nothing.
6. If the variance is less than the threshold, apply blurring to the source pixel. But vary the amount of blurring according to the variance: low variance, more blurring (high variance, less blurring).

In the implementation presented below, I start by cloning the current image and massively blurring the entire (cloned) image. Then I march through the pixels of the original image and begin doing the region-by-region analysis. When I need to apply blurring, I derive the new pixel by linear interpolation between original and cloned-image pixels.

So the first thing we need is a routine for linear interpolation between two values; and a corresponding routine for linear interpolation between two pixel values.

Linear interpolation is easy:

public double lerp( double a, double b, double amt ) {
    return a + amt * ( b - a );
}

Linear interpolation between pixels is tedious-looking but straightforward:

int lerpPixel( int oldpixel, int newpixel, double amt ) {

    int oldRed = ( oldpixel >> 16 ) & 255;
    int newRed = ( newpixel >> 16 ) & 255;
    int red = (int) lerp( (double) oldRed, (double) newRed, amt ) & 255;

    int oldGreen = ( oldpixel >> 8 ) & 255;
    int newGreen = ( newpixel >> 8 ) & 255;
    int green = (int) lerp( (double) oldGreen, (double) newGreen, amt ) & 255;

    int oldBlue = oldpixel & 255;
    int newBlue = newpixel & 255;
    int blue = (int) lerp( (double) oldBlue, (double) newBlue, amt ) & 255;

    return ( red << 16 ) | ( green << 8 ) | blue;
}

Another essential ingredient is a routine for analyzing the pixel variance in a region. For this, I use root-mean-square error:

public double rmsError( int [] pixels ) {

    double ave = 0;

    for ( int i = 0; i < pixels.length; i++ )
        ave += ( pixels[ i ] >> 8 ) & 255;

    ave /= pixels.length;

    double diff = 0;
    double accumulator = 0;

    for ( int i = 0; i < pixels.length; i++ ) {
        diff = ( ( pixels[ i ] >> 8 ) & 255 ) - ave;
        diff *= diff;
        accumulator += diff;
    }

    double rms = accumulator / pixels.length;

    rms = Math.sqrt( rms );

    return rms;
}

Before we transform the image, we should have code that opens an image and displays it in a JFrame. The following code does that. It takes the image whose path is supplied in a command-line argument, opens it, and displays it in a JComponent inside a JFrame:

import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.swing.JComponent;
import javax.swing.JFrame;

public class ImageWindow {

    // This inner class is our canvas.
    // We draw the image on it.
    class ImagePanel extends JComponent {

        BufferedImage theImage = null;

        ImagePanel( BufferedImage image ) {
            super();
            theImage = image;
        }

        public BufferedImage getImage( ) {
            return theImage;
        }

        public void setImage( BufferedImage image ) {
            theImage = image;
            this.updatePanel();
        }

        public void updatePanel() {

            invalidate();
            getParent().doLayout();
            repaint();
        }

        public void paintComponent( Graphics g ) {

            int w = theImage.getWidth( );
            int h = theImage.getHeight( );

            g.drawImage( theImage, 0, 0, w, h, this );
        }
    } // end ImagePanel inner class

    // Constructor
    public ImageWindow( String [] args ) {

        // open image
        BufferedImage image = openImageFile( args[0] );

        // create a panel for it
        ImagePanel theImagePanel = new ImagePanel( image );

        // display the panel in a JFrame
        createWindowForPanel( theImagePanel, args[0] );

        // filter the image
        filterImage( theImagePanel );
    }

    public void filterImage( ImagePanel panel ) {

        SmartBlurFilter filter = new SmartBlurFilter( );

        BufferedImage newImage = filter.filter( panel.getImage( ) );

        panel.setImage( newImage );
    }

    public void createWindowForPanel( ImagePanel theImagePanel, String name ) {

        BufferedImage image = theImagePanel.getImage();
        JFrame mainFrame = new JFrame();
        mainFrame.setTitle( name );
        mainFrame.setBounds( 50, 80, image.getWidth( ) + 10, image.getHeight( ) + 10 );
        mainFrame.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
        mainFrame.getContentPane().add( theImagePanel );
        mainFrame.setVisible( true );
    }

    BufferedImage openImageFile( String fname ) {

        BufferedImage img = null;

        try {
            File f = new File( fname );
            if ( f.exists( ) )
                img = ImageIO.read( f );
        }
        catch ( Exception e ) {
            e.printStackTrace();
        }

        return img;
    }

    public static void main( String[] args ) {

        new ImageWindow( args );
    }
}


Note the method filterImage(), where we instantiate a SmartBlurFilter. Without further ado, here's the full code for SmartBlurFilter:

import java.awt.image.Kernel;
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.Graphics;

public class SmartBlurFilter {

    double SENSITIVITY = 10;
    int REGION_SIZE = 5;

    float [] kernelArray = {
        1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1
    };

    Kernel kernel = new Kernel( 9, 9, normalizeKernel( kernelArray ) );

    float [] normalizeKernel( float [] ar ) {
        int n = 0;
        for ( int i = 0; i < ar.length; i++ )
            n += ar[i];
        for ( int i = 0; i < ar.length; i++ )
            ar[i] /= n;

        return ar;
    }

    public double lerp( double a, double b, double amt ) {
        return a + amt * ( b - a );
    }

    public double getLerpAmount( double a, double cutoff ) {

        if ( a > cutoff )
            return 1.0;

        return a / cutoff;
    }

    public double rmsError( int [] pixels ) {

        double ave = 0;

        for ( int i = 0; i < pixels.length; i++ )
            ave += ( pixels[ i ] >> 8 ) & 255;

        ave /= pixels.length;

        double diff = 0;
        double accumulator = 0;

        for ( int i = 0; i < pixels.length; i++ ) {
            diff = ( ( pixels[ i ] >> 8 ) & 255 ) - ave;
            diff *= diff;
            accumulator += diff;
        }

        double rms = accumulator / pixels.length;

        rms = Math.sqrt( rms );

        return rms;
    }

    int [] getSample( BufferedImage image, int x, int y, int size ) {

        int [] pixels = {};

        try {
            BufferedImage subimage = image.getSubimage( x, y, size, size );
            pixels = subimage.getRGB( 0, 0, size, size, null, 0, size );
        }
        catch ( Exception e ) {
            // will arrive here if we requested
            // pixels outside the image bounds
        }
        return pixels;
    }

    int lerpPixel( int oldpixel, int newpixel, double amt ) {

        int oldRed = ( oldpixel >> 16 ) & 255;
        int newRed = ( newpixel >> 16 ) & 255;
        int red = (int) lerp( (double) oldRed, (double) newRed, amt ) & 255;

        int oldGreen = ( oldpixel >> 8 ) & 255;
        int newGreen = ( newpixel >> 8 ) & 255;
        int green = (int) lerp( (double) oldGreen, (double) newGreen, amt ) & 255;

        int oldBlue = oldpixel & 255;
        int newBlue = newpixel & 255;
        int blue = (int) lerp( (double) oldBlue, (double) newBlue, amt ) & 255;

        return ( red << 16 ) | ( green << 8 ) | blue;
    }

    int [] blurImage( BufferedImage image,
                      int [] orig, int [] blur, double sensitivity ) {

        int newPixel = 0;
        double amt = 0;
        int size = REGION_SIZE;

        for ( int i = 0; i < orig.length; i++ ) {
            int w = image.getWidth();
            int [] pix = getSample( image, i % w, i / w, size );
            if ( pix.length == 0 )
                continue;

            amt = getLerpAmount( rmsError( pix ), sensitivity );
            newPixel = lerpPixel( blur[ i ], orig[ i ], amt );
            orig[ i ] = newPixel;
        }

        return orig;
    }

    public BufferedImage filter( BufferedImage image ) {

        ConvolveOp convolver = new ConvolveOp( kernel,
            ConvolveOp.EDGE_NO_OP, null );

        // clone image into target
        BufferedImage target = new BufferedImage( image.getWidth(),
            image.getHeight(), image.getType() );
        Graphics g = target.createGraphics();
        g.drawImage( image, 0, 0, null );
        g.dispose();

        int w = target.getWidth();
        int h = target.getHeight();

        // get source pixels
        int [] pixels = image.getRGB( 0, 0, w, h, null, 0, w );

        // blur the cloned image
        target = convolver.filter( target, image );

        // get the blurred pixels
        int [] blurryPixels = target.getRGB( 0, 0, w, h, null, 0, w );

        // go thru the image and interpolate values
        pixels = blurImage( image, pixels, blurryPixels, SENSITIVITY );

        // replace original pixels with new ones
        image.setRGB( 0, 0, w, h, pixels, 0, w );
        return image;
    }
}

Despite all the intensive image analysis, the routine is fairly fast: On my machine, it takes about one second to process a 640x480 image. That's slower than Photoshop by a factor of five, or more, but still not bad (given that it's "only Java").

Ideas for further development:
  • Substitute a directional blur for the non-directional blur.
  • Substitute a Sobel kernel for the blur kernel.
  • Try other sorts of kernels as well.

Sunday, August 01, 2010

An image histogram in 30 lines of code



The source image ("Lena") at left.


Its pixel-distribution histogram.

According to Wikipedia, "An image histogram is a type of histogram which acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific image a viewer will be able to judge the entire tonal distribution at a glance."

It occurred to me that it shouldn't be that hard to get Google Charts to produce an image histogram, with just a few lines of code. And that turns out to be true. Around 30 lines of server-side JavaScript will do the trick.

If you have JDK 6, run the command "jrunscript" in the console (or find jrunscript.exe in your JDK's /bin folder and run it). Then you can cut and paste the following lines into the console and execute them in real time. (Alternatively, download js.jar from the Mozilla Rhino project, and run "java -cp js.jar org.mozilla.javascript.tools.shell.Main" in the console.)

The first order of business is to open and display an image in a JFrame. The following 9 lines of JavaScript will accomplish this:

imageURL = "http://wcours.gel.ulaval.ca/2009/a/GIF4101/default/8fichiers/lena.png";
IO = Packages.javax.imageio.ImageIO;
image = IO.read( new java.net.URL(imageURL) );
frame = new Packages.javax.swing.JFrame();
frame.setBounds(50,80,image.getWidth( )+10,
    image.getHeight( )+10);
frame.setVisible(true);
pane = frame.getContentPane();
graphics = pane.getGraphics();
graphics.drawImage( image,0,0,null );

The next order of business is to set up a histogram table, loop over all pixel values in the image, tally the pixel counts, and form the data into a URL that Google Charts can use:

function getMaxValue( array ) {
    for ( var i = 0, max = 0; i < array.length; i++ )
        max = array[ i ] > max ? array[ i ] : max;
    return max;
}

// get pixels
width = image.getWidth();
height = image.getHeight();
pixels = image.getRGB( 0,0, width, height, null, 0, width );

// initialize the histogram table
table = (new Array(257)).join('0').split('');

// populate the table
for ( var i = 0; i < pixels.length; i++ )
    table[ ( pixels[ i ] >> 8 ) & 255 ]++;

maxValue = getMaxValue( table );

data = new Array();

for ( var i = 0; i < table.length; i++ )
    data.push( Math.floor( 100 * table[ i ] / maxValue ) );

data = data.join(",");

url = "http://chart.apis.google.com/chart?chxt=y&chbh=a,0,0&chs=512x490&cht=bvg&chco=029040&chtt=histogram&chd=t:"

// call Google Charts
image = IO.read( new java.net.URL( url + data ) );

// draw the resulting image
graphics.drawImage( image,0,0,null );

Note that we actually tally only the green pixel values. (But these are the most representative of tonal values in an RGB image, generally.) Table values are normalized against maxValue, then multiplied by 100 to result in a number in the range 0..100. Google obligingly plots the data exactly as shown in the above graphic.

And that's about all there is to say, except: Why can't all graphics operations be this easy? :)

Saturday, July 24, 2010

Compiled languages are too complex

In a talk Thursday at the O'Reilly Open Source Conference, Google distinguished engineer Rob Pike blasted C++ and Java for being overly verbose and too complex.

"I think these languages are too hard to use, too subtle, too intricate," Pike averred. "They're far too verbose and their subtlety, intricacy and verbosity seem to be increasing over time. They're oversold, and used far too broadly."

I tend to agree. Where else but in a language like C would you ever come up with something like:

(*((*(srcPixMap))->pmTable))->ctSeed =
    (*((*((*aGDevice)->gdPMap))->pmTable))->ctSeed;

This monstrous line of code is one I used very often in my days of graphics programming on the Mac (circa 1996). On the Mac, the all-important CopyBits() routine always examines the ctSeed field of the source and destination color tables to see if they differ. If the two seed values are not the same, QuickDraw will waste time translating color table info, which you don't want (if you're interested in performance). Hence, you use this line of code to coerce the ctSeed field of the source and destination color tables to the same value. I wrote about this and other tricks for speeding up graphics on the Mac in a 1999 MacTech article.

Of course, the answer to Pike's Complaint is to use dynamic languages like JavaScript or Ruby instead of C++ or Java. But that's not always possible (as when trying to do high-performance graphics programming).

Still, it's surprising how much you can do in JavaScript these days. At the USENIX annual conference last month, Google engineer Adam de Boor raised an eyebrow or two in the audience when he pointed out that Google's Gmail service (443,000 lines of code) is written entirely in JavaScript.

Pike and others at Google are promoting the Go language as a solution to the compiled-language complexity problem.

Go figure.

Sunday, July 18, 2010

Learning about ESP pages in Sling

Lately I've been doing a fair amount of server-side scripting using ESP (ECMAScript Pages) in Sling. At first blush, such pages tend to look a lot like Java Server Pages, since they usually contain a lot of scriptlet markup, like:

<%
// script code here
%>

and

<%=
// stuff to be evaluated here
%>

So it's tempting to think ESP pages are simply some different flavor of JSP. But they're not. From what I can tell, ESP pages are just server pages that get handed to an EspReader before being served out. The EspReader, in turn, handles the interpretation of scriptlet tags and expression tags (but doesn't compile anything into a servlet). Bottom line, ESP is not JSP, and despite the availability of scriptlet tags, things work quite a bit differently in each case.

Suppose you want to detect, from an ESP page or a JSP page, what kind of browser a given page request came from. In a Sling JSP page you could do:

<%@taglib prefix="sling" uri="http://sling.apache.org/taglibs/sling/1.0" %><%

%><sling:defineObjects/>

<html><body>

<%

java.util.Enumeration c = request.getHeaders("User-Agent");

String s = "";

while ( c.hasMoreElements() )

s += c.nextElement();

%>


<%= s %>

</body></html>

But what do you do in ESP? Remember, <sling:defineObjects/> is not available in ESP.

It turns out that Sling automatically (without the need for any directives) exposes certain globals to the JavaScript Context at runtime, and one of them is a request object. Thus, in ESP you'd simply do:



<%
c = request.getHeaders("User-Agent");
s = "";
while ( c.hasMoreElements() )
    s += c.nextElement();
%>
<%= s %>


Very similar to the JSP version.

So the next question I had was, what are the other globals that are exported into the JavaScript runtime scope by Sling? From what I can determine, the Sling globals available in ESP are:

currentNode
currentSession
log
out
reader
request
resource
response
sling


currentNode is the JCR node underlying the current resource; currentSession is what it sounds like, a reference to the current Session object; log refers to the org.slf4j.Logger; reader, I'm not sure about (is it a reference to the EspReader?); request is a reference to the SlingHttpServletRequest; resource is the current Resource; response is, of course, a reference to the SlingHttpServletResponse; and sling is a SlingScriptHelper. All of these are available all the time, throughout the life of any ESP script in Sling.
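
As a quick sanity check, you can exercise a few of these globals from any ESP script with something like the following sketch (all of the calls shown are standard JCR/Sling/slf4j methods):

<%
log.info( "Rendering " + resource.getPath() );
out.write( "node name: " + currentNode.getName() + ", " );
out.write( "user: " + currentSession.getUserID() );
%>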

For more information, try the Sling Javadocs here or Day's page of resources here (note, in particular, the list of References on the right).