Information is Energy, HTTP is Better, War is Peace, Freedom is Slavery

The whole thing started in my final year of graduate work, before I realized I hated computer science. At that point I still had not decided whether to follow the software engineering or the artificial intelligence track, so I was taking classes in both. One of the requirements was that I attend seminar classes; unfortunately, the only seminar being offered that year was an in-depth examination of peer-to-peer applications.

First off, I am not a fan of this popular distinction between peer-to-peer and other network applications. As far as I’m concerned, SMTP is the ultimate example of a peer-to-peer application. It is just that, like in Orwell’s Animal Farm, all peers are equal but some peers are more equal than others.

The second problem was the professor, a self-acknowledged expert in his field and a man for whom I held an intense dislike. He was extremely focused on text-based protocols — HTTP, XML, SOAP — to the exclusion of all others. God forbid you even mention XDR or any other binary protocol. This obsession stemmed from his former graduate student, who had been instrumental in refining the HTTP/1.1 protocol. Vicarious fame, you understand.

His startup had recently abandoned its workflow management product (and with good reason) in favor of the burgeoning peer-to-peer market popularized by the trials and tribulations of the late, great Napster. The company’s product used a stripped-down version of Apache as an event listener on each platform, a novel approach to which I have no objection for enterprise use. He needed to justify the startup’s continued existence, however, and how better to do so than to tap into the collective mind of Irvine’s best?

The class was, as expected, dry and dull. Lots of PowerPoint, some discussion, but the emphasis was on technology, not policy. (Which is a shame, considering that UCI was the home of CORPS, a discipline relating computer science to social policy. If any computer technology has had a significant social impact this decade, it is peer-to-peer.)

The class as a whole finally agreed on a set of topics, and my partner and I selected coordination of fire crews as our problem. We came up with a rather creative approach using undirected unreliable broadcasting, as discussed in our original paper. (Available upon request — I don’t want to provide a link for search engines to scan. The less I legitimize this mess the better, for reasons soon to be made clear.)

The approach was to have the fire crews transmit information to one another, rather than to a central coordinating facility. Each crew would carry a lightweight PDA with enough computing power to coordinate among the crews: each crew was directed to the nearest unattended fire, so no fire would be neglected.

The key to understanding this approach is that each PDA had a model of the world that was, by the nature of the network, incomplete and out-of-date; but for the purpose of directing work effort, it was good enough.

Reread that last sentence carefully.
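To make it concrete, here is a minimal sketch of the rule each PDA followed. This is a reconstruction for illustration, not code from the paper; the class name, the staleness window, and the data layout are all invented. Fold in whatever broadcasts happen to arrive, then head for the nearest fire that no reasonably fresh report claims is being fought:

    import math
    import time

    STALE_AFTER = 60.0  # seconds to trust another crew's last report (invented value)

    class CrewModel:
        """One PDA's local, possibly stale picture of the world."""

        def __init__(self, my_id, my_pos):
            self.my_id = my_id
            self.my_pos = my_pos   # (x, y) of this crew
            self.fires = {}        # fire_id -> (x, y)
            self.reports = {}      # crew_id -> (time heard, fire_id that crew claimed)

        def hear_broadcast(self, crew_id, claimed_fire_id, fires_seen):
            # Broadcasts arrive unreliably and out of order; just overwrite what we knew.
            self.reports[crew_id] = (time.time(), claimed_fire_id)
            self.fires.update(fires_seen)

        def pick_fire(self):
            # A fire counts as attended only if a fresh report from another crew claims it.
            now = time.time()
            attended = {fire for crew, (heard, fire) in self.reports.items()
                        if crew != self.my_id and now - heard < STALE_AFTER}
            unattended = [(fid, pos) for fid, pos in self.fires.items()
                          if fid not in attended]
            if not unattended:
                return None
            return min(unattended, key=lambda f: math.dist(self.my_pos, f[1]))[0]

Lose a broadcast and the worst that happens is two crews briefly converge on the same fire; the next broadcast that does get through sorts it out.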

One of the problems I’ve noted with computer scientists is that they really, really abhor imperfect solutions, even if they work — especially if they work. This led to the great AI debacle of the last three decades, in which imprecise but usable Bayesian solutions were discarded in favor of absolute but computationally inefficient nonmonotonic deductive systems. (We’re getting better.)

The idea of a functional, decentralized coordinating system that ignored the unreliability of unidirectional broadcasting was anathema to the hack programmers in our class, many of whom wouldn’t recognize an IP packet if it bit them on the ass. It was especially odious to the professor, because of his investment — both professional and monetary — in the aforementioned peer-to-peer company. It was also problematic for my partner, not for any technical reason but for the very political reason that said professor was his thesis advisor.

To quell these concerns, I wrote an appendix to the paper with an outline of a mathematical proof that messages would be delivered with a probability that depended entirely on the strength of the radio links between crews. I had Professor Smyth (one of the few professors there whom I respected) double-check my analysis and confirm my conclusions.
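The flavor of the argument, reconstructed here from memory rather than lifted from the appendix: if the radio link from crew i to crew j delivers any single broadcast with probability p_ij, and every crew rebroadcasts its state each cycle, then the chance that j has still heard nothing from i after n cycles is

    P_miss(n) = (1 - p_ij)^n

which decays geometrically. The delivery probability therefore depends only on link quality and on how many cycles you are willing to wait, which is all the coordination scheme needed.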

So, we turned in the paper, presented a software simulation in class, and received a good grade. End of story.

I wish.

It turned out that our project was the only decent one in the class, according to the professor. (More likely it was the only project presented by a student who could be easily intimidated.) He wanted us to rewrite and expand the paper so that it could be presented at an upcoming software engineering conference.

My partner tried to convince me to write the paper, but I refused, both because of my extreme dislike for the professor (which had only grown over the course of the seminar) and because of my excessive courseload that quarter. He decided to go it alone, with the promise that my name would remain as one of the original authors of the paper.

Fine and dandy. Fast-forward two months, and I am under a heavy courseload with multiple presentations looming. My partner then asks if I would mind having my name moved to the third author’s position on the paper, underneath the professor’s. An odd request, since the final position is usually reserved for the advisor, but it did not bother me and I gave my permission. (My human interfaces professor later said I was a fool for allowing that to happen, but greater indignities were yet to come.)

Finals week arrives, and while I have no tests I have three term papers due. My partner asks for one final favor: would I review the paper before it is submitted for critique? I agree.

The new paper is, in my eyes, an unparalleled disaster. Gone are my elegant arguments for simplicity and robustness; missing are my detailed analyses and proofs. The paper has been reduced to a twelve-page advertisement for the HTTP-based middleware package produced by the professor’s startup.

HTTP. On a PDA. In an environment where range and battery life are the most important factors. The whole idea of “bits per joule” (or, more generally, “information versus energy expenditure”) never crossed their minds; there was no analysis of the system’s power requirements, nor any consideration given to the difficulty of setting up a connection over a radio link. I later tried to explain this to Dan, who kept spouting premade solutions like BGP, dynamic routing tables, and mesh/cell models.

“Dan,” I said, “this algo doesn’t care whether every message gets through, only that the occasional message gets through.”

“Then what you basically need is UDP for radio.”

Dan got it.
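And “UDP for radio” really is about that simple. A sketch of the sending side, assuming an 8-byte status payload and a broadcast-capable link; the port number and field layout here are invented for illustration:

    import socket
    import struct

    STATUS_PORT = 4210  # arbitrary port, chosen for illustration

    def broadcast_status(crew_id, fire_id, seq, status_flags):
        """Fire one 8-byte datagram into the void; nobody waits for an answer."""
        payload = struct.pack("!HHHBB", crew_id, fire_id, seq, status_flags, 0)  # 2+2+2+1+1 = 8 bytes
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(payload, ("255.255.255.255", STATUS_PORT))

No handshake, no connection state, no retransmission, and no radio kept awake waiting for an acknowledgment; if a datagram is lost, the next cycle’s broadcast carries fresher data anyway. That is the point Dan grasped immediately.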

The professor didn’t, however, and he and his servant went off to the conference and received the accolades of other software engineers who were too busy kowtowing to see that the emperor was buck naked. Or maybe they had the courage to laugh him off the podium, who knows. I certainly don’t; I washed my hands of the entire affair, demanded my name be removed from the paper, and decided to abandon the software engineering track in favor of AI. In hindsight it was a bad move, but I’d rather fail at something challenging than succeed at something insignificant.

I used to blame this on the need to find the perfect problem for a dubious commercial solution, but that is unkind to all involved. I now believe that they switched to a connection-oriented model to ensure that all of the databases were kept synchronized, even though the stability of the algorithm did not require it. Perhaps they were hoping for more accurate local models, and expecting a more efficient allocation of resources as a result.

But at what cost? The original model had application-level payloads of eight bytes. Eight bytes will barely get you an HTTP/0.9 request, and a static one at that. It would be like trying to send a quick postcard home but being required to type it out and pack it in a large cardboard box before shipping. HTTP is fine for fetching documents, and might even work for REST applications, but for lightweight, power-sensitive work it is certainly overkill.
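To put numbers on the cardboard box: the smallest HTTP/0.9 request, GET plus a path and a line ending, already costs almost as much as our entire payload, and a plausible HTTP/1.1 request wrapping the same eight bytes runs an order of magnitude larger before TCP and IP framing are even counted. The header set below is illustrative, not taken from their middleware:

    payload = bytes(8)  # the original eight-byte status report (contents immaterial here)

    http_request = (
        b"POST /status HTTP/1.1\r\n"
        b"Host: coordinator.example\r\n"
        b"Content-Type: application/octet-stream\r\n"
        b"Content-Length: 8\r\n"
        b"\r\n"
    ) + payload

    print(len(payload), len(http_request))  # 8 versus 119 application bytes

And that is before the TCP handshake that has to happen over the radio just to earn the right to send it.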

There is certainly a misconception, a general phobia, about unreliable communication in the peer-to-peer middleware world, with the result that all messages must eventually get through even if they are out-of-date and worthless by the time they arrive. Is it any wonder that most network games — sensitive to performance — use two channels, one reliable for critical information and one unreliable for everything else? Alas, I fear that if the middleware people were to discover this customer need, they would simply add a new class of reliable messages for nugatory events, not realizing that the extra effort is worse than pointless.
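The two-channel split the game programmers use is not exotic, either; it is a few lines of socket code. A sketch, with a made-up helper, host, and ports:

    import socket

    def open_game_channels(host="game.example", tcp_port=7000, udp_port=7001):
        """Two channels: TCP for what must not be lost, UDP for what the next frame supersedes."""
        reliable = socket.create_connection((host, tcp_port))       # chat, inventory, score events
        unreliable = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        unreliable.connect((host, udp_port))                        # position and state updates
        return reliable, unreliable

Reliability is something you buy per message, not a blanket you throw over the whole protocol.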

As Voltaire said, “Better is the Enemy of Good.”