
LCODC$SSU: At RESTFest last week I put on an old Mozilla shirt and my Al Gore campaign button and gave a talk from the year 2000: "LCODC$SSU and the coming automated web". I'll link to video when it goes up on Vimeo, and I'll also point to my five-minute talk about ALPS, which not only took five minutes to deliver, it also took five minutes to put together.

But right now, there's some more stuff I want to say about "LCODC$SSU", and some stuff I couldn't say in the talk due to the framing device.

When I first mentioned this talk to Mike Amundsen, he told me about Bret Victor's talk "The Future of Programming", which Victor gave in July as though it were 1973, and which had a similar conceit. Victor is also a much better actor than I am, but I went ahead with my talk because I wanted to do something different with "LCODC$SSU" than happens in "The Future of Programming". I get a strong "You maniacs! You blew it up!" vibe from Victor's talk. And there's some of that at the end of "LCODC$SSU"—I really feel like we've spent thirteen years making five years' worth of progress, as you can see from my frustration at the beginning of "How to Follow Instructions"—but I also wanted to do some new things in my talk.

While writing Appendix C of RESTful Web APIs I came to appreciate the Fielding dissertation as a record of the process used to solve an enormous engineering problem. Comments from RESTFest attendees confirm that seeing it this way helps folks grasp the dissertation's gem: the definition of LCODC$SSU (a.k.a. REST). Thinking about it this way doesn't require a historical-fiction framing device (Appendix C has no such framing device), but it does require that you stop treating the Fielding dissertation as a prescient guide to the 21st century and see it instead as a historical record of the 1990s.

And once you do that, the missing stair we've been jumping over or falling through for thirteen years becomes visible. The Web works because it has four domain requirements that reinforce each other: low entry-barrier, distributed hypermedia, extensibility, and Internet scale. But there's also a fifth implicit requirement: the presence of a slow, expensive human being operating the client and making the final call on every single state transition. In the talk I identified the inverse of this implicit requirement as an explicit requirement: "machine legibility". In RESTful Web APIs we use the term "semantic gap" to describe what happens when you remove the implicit requirement.
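Here's the semantic gap in miniature, as a toy Python sketch (the document format, link relations, and URLs are invented for illustration, not taken from any real profile). A human reading the prompts needs no help deciding what to do next; the automated client can only act on relations it was taught ahead of time:

    # A toy hypermedia document. The "prompt" fields are for humans;
    # the "rel" fields are the only thing a machine can go on.
    doc = {
        "links": [
            {"rel": "payment", "href": "/orders/123/payment", "prompt": "Pay now"},
            {"rel": "cancel", "href": "/orders/123/cancel", "prompt": "Cancel order"},
            {"rel": "gift-wrap", "href": "/orders/123/wrap", "prompt": "Add gift wrap"},
        ]
    }

    UNDERSTOOD = {"payment", "cancel"}  # semantics the client was programmed with

    for link in doc["links"]:
        if link["rel"] in UNDERSTOOD:
            print("can follow:", link["href"])
        else:
            # The semantic gap: a human would read the prompt and decide;
            # the automated client has no idea what this transition means.
            print("no idea what this means:", link["prompt"])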

Making the human unnecessary on a transition-by-transition basis (the goal of "Web APIs" as a field) is a really difficult problem, partly because of the phenomenon I describe in the talk and in RWA Appendix C. Getting rid of the human raises the entry-barrier dramatically. Looking around for a cheap way to lower the entry-barrier again, we decide to get rid of distributed hypermedia. But distributed hypermedia is the only thing that allows Internet scale and extensibility to coexist! Without it, we must lose one or the other. We end up with an increasingly ugly system that can never be changed, or else a fascist dystopia.

And here's the bit I couldn't put in the talk because it would break the framing device. We've seen a decade-long obsession with lowering entry-barrier at any cost, and although the cost has been enormous I can't really say the obsession is misplaced. Low entry-barrier is the main reason why the Web succeeded over all other hypermedia systems. Low entry-barrier drives adoption. You get adoption first and you deal with the other problems (which will be enormous) down the road.

Well, we're down the road. The bills are coming due. If we want this to go more smoothly next time, we need to stop chasing entry-barrier local minima and come up with a better solution. We need to make change easier so we can make progress faster.

The "machine legibility" problem will still be very difficult, and frankly I can't see a way to a complete solution. But there's cause for optimism: every step forward we've taken so far has illuminated the space a little more and made the next step visible.

It's always been this way. That's how hypermedia works. That's why I called my now-infamous 2008 QCon talk "Justice Will Take Us Millions Of Intricate Moves" (after William Stafford), and that's why I take my motto from a Johnny Cash song that's probably not on most people's list of inspirational Johnny Cash songs.

I built it one piece at a time.


Comments:

Posted by Zack at Fri Sep 27 2013 20:29

I have a complaint, an anecdote, and a topic on which I earnestly want your opinion.

With a talk title like "LCODC$SSU" there should have been at least one VMS joke. Not that I have any idea what an appropriate VMS joke would be. But still.

The George Jetson epilogue to "Justice Will Take Us..." hits rather close to home for me right now, because I've been doing a ton of automation engineering for this experiment I'm working on and you know what's surprisingly hard? Getting throwaway virtual machines (such as you need for EC2 spot instances) to start doing something useful when you boot them up. The George Jetson part of the task, I am still doing by hand.

Finally, serious question about REST, or rather HATEOAS: One of the several related security problems I'm looking at for my thesis is: web applications expose state transitions to the network, and in some cases that means an eavesdropper can learn things they shouldn't. For instance, a tax-preparation website asks you a bunch of questions, many of which depend on your income and personal circumstances, so someone can run through the state machine over and over again, observing the sizes of the HTTP responses for each possible choice, and then snoop on someone's network connection and figure out their AGI and suchlike. (For more detail see this paper.) This is obviously less of an issue if more of the application logic is transferred to the client all at once, and that doesn't have to mean there is no API ... but it does make it a lot less obvious how to structure the API. So I'm wondering if you have any thoughts on how one might get the "machine legibility" benefits of HATEOAS without also requiring each and every state transition to correspond to a HTTP query and response.

Posted by Leonard at Sun Sep 29 2013 15:18

I skimmed the paper and I don't have much to add, but I do have something.

I noticed that the problem gets worse as the state transitions get smaller, with HTTP requests devoted to single typed characters and answers to individual questions. At the same time, as the transitions get smaller it becomes easier to hide which transition is happening, because a set of small numbers can only spread out over a small range: the size differences that distinguish one transition from another shrink along with the transitions themselves.
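To make that concrete, here's a toy Python calculation; all the byte counts are invented:

    # Invented response sizes for page-sized transitions vs.
    # keystroke-sized transitions.
    from statistics import pstdev

    page_sizes = [18400, 22150, 9800, 31000]  # whole-page transitions
    keystroke_sizes = [112, 118, 109, 121]    # single-character transitions

    print("page-level spread:     ", pstdev(page_sizes))
    print("keystroke-level spread:", pstdev(keystroke_sizes))

    # Padding every small response up to the largest one erases the
    # size signal entirely, at a cost of only a few bytes per response.
    padded = [max(keystroke_sizes)] * len(keystroke_sizes)
    print("after padding:         ", pstdev(padded))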

HTTP 2.0 might open up some extra camouflage possibilities. Multiple streams can be open at once, so it should be more difficult for an eavesdropper to be certain how big an individual response was. But most clients (automated or human-driven) perform state transitions in series, not in parallel, so I don't think that will be too useful on its own.
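For what it's worth, here's a toy model of the multiplexed case (my own sketch, with invented frame and response sizes, not anything from the HTTP 2.0 drafts). Two responses are split into frames and interleaved on one connection, so someone watching sizes sees only the merged stream:

    import random

    def frames(total, frame=1024):
        # Split a response of `total` bytes into frame-sized chunks.
        full, rem = divmod(total, frame)
        return [frame] * full + ([rem] if rem else [])

    a = frames(9800)   # response to transition A
    b = frames(31000)  # response to transition B
    wire = []
    while a or b:
        # Interleave frames from whichever responses still have data.
        src = random.choice([s for s in (a, b) if s])
        wire.append(src.pop(0))
    print("observer sees", sum(wire), "bytes in", len(wire), "frames")
    # Only the combined total is directly visible; recovering the
    # per-response split means guessing the interleaving.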

I would say that by definition every HTTP request and response corresponds to a state transition. I would also say this problem is orthogonal to machine legibility; it looks like perfectly ordinary human-readable websites have it too.


