
Request Weblog #Frog.Frog: OK, now on to the specific weblog entry on which Rachel asked me for my thoughts. These are mostly based on the work I've done designing and building services since RWS was published.

"Has REST Been Fortunate in Its Enemies?" I really hope the whole "enemies" thing goes away and that the period 2000-2005 is seen in retrospect as a big misunderstanding. Just in the period since RWS was published there have been really interesting developments (OAuth, PATCH) that have advanced the state of the HTTP art, and we lost five years of that kind of thing going down the wrong path and then having a huge argument about which path was right. That said, when your enemies turn out to be big corporations with lots of money (as always seems to happen to me), I would not describe the situation as fortunate.

Schema-driven mapping: probably hopeless if you want a really detailed mapping to plug-and-chug into your strongly-typed language. Even if possible, very boring. Not too difficult if you just want an index that points out the interesting parts of a document.

Contracts: My personal crackpot theory that I can't get anyone to believe is that hypermedia is contracts. A link or form is a promise that if you make a certain HTTP request, you'll get a certain result. HTML is a primitive contract language. The AtomPub service document format and WADL are more advanced contract languages. That said, I think contracts are less useful than often supposed, because the ELIZA effect artificially narrows the distance between the syntax of the contract document and its semantics. An AtomPub service document works like magic because a human programmer did some work beforehand understanding the AtomPub standard and programming in the semantics.
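To make the "hypermedia is contracts" idea concrete, here's a toy sketch in Python. The `Form` class and its names are hypothetical, not from any standard: the point is that a form is a machine-readable promise about what HTTP request the server will understand, so a client never invents requests on its own.

```python
from dataclasses import dataclass
from urllib.parse import urlencode

@dataclass
class Form:
    """A hypermedia form: a contract promising that a request built
    this way will be understood by the server."""
    method: str
    action: str
    fields: tuple  # the field names the server says it expects

    def build_request(self, **values):
        # Refuse to build a request the contract doesn't cover.
        unknown = set(values) - set(self.fields)
        if unknown:
            raise ValueError(f"fields not in the contract: {unknown}")
        return (self.method, self.action, urlencode(values))

# A server might serve this form; the client only knows what the form says.
search = Form(method="GET", action="http://example.com/search", fields=("q",))
print(search.build_request(q="rest"))
```

A client that only builds requests from forms it was served never hard-codes the server's URI structure, which is exactly the promise a contract is supposed to make.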

Registries: We have registries on the human web; we should be able to have them for web services. How much of their imagined benefit is due to the ELIZA effect? How much work will a human need to put in to find the one of 10000 AtomPub implementations they want to post to? I don't know. It depends on the conventions and standards we come up with.

Payload wrappers: HTTP is a sufficient payload wrapper for my needs, and if I ran into a problem with it I'd extend it (a la OAuth), not that I feel competent to do that. I can kind of see how you'd consider Atom a payload wrapper, but I'm just a simple caveman and I prefer to think of it as a representation format.

Message-level signing and security: yes! to the former. OAuth works for me. I don't know if I'd ever use message-level security. In general, experimentation is good when it happens on top of HTTP where everyone can use the new standards.

"Is Getting HTTP Right Good Enough?" No, not in the way we mean "HTTP" when we have this never-ending argument about verbs. I don't even think it's the first step. The first step is to expose resources: to give a distinct URI to every object you want to publish. That's the first of the RESTful rehabilitation suggestions on page 303 of RWS. That's the first three steps of the ROA design procedure on page 148. The first document to understand is not the Fielding thesis, not RFC 2616, not even RWS, it's Architecture of the World Wide Web, Volume One. That document is mostly about URIs. HTTP shows up as a URI scheme and as a popular protocol for dereferencing URIs.

This is why Flickr and del.icio.us put up terribly-designed web services and people love them and use them all the time: they stick URIs on everything. People love URIs! Their problem with those services is they don't know when to stop. So the first step is to get on the web, to not ignore the intuitively obvious usefulness of URIs. It's terrible that these web services self-identify as "RESTful", but at least they've taken the first step.

The second step is to know when to stop handing out URIs. Picking a vocabulary lets you move some of the information out of the URI and into the HTTP method. It lets you expose objects that have methods, instead of a big C library of del.icio.us or Flickr-style functions. This is the step where you master HTTP. This is where the arguments are happening right now. These arguments are not based on intuitively obvious things. They're arguments about scalability and fiasco avoidance and client and network capabilities. They're also arguments about whether using all of HTTP would improve the Web.
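Here's a hypothetical sketch of the contrast: an operation-style service (one endpoint, a big flat namespace of function names, Flickr-style) versus a resource-style service (many URIs, a small standard vocabulary of methods). The URIs are made up for illustration.

```python
def operation_style(photo_id):
    # Everything goes to one endpoint; the real "verb" hides in a
    # query variable, like a C library of loose functions.
    return ("POST",
            f"/services/rest/?method=flickr.photos.delete&photo_id={photo_id}")

def resource_style(photo_id):
    # The photo is an object with a URI; the HTTP method carries the verb.
    return ("DELETE", f"/photos/{photo_id}")

print(operation_style(7))
print(resource_style(7))
```

In the second style, intermediaries, caches, and generic clients can understand what a request does without a Flickr-specific method catalog.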

The third step is the mastery of hypermedia. (Note that there's one step for each of the Web technologies: URI, HTTP, HTML.) This is about making it possible to program a computer to do the mapping of options to actions that happens in our brains when we surf the web. The advantage here is the same as in the other steps: loose coupling and flexibility. This is the part that confuses everybody, because we don't usually write programs like this unless we're scripting or screen-scraping. And the ELIZA effect is in play, because we've moved past HTTP requests and responses to what happens inside us when we look at response N and decide to make request N+1.

To the extent that there is a "religious" aspect to RESTful thinking, I think it's on this level, in the intuition that there's something in the high-level way we use the Web that we can use when programming a client. Maybe it will turn out to be a bust like AI in the 80s. I've had some encouraging results on my current project. I think it's likely that we're only now figuring this out because we spent five years arguing over whether to use URIs and then another three arguing over whether to use GET/POST or GET/POST/PUT/DELETE.

'What Does "Hypermedia as the Engine of Application State" Mean, Really?': It means that you operate the web service by following links and filling out forms. (For definitions of "link" and "form" that depend on the media type.) It means you can use a generic "web service browser" that doesn't have a bunch of hard-coded knowledge about one particular web service, the way PyAmazon does. When the web service changes, user behavior may need to change with it, but the client itself isn't instantly rendered useless, the way PyAmazon was. I called this "connectedness" in RWS, and Roy Fielding himself slammed me for it, but when Tim Bray, after years of real-world experience, isn't sure what "hypermedia as the engine of application state" means, I think that's a sign it needs to be explained in different words.

Saying that RESTful services "work like the web" is not just saying that things have URIs or you can cache GET requests. Ideas as confusing as hypermedia-as-the-engine-of-application-state become understandable when you think about the way you use the web.

If a website is well-designed, you don't need to mess around with the URL bar to get where you want to be: you fill out forms and follow links. You don't need a custom web browser to use crummy.com. My website is distinguished from all other websites by the HTML documents I serve. If I change my weblog software, it'll serve you different documents, and you may have to change the way you use the website, but your browser doesn't crash.

You know what to do (how to drive the crummy.com "application" into the desired state) because the document I send you lays out your options. And because the document represents the options as hypermedia links and forms, you know how to build an HTTP request that will carry out your desires.
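A minimal sketch of what a "web service browser" client looks like, using only the standard library. It discovers what it can do by parsing links out of the document it was served, instead of hard-coding the service's URI structure the way PyAmazon did. The sample document is invented.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the options a served document offers, keyed by link text."""
    def __init__(self):
        super().__init__()
        self.links = {}
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href:
            self.links[data.strip()] = self._href
            self._href = None

# The server is free to change these URIs at any time; the client
# re-learns them from every response.
doc = '<a href="/entries?page=2">Next page</a> <a href="/entries/new">Post an entry</a>'
parser = LinkExtractor()
parser.feed(doc)
print(parser.links["Next page"])  # /entries?page=2
```

If my weblog software changes and serves different URIs tomorrow, this client keeps working, because its knowledge of the URIs came from the document, not from its own source code.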

That one deserved its own entry. Onwards! (Again, here's the document I'm responding to.)

Is Statelessness Required? The universe will not end in an ontological segfault if you violate statelessness. Instead, you will give up a lot of scalability potential, and you will hide state where your clients can't get at it, which will annoy them. So why do it? Maybe because you're using a framework that makes it really easy to write programs that violate statelessness, and then tries complicated tricks to get scalability back, because that's the way web frameworks have been written for ten years.

We took a pretty hard anti-cookie line in RWS (pages 252-253), but we made it clear that cookies themselves are not the problem. Cookies, like links, are a way of serving application state. They have one big RESTfulness problem but it's moot because nobody follows the cookie standard that slavishly. The real problem is the session IDs that go into those cookies.

A session ID is a key into a big chunk of state kept invisibly on the server. That's what causes the problems. It's just as wrong to serve that session ID as a query variable in a link as it is to serve it in a cookie. You need to turn the hidden state into application state by incorporating it into links and forms, or you need to turn it into resource state by exposing it as part of your web service and letting clients manipulate it directly.
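A hypothetical sketch of the two escapes from a server-side session, with made-up URIs. Option one puts the application state into the links themselves; option two promotes the hidden state to a resource with its own URI.

```python
from urllib.parse import urlencode

def next_page_link(query, page):
    # Option 1: everything the server needs to build the next response
    # travels in the link, so the server keeps no session.
    return "/search?" + urlencode({"q": query, "page": page + 1})

def cart_uri(cart_id):
    # Option 2: the "session" (say, a shopping cart) becomes resource
    # state: an addressable thing clients can GET, modify, and DELETE.
    return f"/carts/{cart_id}"

print(next_page_link("rest", 1))  # /search?q=rest&page=2
print(cart_uri(99))               # /carts/99
```

Either way, the state is out in the open where clients can see and manipulate it, instead of locked behind an opaque session ID.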

Is Being Message-Centric Good Enough? I say yes. I'm young enough that the Web is the first distributed programming environment I ever used. I've never felt like I was missing something that justified switching to a competitor. The more I've learned about its design the more impressed I've been. When something better comes along, I predict it will look more like the Web than it will look like DCOM or CORBA or WS-*, whether it's "better" in a global sense or in some application-limited sense.

Are PUT and DELETE Essential? The universe will not end in an ontological segfault etc. Also, you won't even stop being RESTful, assuming you have proper resource design, as per my first entry in this series. What you'll give up is the ability to make various reliability guarantees. I was going to say you also give up the ability to avoid the lost-update problem, but I guess you could do that with POST, so long as (yes) you were POSTing to the same URI you sent GET to earlier.
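The lost-update machinery is plain RFC 2616: the client sends back the ETag it got from GET, and the server refuses the write if someone else changed the resource in the meantime. Here's a sketch with an in-memory store standing in for real storage; the dict and version scheme are made up.

```python
# Toy resource store: URI -> current ETag and representation.
resources = {"/entries/1": {"etag": '"v1"', "body": "first draft"}}

def conditional_put(uri, if_match, new_body):
    """PUT guarded by If-Match: only write over the version you read."""
    current = resources[uri]
    if if_match != current["etag"]:
        return 412  # Precondition Failed: your copy is stale
    version = int(current["etag"].strip('"v')) + 1
    resources[uri] = {"etag": f'"v{version}"', "body": new_body}
    return 200

print(conditional_put("/entries/1", '"v1"', "second draft"))      # 200
print(conditional_put("/entries/1", '"v1"', "conflicting edit"))  # 412
```

The second writer gets a 412 instead of silently clobbering the first writer's work; nothing about this requires PUT specifically, which is why POST-to-the-same-URI can get you the same guarantee.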

I don't know how good an argument this is anymore, but: designing with PUT and DELETE forces you to think in terms of resources. It's a form of discipline, like eschewing cookies. When you've got "read data" and "whatever!" it's very easy to think in terms of operations, flex your ten-year-old web application muscles, and end up with a mess like the Flickr web service. The vocabulary is negotiable. The underlying idea (that a URI designates an object which responds to a standard vocabulary) is essential. That's the Architecture of the World Wide Web.

Is "Do Like the Web" a Good Argument? Not in and of itself, because parts of the Web are lousy. You need a tool to separate the good from the bad. I follow the admonition of Paul: "Prove all things; hold fast that which is good." Look at what people love about the Web and see if you can bring that same joy to distributed programming. Look at what people complain about and try to eliminate those things, whether you're building web applications or web services. When you find something that improves the distributed programming side of things, bring that knowledge back into the field of web application design.

Where Are The Tools? Traditionally this question has been answered by assertions that the (HTTP-level) tools already exist. As annoying as this answer is, it's technically correct. Without additional abstractions on top of HTTP/URI/*ML, there's nothing to write tools for. As we invent those abstractions (AtomPub, ROA, the ActiveResource control flow, OAuth, WADL, etc.) we write more tools. I'm writing tools now. As we get more practice the higher-level tools will get better, and as they get better they'll consolidate (as will the abstractions beneath them) and there will be fewer.

One thing I'd like to add is that it would be cool to see the existing frameworks for web applications apply some of the principles people have come up with while using the Web as a distributed programming environment. Make it easy to publish resources rather than operations, easy to support conditional GET, easy to write client interactions that respect statelessness. Rails has the right idea here.
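As a sketch of the conditional GET support I'd like frameworks to make easy: compute an ETag from the representation, and answer 304 Not Modified with no body when the client's cached copy is still current. All the names here are hypothetical.

```python
import hashlib

def render(representation, if_none_match=None):
    """Return (status, headers, body), honoring If-None-Match."""
    etag = '"%s"' % hashlib.md5(representation.encode()).hexdigest()
    if if_none_match == etag:
        # Client's cached copy is current: no body on the wire.
        return (304, {"ETag": etag}, "")
    return (200, {"ETag": etag}, representation)

status, headers, body = render("<html>hi</html>")
status2, _, body2 = render("<html>hi</html>", if_none_match=headers["ETag"])
print(status, status2)  # 200 304
```

If the framework computes and checks the ETag for you, every resource gets this bandwidth saving for free instead of it being something each application author has to remember.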

While I wait for the dubbed edition of GameCenter CX to come out on DVD, I've been watching Frankomatic's "Obscure Game Theater", a mile-long series of YouTube videos in which the aforementioned Frankomatic gives retro games his ganbatte best, except with more cursing than you get from Shinya Arino.

While watching one of these videos I realized that A Boy and His Blob, possibly the NES game that combines the coolest concept with the worst execution, takes place in New Jersey. You start off looking at what seems to be the Empire State Building across a body of water, and as you run to the right (i.e. south) you see the Manhattan skyline with the World Trade Center at the far right. It's a very nice graphic, actually, probably digitized from a photo (Wikipedia and the SEC agree that Absolute Entertainment was based in New Jersey). I think the game is supposed to take place in Brooklyn instead of Hoboken, because it's got a subway station, but basic directionality says the game board is to the west of Manhattan.


Unless otherwise noted, all content licensed by Leonard Richardson
under a Creative Commons License.