"Has REST Been Fortunate in Its Enemies?" I really hope the whole "enemies" thing goes away and that the period 2000-2005 is seen in retrospect as a big misunderstanding. Just in the period since RWS was published there have been really interesting developments (OAuth, PATCH) that have advanced the state of the HTTP art, and we lost five years of that kind of thing going down the wrong path and then having a huge argument about which path was right. That said, when your enemies turn out to be big corporations with lots of money (as always seems to happen to me), I would not describe the situation as fortunate.
Schema-driven mapping: probably hopeless if you want a really detailed mapping to plug-and-chug into your strongly-typed language. Even if possible, very boring. Not too difficult if you just want an index that points out the interesting parts of a document.
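To make the "index" idea concrete, here's a minimal Python sketch against a couple of Atom elements. The index just knows where the interesting parts of the document live; a human still decided which parts were interesting:

    # A map from names to paths: an index, not a full typed binding.
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"
    INDEX = {
        "title":   ".//" + ATOM + "title",
        "updated": ".//" + ATOM + "updated",
    }

    def interesting_parts(document):
        """Pull out just the parts the index points at."""
        root = ET.fromstring(document)
        return {name: root.findtext(path) for name, path in INDEX.items()}

    feed = ('<feed xmlns="http://www.w3.org/2005/Atom">'
            '<title>Crummy</title><updated>2008-08-19</updated></feed>')
    print(interesting_parts(feed))
    # {'title': 'Crummy', 'updated': '2008-08-19'}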
Contracts: My personal crackpot theory that I can't get anyone to believe is that hypermedia is contracts. A link or form is a promise that if you make a certain HTTP request, you'll get a certain result. HTML is a primitive contract language. The AtomPub service document format and WADL are more advanced contract languages. That said, I think contracts are less useful than often supposed, because the ELIZA effect artificially narrows the distance between the syntax of the contract document and its semantics. An AtomPub service document works like magic because a human programmer did some work beforehand understanding the AtomPub standard and programming in the semantics.
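To make the crackpot theory concrete, here's a Python sketch that reads an HTML form and recovers the request it promises. The form markup is made up and error handling is omitted:

    # A form is a contract: "send this request and something will happen."
    from html.parser import HTMLParser

    class FormContract(HTMLParser):
        """Collect the method, action, and fields a form promises."""
        def __init__(self):
            super().__init__()
            self.method, self.action, self.fields = "GET", "", []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "form":
                self.method = attrs.get("method", "GET").upper()
                self.action = attrs.get("action", "")
            elif tag == "input" and "name" in attrs:
                self.fields.append(attrs["name"])

    contract = FormContract()
    contract.feed('<form method="post" action="/bookmarks">'
                  '<input name="url"></form>')
    print(contract.method, contract.action, contract.fields)
    # POST /bookmarks ['url']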
Registries: We have registries on the human web; we should be able to have them for web services. How much of their imagined benefit is due to the ELIZA effect? How much work will a human need to put in to find the one of 10000 AtomPub implementations they want to post to? I don't know. It depends on the conventions and standards we come up with.
Payload wrappers: HTTP is a sufficient payload wrapper for my needs, and if I ran into a problem with it I'd extend it (a la OAuth), not that I feel competent to do that. I can kind of see how you'd consider Atom a payload wrapper, but I'm just a simple caveman and I prefer to think of it as a representation format.
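In practice "extending HTTP" mostly means headers. Here's a sketch; the X-Message-Id header is made up, standing in for whatever an envelope format would have carried:

    # Metadata rides in HTTP headers; the body stays a plain representation.
    import urllib.request

    request = urllib.request.Request(
        "https://example.com/bookmarks",
        data=b'{"url": "http://www.crummy.com/"}',
        headers={
            "Content-Type": "application/json",
            "X-Message-Id": "f47ac10b",  # made-up extension header
        },
        method="POST",
    )
    # urllib.request.urlopen(request) would actually send it.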
Message-level signing and security: yes! to the former. OAuth works for me. I don't know if I'd ever use message-level security. In general, experimentation is good when it happens on top of HTTP where everyone can use the new standards.
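For flavor, here's the core of OAuth-style signing in a few lines of Python. The real spec builds the base string from the normalized request and percent-encodes everything, so treat this as a sketch, not a compliant implementation:

    # Sign a string derived from the request; send the result with the request.
    import base64, hashlib, hmac

    def sign(method, uri, secret):
        base_string = "&".join([method.upper(), uri])
        digest = hmac.new(secret.encode(), base_string.encode(),
                          hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    print(sign("GET", "https://example.com/bookmarks", "my-secret"))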
"Is Getting HTTP Right Good Enough?" No, not in the way we mean "HTTP" when we have this never-ending argument about verbs. I don't even think it's the first step. The first step is to expose resources: to give a distinct URI to every object you want to publish. That's the first of the RESTful rehabilitation suggestions on page 303 of RWS. That's the first three steps of the ROA design procedure on page 148. The first document to understand is not the Fielding thesis, not RFC 2616, not even RWS, it's Architecure of the World Wide Web, Volume One. That document is mostly about URIs. HTTP shows up as a URI scheme and as a popular protocol for dereferencing URIs.
This is why Flickr and del.icio.us put up terribly-designed web services and people love them and use them all the time: they stick URIs on everything. People love URIs! The problem with those services is that they don't know when to stop. So the first step is to get on the web, to not ignore the intuitively obvious usefulness of URIs. It's terrible that these web services self-identify as "RESTful", but at least they've taken the first step.
The second step is to know when to stop handing out URIs. Picking a vocabulary lets you move some of the information out of the URI and into the HTTP method. It lets you expose objects that have methods, instead of a big C library of del.icio.us or Flickr-style functions. This is the step where you master HTTP. This is where the arguments are happening right now. These arguments are not based on intuitively obvious things. They're arguments about scalability and fiasco avoidance and client and network capabilities. They're also arguments about whether using all of HTTP would improve the Web.
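A sketch of the difference, with hypothetical URLs:

    # Function-style: a new URI for every operation, tunneled over GET.
    # GET https://example.com/v1/posts/delete?url=...

    # Resource-style: one URI per object; the operation moves into the method.
    import urllib.request
    request = urllib.request.Request(
        "https://example.com/bookmarks/3", method="DELETE")
    # urllib.request.urlopen(request) would actually delete it.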
The third step is the mastery of hypermedia. (Note that there's one step for each of the Web technologies: URI, HTTP, HTML.) This is about making it possible to program a computer to do the mapping of options to actions that happens in our brains when we surf the web. The advantage here is the same as in the other steps: loose coupling and flexibility. This is the part that confuses everybody, because we don't usually write programs like this unless we're scripting or screen-scraping. And the ELIZA effect is in play, because we've moved past HTTP requests and responses to what happens inside us when we look at response N and decide to make request N+1.
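Here's a tiny Python sketch of the kind of client I mean: it decides on request N+1 by reading the hypermedia in response N, instead of hardcoding a URI. The "next" link relation is the standard one for paged Atom feeds; the feed itself is schematic:

    # Follow the options the representation offers, not a baked-in URI.
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    def next_page_uri(feed_document):
        """Find the option the server offered for 'get more entries'."""
        for link in ET.fromstring(feed_document).iter(ATOM + "link"):
            if link.get("rel") == "next":
                return link.get("href")
        return None  # no such option offered; the client adapts

    feed = ('<feed xmlns="http://www.w3.org/2005/Atom">'
            '<link rel="next" href="/bookmarks?page=2"/></feed>')
    print(next_page_uri(feed))  # /bookmarks?page=2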
To the extent that there is a "religious" aspect to RESTful thinking, I think it's on this level, in the intuition that there's something in the high-level way we use the Web that we can use when programming a client. Maybe it will turn out to be a bust like AI in the 80s. I've had some encouraging results on my current project. I think it's likely that we're only now figuring this out because we spent five years arguing over whether to use URIs and then another three arguing over whether to use GET/POST or GET/POST/PUT/DELETE.
(2) Tue Aug 19 2008 11:52 Request Weblog #Frog.Frog:
OK, now on to the specific weblog entry on which Rachel asked me for my thoughts. These are mostly based on the work I've done designing and building services since RWS was published.
- Comments:
Posted by Riana at Tue Aug 19 2008 12:30
Wait, every time a door says "EXIT" and I open it and it's just another hallway, do I have a breach of contract claim against the building owner? Interesting idea.
Posted by Leonard at Thu Aug 21 2008 08:07
I'd say that the contract is fulfilled so long as recursively following "EXIT" signs brings you monotonically closer to the exit.