Guilhermesilveira's Blog

as random as it gets

REST is crawling: early binding and the web without hypermedia

with 6 comments

The most frequently asked question about REST in any presentation is: why is hypermedia so important for machine-to-machine software?

Isn't early binding through fixed URIs, together with HTTP verbs, headers and response codes, already better than what we had been doing before?

An approach that makes real use of all HTTP verbs, headers and response codes already brings a set of benefits. But there is more than the Accept header, and more than the 404, 400, 200 and 201 response codes: real use means not forgetting important verbs such as PATCH and OPTIONS, and supporting conditional requests. Not implementing features such as automatic 304 handling (for conditional requests) means you are not using HTTP headers and response codes as they were meant to be used, but merely passing that information along to your system.
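The conditional-request handling mentioned above can be sketched in a few lines. This is a minimal, illustrative client-side cache using ETag and If-None-Match; the `fake_transport` function stands in for a real HTTP round trip, and all names here are assumptions for the sketch, not any particular library's API:

```python
# In-memory cache mapping a URI to the (etag, body) pair we last received.
cache = {}

def conditional_get(url, transport):
    """Send If-None-Match when we hold a cached ETag; on 304, reuse the cache."""
    headers = {}
    if url in cache:
        headers["If-None-Match"] = cache[url][0]
    status, etag, body = transport(url, headers)
    if status == 304:                # representation unchanged: serve from cache
        return cache[url][1]
    if status == 200 and etag:       # fresh representation: remember it
        cache[url] = (etag, body)
    return body

# Stand-in for the server side, for illustration only: it answers 304
# when the client already holds the current ETag.
def fake_transport(url, headers):
    etag, body = '"v1"', "<order>42</order>"
    if headers.get("If-None-Match") == etag:
        return 304, etag, None
    return 200, etag, body

first = conditional_get("/orders/42", fake_transport)   # full 200 response
second = conditional_get("/orders/42", fake_transport)  # 304, cached body reused
```

The point is that the 304 is handled automatically inside the client, rather than leaking the status code up for every caller to interpret.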

But if such an approach already provides so many benefits, why would anyone require a machine-to-machine system to use hypermedia? Isn't it good enough to write code without it?

The power of hypermedia is related to software evolution. If you think only about how your system works right now (its expected set of resources and allowed verbs), hypermedia content might not seem to help. But as soon as the system evolves, creating new resources and building unforeseen relations between them and their states (and thus allowed verbs), that early binding becomes a burden, felt every time you require all your clients to update their code.

Google and other web search engines are powerful systems that make real use of the web. They deal with URIs, HTTP headers and response codes.

If Google's bot were statically coded and incapable of handling hypermedia content, it would require an initial set of URIs — hard-coded or hand-uploaded — telling it where the pages on the web are, so that it could retrieve and parse them. If any of those resources created a new relationship to other ones (and so on), Google's early-bound, static-URI bot would never find out.

Such a bot would only work with one system, one specific domain application protocol, one static site. Google would not be able to spider any website but that original one, making it reasonably useless. Hypermedia is vital to any crawling- or discovery-related system.
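The contrast between the two bots can be made concrete. In this sketch the "site" is an in-memory stand-in for HTTP responses (the URIs and link structure are invented for illustration): the early-bound bot only ever knows its hand-coded URI list, while the hypermedia bot discovers everything reachable from a single entry point:

```python
# Stand-in for the web: each URI maps to the links found in its representation.
site = {
    "/":            ["/products", "/about"],
    "/products":    ["/products/1"],   # relation added later by the server
    "/products/1":  [],
    "/about":       [],
}

def crawl(start):
    """Follow hypermedia links breadth-first from one entry point."""
    seen, frontier = set(), [start]
    while frontier:
        uri = frontier.pop()
        if uri in seen:
            continue
        seen.add(uri)
        frontier.extend(site.get(uri, []))
    return seen

static_bot = {"/", "/about"}      # early binding: a fixed, hand-coded URI set
hypermedia_bot = crawl("/")       # late binding: finds /products/1 as well
```

When the server grows a new resource and links to it, the crawling bot picks it up on the next pass with no code change; the static bot needs a new release.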

Creating consumer clients (such as Google's bot) with early binding to relations and transitions does not allow system evolution to occur the way late binding does, and some of the most amazing machine-to-machine systems on the web to date are based on its dynamic nature, parsing content through hyperlinks and their semantic meaning.

Although we have chosen Google and web search engines as examples, any other web system that communicates with a set of unknown systems ("servers") can benefit from hypermedia in the same way.

Your servers can only evolve their resources, relations and states without requiring a client rewrite if your client code allows service crawling.

REST systems are based on this premise: crawling your resources and accessing their well-understood transitions through links.
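A client that follows well-understood transitions binds to the *meaning* of a link relation rather than to a URI. A minimal sketch, assuming an invented representation format (the `links` dictionary, the "payment" relation and the URIs are all illustrative, not any standard media type):

```python
# A representation that advertises its available transitions by name.
order = {
    "state": "unpaid",
    "links": {
        "payment": "/orders/7/payment",
        "cancel":  "/orders/7/cancel",
    },
}

def follow(doc, rel):
    """Resolve a transition by its semantic name, not a hard-coded URI."""
    if rel not in doc["links"]:
        raise LookupError(f"transition {rel!r} not available in this state")
    return doc["links"][rel]

pay_uri = follow(order, "payment")   # the server may move this URI freely
```

Because the client asks for "payment" instead of assembling `/orders/7/payment` itself, the server can rename, relocate or conditionally withhold that transition (for example, once the order is paid) without breaking any deployed client.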

While important systems have noticed the semantic value and power of links to their businesses, most frameworks have not yet helped users accomplish late binding following the principles mentioned above.


Written by guilhermesilveira

February 7, 2010 at 8:00 am

Posted in restful, soa


6 Responses


  1. I think this is the same concept inferred by existing writings about ‘discoverability’ and the hypertext constraint, and an objective of specific projects such as linked data and the semantic web.

    Mike

    February 9, 2010 at 11:27 am

  2. I agree with you, Mike. It seems there is no way to disconnect REST-evolved systems from linked data and the semantic web (thus it's a pity that www2010 will hold similar workshops on the same day)

    guilhermesilveira

    February 9, 2010 at 2:28 pm

  3. […] This post was mentioned on Twitter by caueguerra, Tiago "Pacman&q, Anderson Leite, Guilherme Silveira, Alberto Luiz Souza and others. Alberto Luiz Souza said: RT: @sergioazevedo: rest, crawling and why hypermedia #caelum #rest http://bit.ly/9bsybm (via @guilhermecaelum) […]

  4. […] REST is crawling: early binding and the web without hypermedia […]

  5. […] Running – A look at the initial version of the REST procurement service. (By Jan Algermissen) REST is crawling: early binding and the web without hypermedia – A discussion on early vs. late binding in REST applications. (By Guilherme Silveira) Using […]

  6. […] on clients, we have a server tightly coupled to contracts (schemas and endless verbs) that inhibits the evolution of your server independently of your client, which is what we call […]

