Posts Tagged ‘hypermedia’
Expecting specific results means coupling your client to a server's behavior, something we want to minimize.
When there is a required item
And there is a basket
But didn't desire anything
Then desire this item as first item
When there is a required item
And there is a basket
Then desire this item
When it is a basket
And there are still products to buy
Then start again
When there is a payment
Then release payment information
The above code works and can be seen running in the last video from this post. In this REST-modelled process, several types of evolution can happen on the server without breaking the client:
1. After adding a product to the basket, if the server renders recommendations instead of your basket, the client still achieves its goal.
2. If some products are not found, the client is still able to pay for the ones that were.
3. If the server does not have the products but links to another system in an affiliate program, we as clients will never notice and will still buy the products.
4. If we want to achieve the same goal in other systems that understand the same media type we do, we simply change the entry point.
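The goal-driven clauses above can be sketched in plain Ruby. This is a hypothetical illustration (the `Rule` struct and `next_action` method are invented for the example, not the actual Restfulie DSL): the client picks its next step from whatever relations the server's representation offers, instead of hard-coding a sequence of calls.

```ruby
# Hypothetical sketch: the client chooses its next action from the
# relations the server offers, never from a fixed call sequence.
Rule = Struct.new(:required_rel, :action)

RULES = [
  Rule.new("payment", :release_payment_information),
  Rule.new("basket",  :desire_item)
]

# Returns the action for the first rule whose relation the current
# representation offers, or nil if the client cannot proceed.
def next_action(rules, available_rels)
  rule = rules.find { |r| available_rels.include?(r.required_rel) }
  rule && rule.action
end

next_action(RULES, ["basket", "products"]) # chooses :desire_item
```

Because the decision is driven by the relations present in the response, the server is free to add, reorder or relocate steps without breaking this client.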
Note the benefits of backward and forward compatibility: we are still capable of achieving our goals without even knowing about some changes to the protocol. We can even compare prices without changing one line of code, all using well-known media types such as Atom. RDFa support can also be implemented.
This power of evolution is not achieved in traditional process modelling, where compatibility is often left aside. This is something that REST and hypermedia brought us.
Now let’s imagine the traditional modelling:
In a typical web services + business processing scenario, how would a server notify me to access another system without changing one line of code in my client? It would break compatibility.
If the server now supports a limited quantity of items in your basket, your client will also break, while the adaptable REST client adapts itself and buys what it is capable of.
If our product or company follows those traditional paths and every change to the service implies heavy change costs for clients, that's a hint that your project is too tightly coupled, and a REST client as mentioned might help.
Traditional web services decouple a little more than traditional RPC, but still impose a degree of coupling that costs our companies more than they were expecting to pay.
The entire videocast for the third part of REST from scratch is here:
This code achieves its results using Restfulie 0.8.0, with many thanks to everyone who contributed maturing this release: Caue Guerra, Luis Cipriani, Éverton Ribeiro, George Guimarães, Paulo Ahagon, Fabio Akita, Fernando Meyer and others.
Is your client api code already adapting itself to changes?
(update: changed some of the "Then" clauses in the DSL to make it easier to spot document messages instead of commands)
This 20-minute video shows how to move from a basic REST API to one that makes use of linked resources and adds semantic value to those links.
Did your REST api do that already? Great.
Otherwise, it's time to move ahead and decouple your clients a little further from your server.
REST was a research result that left us with an open question, as its researcher suggested: it beautifully solves a lot of problems, but how do we apply it to the contemporary concerns that enterprises have?
The following video shows an example of moving from a typical non-RESTful architecture to one that adopts REST constraints, creating a buying process on any REST server.
So what is the power behind applied REST?
"REST applied", as I have exemplified, solves our contemporary concerns, filling the gap between Roy's description and applications' usage, opening up a new world of possibilities.
In the same way that REST ideas – although they were not called REST at that time – allowed web crawling to become an amazing client, "REST applied", as described, can change the way our applications communicate with servers.
Why did we miss it? Because Roy's description works with crawling examples, which benefit directly from content type negotiation, e.g. different languages for the same resource, and Google ranking it:
“In fact, the application details are hidden from the server by the generic connector interface, and thus a user agent could equally be an automated robot performing information retrieval for an indexing service, a personal agent looking for data that matches certain criteria, or a maintenance spider busy patrolling the information for broken references or modified content .”
But, "Not surprisingly, this exactly matches the user interface of a hypermedia browser": the client adapts itself to its current representation – limited only by the client's cleverness.
Restfulie adds better HTTP support on top of plain HTTP libraries and provides a REST framework, while Mikyung allows you to create your REST clients. With both of them you are ready to apply REST to enterprise problems.
Mikyung stands for "beauty capital" in Korean, in an attempt to express what a beautiful REST client could look like when following REST constraints.
Not yet REST
How do we achieve REST? Leonard Richardson's model has been widely discussed, and Martin Fowler posted on "Rest in Practice" (a book I recommend reading). But what is left out of REST in Richardson's model, and why?
According to his model, level 3 adds hypermedia support, leveraging a system through the use of linked data – a requirement for a REST architecture. But HATEOAS alone does not imply REST, as Roy stated back in 2008.
Remember how method invocation on distributed objects allowed you to navigate through objects and their states? The following sample exemplifies such a situation:
orders = RemoteSystem.locate().orders();
order = orders.get(0);
receipt = order.payment(payment_information);
But what if the above code were an EJB invocation? If navigating through relations were REST, implementing EJB's protocol over HTTP would also be REST, because linked data is also present in EJB code – although it lacks a uniform interface.
While Richardson's model gets close to REST on the server side, Rest in Practice goes all the way to a REST example, describing the importance of semantics and media types. The rest of this post explains what was left out of this "REST services" model and why, proposing a model that encompasses REST, not just REST over HTTP; the next post, with a video, describes how to create a REST system.
What is missing?
Did the previous code inspect the relations and state transitions and adapt accordingly?
It did not choose a state transition; it contains a fixed set of instructions to be followed, no matter which responses your server gives. If the protocol in use is HTTP and the server returns a "Server too busy" response, a REST client would try again 10 minutes later; but what does the above code do? It fails.
We are missing the step where REST clients adapt themselves to the resource state. Interaction results are not expected in advance, as we are used to in other architectures. REST client behavior was not modelled in Richardson's model because it only considers server-side behavior.
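That difference can be sketched in a few lines of Ruby. This is an invented illustration (the `react_to` method and its symbols are not from any library): a status-aware client maps response codes to behavior instead of assuming success.

```ruby
# Sketch: deciding the next step from the HTTP status code rather than
# assuming every request succeeds.
def react_to(status)
  case status
  when 200..299 then :process_representation
  when 503      then :retry_later            # "Server too busy": wait, retry
  when 404      then :follow_alternative_links
  else               :give_up
  end
end

react_to(503) # => :retry_later
```

A fixed script of calls has no branch for the 503 case at all; a REST client makes that branch the normal case.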
This is the reason why there should be no such thing as "REST web services" or "REST services". In order to benefit from a REST architecture, both client and server should stick to REST constraints.
Richardson’s server + http model
Semantically meaningful relations are understood by the client, and because of that we need a model which describes how to create a REST system, not just a REST server.
An important point to note is that this model is pretty good at showing a REST server's maturity over HTTP, but it limits REST analysis to the server side and to HTTP.
A REST architecture maturity model
For all those reasons, I propose a REST maturity model which is protocol independent and covers both consumer and provider aspects of a REST system:
Trying to achieve REST, the first step is to determine and use a uniform interface: a default set of actions that can be taken on each well-defined resource. For instance, Richardson assumes HTTP and its verbs to define a uniform interface for a REST-over-HTTP architecture.
The second step is the use of linked data to allow a client to navigate through a resource's states and relations in a uniform way. In Richardson's model, this is the use of hypermedia as connectedness.
The third step is to add semantic value to those links. A relation defined as "related" might have significant value for some protocols but less for others; "payment" might make sense for some resources but not for others. The creation and adoption of meaningful media types allows, but does not imply, client code being written in an adaptable way.
The fourth step is to create clients in such a way that decisions are based only on a resource representation's relations, plus its media type understanding.
All of the above steps allow servers to evolve independently of a client’s behavior.
The last step is implied client evolution. Code on demand teaches clients how to behave in specific situations that were not foreseen, e.g. a new media type definition.
Note that no level mentions a specific protocol such as HTTP, because REST is protocol independent.
The following post will describe one example on how to create a REST system using the above maturity model as a guide.
The most frequently asked question about REST in any presentation: why is hypermedia so important to our machine-to-machine software?
Isn't early binding through fixed URIs, while using HTTP verbs, headers and response codes, better than what we had been doing before?
An approach that makes real use of all HTTP verbs, headers and response codes already presents a set of benefits. But there is more than the Accept header, and more than the 404, 400, 200 and 201 response codes: real use means not forgetting important verbs such as PATCH and OPTIONS, and supporting conditional requests. Not implementing features such as automatic 304 handling (for conditional requests) means not using HTTP headers and response codes to their full extent, but merely passing this information along to your system.
But if such an approach already provides so many benefits, why would someone require a machine-to-machine system to use hypermedia? Isn't it good enough to write code without it?
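Automatic 304 handling, for instance, can be sketched like this. A toy cache and a fake server (a lambda) stand in for real HTTP calls; none of these names come from Restfulie.

```ruby
# Toy conditional-request handling: send the cached ETag, and on a
# 304 Not Modified answer reuse the cached representation.
Cache = Struct.new(:etag, :body)

def conditional_get(cache, server)
  status, etag, body = server.call(cache.etag)
  if status == 304
    cache.body                          # representation unchanged
  else
    cache.etag, cache.body = etag, body # refresh the cache
    body
  end
end

# Fake server: answers 304 when the client already holds version "v1".
server = lambda do |etag|
  etag == "v1" ? [304, "v1", nil] : [200, "v1", "<hotels/>"]
end

cache = Cache.new(nil, nil)
conditional_get(cache, server) # first call: 200, body gets cached
conditional_get(cache, server) # second call: 304, cache is reused
```

The point is that the caller never sees the 304: the library resolves it into a usable representation, which is what "using headers and response codes as they can be used" means here.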
The power of hypermedia is related to software evolution, and if you think about how your system works right now (its expected set of resources and allowed verbs), hypermedia content might not help. But as soon as it evolves and creates a new set of resources, building unforeseen relations between them and their states (thus allowed verbs), that early binding becomes a burden to be felt when requiring all your clients to update their code.
Google and web search engines are a powerful system that makes use of the web. They deal with URIs, http headers and result codes.
If Google's bot were statically coded and incapable of handling hypermedia content, it would require an initial – coding-time or hand-uploaded – set of URIs telling it where the pages on the web are, so it could retrieve and parse them. If any of those resources created a new relationship to other ones (and so on), Google's early-bound, static-URI bot would never find out.
Such a bot works with only one system, one specific domain application protocol, one static site. Google would not be able to spider any other website but that original one, making it reasonably useless. Hypermedia is vital to any crawling or discovery related system.
Creating consumer clients (such as Google's bot) with early binding to relations and transitions does not allow system evolution to occur in the same way that late binding does, and some of the most amazing machine-to-machine systems on the web to date are based on the web's dynamic nature, parsing content through hyperlinks and their semantic meaning.
Although we have chosen to show Google and web search engines as examples, any other web systems that communicate with a set of unknown systems (“servers”) can benefit from hypermedia in the same way.
Your servers can only evolve their resources, relations and states without requiring client-rewrite if your code allows service-crawling.
REST systems are based on this premise: crawling your resources and being able to access their well-understood transitions through links.
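The premise can be illustrated with a toy crawler over an in-memory "web" (the hash below is invented for the example): every page is found by following links from a single entry point, never from a hard-coded URI list.

```ruby
# In-memory "web": each URI maps to the URIs it links to.
WEB = {
  "/"         => ["/hotels", "/flights"],
  "/hotels"   => ["/hotels/5"],
  "/flights"  => [],
  "/hotels/5" => ["/hotels"]  # cycles are fine, we track visited URIs
}

# Breadth-first crawl starting from one entry point.
def crawl(web, entry)
  seen  = []
  queue = [entry]
  until queue.empty?
    uri = queue.shift
    next if seen.include?(uri)
    seen << uri
    queue.concat(web.fetch(uri, []))
  end
  seen
end

crawl(WEB, "/") # finds every page reachable from "/"
```

If the server adds a new resource and links to it, this crawler discovers it on the next pass with no code change; a client built on a fixed URI list never would.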
While important systems have noticed the semantic value and power of links to their businesses, most frameworks have not yet helped users accomplish late binding following the principles mentioned above.
Restfulie 0.5.0 is out, and its major new feature is support for Atom feeds with variable media types.
A feed can be easily rendered by invoking the to_atom method:
@hotels = Hotel.all
render :content_type => 'application/atom+xml',
       :text => @hotels.to_atom(:title => 'Hotels', :controller => self)
A collection might contain entries with different media types, and each one will be rendered in its own way. The client code works as any other usual client would; note that content negotiation still takes place:
hotels = Restfulie.at('http://localhost:3000/hotels').get
And using hypermedia to drive our business, deleting an entry will send a DELETE request to that entry’s self URI:
Restfulie.at('http://localhost:3000/hotels').get.each do |h|
  h.delete if h.city == "Sao Paulo"
end
In future releases we expect to support Atom feed generation (and consumption) through Ratom.
Another easy-to-use feature is content type negotiation, which also happens when rendering a single resource:
@hotel = Hotel.find(params[:id])
Users can now extend Restfulie in order to create custom formats (not based on JSON/XML/XHTML-related media types).
Entry points have been extended: they can now be reached even if there is no class representing the received information on the client side:
One can also access the web response:
response = Restfulie.at('http://localhost:3000/cities/5').get.web_response
And play with content negotiation:
On the server side, a generic controller has been created which supports show, delete and create by default.
There is also a new method called render_created that can be used in order to answer a request with a 201 Created response and its resource location.
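In Rack terms, such a response boils down to a 201 status plus the new resource's location. The sketch below is an illustration of that shape, not Restfulie's actual implementation (the `created_response` name is invented):

```ruby
# Sketch of a "201 Created" answer: the status code plus the Location
# header pointing at the resource that was just created.
def created_response(location)
  [201, { "Location" => location }, []]
end

status, headers, = created_response("http://localhost:3000/hotels/5")
```

The client can then follow the Location header to retrieve the freshly created resource, rather than guessing its URI.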
Restfulie got its own website and all Ruby docs have been migrated.
Thanks to all the team and collaborators!
If you look carefully, you might find out next week’s upcoming news.
Following the recent posts on InfoQ related to Restfulie, my work at Caelum Objects involved a presentation at a client, "Beginning a REST initiative" (based on Ian's work), and the question came up: "but how do I control transactions without a custom software stack to help me?"
The answer was: "you do not need to".
Restwiki has an old entry on how to implement transaction support over HTTP using some non-standard HTTP headers.
In practice, most ideas are based on a transaction being a resource named "Transaction": an idea heavily based on HTTP and URIs, but forgetting about HATEOAS – again.
In the human web, how does one buy some products? Every product is added to the shopping basket, which then generates the order. Does the user create a transaction before processing their order?
The human being behind the computer did not create a transaction: the browser is not even aware of that concept, but hyperlinks given by the server guided the client through this "transaction". Where the typical "REST" solution would create a "Transaction" resource and use a non-standard header to support it, a Restfulie one creates a shopping basket:
|             | Typical "REST" approach    | Restfulie               |
| add product | POST /product *            | POST /product           |
| commit      | POST /transaction/commit * | PUT /basket/:id/payment |
| rollback    | DELETE /transaction/ *     | DELETE /basket/:id      |
* with non-standard http header
The standard way of thinking about transactions is to skip HATEOAS and believe that transactions are resources by themselves. Transactions are not resources; they are a tool to implement ACID in, for instance, your databases – not in a web system.
In our example, an order creation maps to internal transactions. In a bank example, a Transfer resource would map to the internal transaction.
By renaming the "transaction" to the real objective of that transaction, one can better map meaningful URIs to resources.
Note that these are only the advantages of valuing the use of URIs over non-standard HTTP headers (manifest hint?): there is no loss of visibility to layers between the client and the server.
But now one might argue that there are too many entry points. Actually, both implementations contain the same number of "entry" points if there is no hypermedia support: 4. Too many entry points should not be called "entry" points (an entry-hell pattern?).
But do we, in the human web, type in URIs as we go through our online "transaction"? Do we type in URIs as we do a two-step flight-and-hotel booking process?
If the entry point POST /basket answers with:
<link rel="products" href="http://caelumobjects.com/basket/5/products" />
<link rel="coupon" href="http://caelumobjects.com/basket/5/coupon" />
<link rel="pay" href="http://caelumobjects.com/basket/5/payment" />
<link rel="cancel" href="http://caelumobjects.com/basket/5" />
Note that our basket – our transaction's meaning – contains hints on how to operate on it and its relations, pretty much as it would in the human web: dynamically generated links that allow the server to guide the client throughout the process, eliminating the need for extra "entry points".
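A client could extract those relations with nothing but an XML parser. The sketch below uses Ruby's bundled REXML on a representation like the one above (inlined here for the example); the follow-up step of actually issuing the request is left out.

```ruby
require 'rexml/document'

# A basket representation like the one above, inlined for the example.
xml = <<-XML
<basket>
  <link rel="products" href="http://caelumobjects.com/basket/5/products" />
  <link rel="pay" href="http://caelumobjects.com/basket/5/payment" />
  <link rel="cancel" href="http://caelumobjects.com/basket/5" />
</basket>
XML

# Build a rel => href map, so the client follows "pay" or "cancel"
# without any hard-coded URI.
LINKS = {}
REXML::Document.new(xml).elements.each("//link") do |l|
  LINKS[l.attributes["rel"]] = l.attributes["href"]
end

LINKS["pay"] # => "http://caelumobjects.com/basket/5/payment"
```

Committing the "transaction" is then just following the "pay" relation, and rolling it back is following "cancel": no transaction protocol, no extra entry points.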
In a hotel and flight booking system, the booking POST result could be represented as:
<link rel="flights" href="http://caelumobjects.com/booking/5/flights" />
<link rel="hotels" href="http://caelumobjects.com/booking/5/hotels" />
<link rel="pay" href="http://caelumobjects.com/booking/5/payment" />
<link rel="cancel" href="http://caelumobjects.com/booking/5" />
Note how the first idea for implementing transactions evolved: from a custom header that interferes with visibility and requires custom-built clients and layers to understand it, with no server guidance at all, to a system where there is no need to customize your client API or layers, and the server guides the user flow through hypermedia, maturing your system.
Transactions should not be called "transactions". The basket and transfer resources are examples of that: they are transactions implemented on the server side that surface as actual resources.
Our basket (and thus transfer) seems to match Roy’s comment at that time:
- “As far as the client is concerned, it is only interacting with one resource at a time even when those interactions overlap asynchronously.”: the basket or the transfer
- “There is no “transaction protocol” aside from whatever agreement mechanism is implemented in the back-end in accordance with the resource semantics (in a separate architecture that we don’t care about here).”: you add products to the list of products for that basket, add some coupons and so on
- “There is no commit protocol other than the presentation of various options to the client at any given point in the application.”: hateoas
- “There is no need for client-side agreement with the transaction protocol because the client is only capable of choosing from the choices provided by the server.”: transaction protocol? no transaction protocol here, just a simple resource handling
Restfulie – like many other REST frameworks – already supports the first step (moving away from the custom header), but goes further: being "hypermedia centric", it allows the developer to implement this without any effort.
Being opinionated and forcing the adoption of hypermedia as a way to guide our clients throughout our processes might be one step ahead into a more web-friendly (REST-friendly, in this case?) world, as Ryan Riley pointed out.
HATEOAS, HTTP and URIs allow you to eliminate the concept of transaction management (and web transaction managers) from your systems as we usually think of them. There are two steps to follow:
1. there are no transactions
2. let the server guide you; do not try to guide it with multiple entry points