Minimize coupling with REST processes
While integrating systems, implementing access or processes is typically achieved through an ordered list of steps, where one expects specific results from the server.
Expecting specific results means coupling your client to a server's behavior, something that we want to minimize.
In REST clients, and in business processes modeled following REST practices, we can achieve much looser coupling. Imagine the following system (among others) that buys products at amazon.com:
When there is a required item
And there is a basket
But didn't desire anything
Then desire this item as first item
When there is a required item
And there is a basket
Then desire this item
When it is a basket
And there are still products to buy
Then start again
When there is a payment
Then release payment information
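The rule-based process above can be sketched in plain Ruby (an illustration only, not the actual Restfulie DSL): each server response is inspected for what it *is*, never for which step it should be, and unforeseen representations are simply ignored instead of breaking the client. The `buy` method and the hash-based responses are hypothetical.

```ruby
# Each server response is a hash: a media-type "kind" the client may or may
# not understand. The client reacts to what the representation is, not to a
# fixed step order.
def buy(responses, shopping_list)
  bought = []
  responses.each do |rep|
    case
    when rep["kind"] == "basket" && shopping_list.any?
      bought << shopping_list.shift          # "Then desire this item"
    when rep["kind"] == "payment"
      return { bought: bought, paid: true }  # "Then release payment information"
    end                                      # unknown kinds are ignored, not errors
  end
  { bought: bought, paid: false }
end
```

Note how a server that suddenly renders a recommendations page between two baskets does not break this client; the loop just moves on.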
The above code works and can be seen running in the last video from this post. In this REST-modeled process, several kinds of evolution can happen on the server without breaking the client:
1. After adding a product to the basket, if the server renders recommendations instead of your basket, the client still achieves its goal.
2. If some products are not found, the client is still able to pay for those it did add.
3. If the server does not have the products, but links to another system in an affiliate program, we as clients will never notice it, and buy the products.
4. If we want to achieve the same goal in other systems that understand the same media type that we do, we simply change the entry point.
Note the benefits of backward and forward compatibility: we are still capable of achieving our goals without even knowing that some changes happened to the protocol. We can even compare prices without changing one line of code, everything using well-known media types such as Atom. RDFa support can also be implemented.
This power to evolve is not achieved in traditional process modelling, where compatibility is often left aside. This is something that REST and hypermedia brought us.
Mapping our process in an adaptable way using hypermedia was recently described in an article for the WWW2010 proceedings.
Now let’s imagine the traditional modelling:
- Search
- Add
- Search
- Add
- Pay
In a typical web services + business process scenario, how would a server tell me to access another system without my changing one line of code in my client? It would break compatibility.
If the server now supports only a limited quantity of items in your basket, your client will also break, while the adaptable REST client adapts itself and buys what it can.
If our product or company uses those traditional paths and every change on the service implies heavy change costs for clients, that's a hint that your project is too tightly coupled, and a REST client as mentioned might help.
Traditional web services decouple a little more than traditional RPC, but still impose a degree of coupling that costs our companies more than they were expecting to pay.
The entire videocast for the third part of REST from Scratch is here:
This code achieves its results using Restfulie 0.8.0, with many thanks to everyone who contributed to maturing this release: Caue Guerra, Luis Cipriani, Éverton Ribeiro, George Guimarães, Paulo Ahagon, Fabio Akita, Fernando Meyer and others.
Thanks to all feedback from Mike Amundsen, Jim Webber and Ian Robinson.
Is your client api code already adapting itself to changes?
(update: changed some of the “Then” clauses from the dsl for easier spotting of document messages instead of commands)
The Web and Rest (Rest from Scratch – Theory 1)
REST is not only about HTTP and the web; it is an architectural style derived from the web and other systems.
In this 20-minute video, we will see how the web has been capable of scaling, providing different services and being a more effective system than comparable human distributed systems (e.g. water distribution and electricity).
We move on to describe the basic characteristics of a REST architecture and how it leverages your system.
This is the first video on the Rest from Scratch – Theory. You can watch the other Rest From Scratch – Practice videos online here.
REST from scratch
This is the first post on a series of examples on how to use Rails and Restfulie on the server side and Restfulie and Mikyung on the client side to create REST based architectures.
In this 10-minute video you will learn how to make several representations of one resource available on the server side (XML, JSON, Atom and so on) and how one line of code can access and parse them on the client side.
This video shows how to access basic REST APIs such as Twitter's and CouchDB's, which are based on HTTP verbs and multiple URIs. The following videos will demonstrate how to access more advanced REST APIs.
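The server side of that idea can be roughly sketched in plain Ruby (this is not Rails' actual respond_to machinery): content negotiation boils down to picking a serialization based on the request's Accept header. The `represent` method and its formats are illustrative.

```ruby
require 'json'

# Hypothetical dispatch: the resource data stays the same; only the
# serialization changes with the Accept header.
def represent(resource, accept)
  case accept
  when %r{application/json}
    JSON.generate(resource)
  when %r{application/xml}
    fields = resource.map { |k, v| "  <#{k}>#{v}</#{k}>" }.join("\n")
    "<resource>\n#{fields}\n</resource>"
  else
    resource.inspect # fallback representation
  end
end
```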
If you have any questions, join us at our mailing list.
Buying through REST: applying REST to the Enterprise
REST was a research result that left us with an open question, as its researcher suggested: it beautifully solves a lot of problems, but how do we apply it to the contemporary concerns that enterprises have?
REST Applied
After many talks, I have summed up a model, derived from REST constraints, that allows one to measure how well an entire system (client and server) achieves a REST architecture.
The following video shows an example of how to move from a typical non-RESTful architecture to adopting REST constraints and creating a buying process against any REST server.
So what is the power behind applied REST?
“Rest Applied”, as I have exemplified, solves our contemporary concerns, filling the gap between Roy’s description and applications’ usage, opening up a new world of possibilities.
The same way that REST ideas (although they were not called REST at that time) allowed web crawlers to become amazing clients, “REST applied”, as described, can change the way our applications communicate with servers.
Why did we miss it? Because Roy’s description sticks to crawling examples, which benefit directly from content type negotiation, i.e. different languages, same resource, and Google ranking it:
“In fact, the application details are hidden from the server by the generic connector interface, and thus a user agent could equally be an automated robot performing information retrieval for an indexing service, a personal agent looking for data that matches certain criteria, or a maintenance spider busy patrolling the information for broken references or modified content [39].”
But, “Not surprisingly, this exactly matches the user interface of a hypermedia browser.” That is, the client adapts itself to its current representation, limited only by the client’s cleverness.
REST Applied takes those ideas to solve our problems, as seen in some examples from Rest in Practice and in procurement through REST.
Frameworks/libraries used
Restfulie gives better HTTP support to HTTP libraries and provides a REST framework, while Mikyung allows you to create your REST clients. With both of them you are ready to apply REST to enterprise problems.
Mikyung stands for “beauty capital” in Korean, in an attempt to express what a beautiful REST client could look like when following REST constraints.
REST maturity model
Not yet REST
How do we achieve REST? Leonard Richardson’s model was widely commented on, and Martin Fowler posted about it for “Rest in Practice” (a book I recommend reading). But what is left out of REST in Richardson’s model, and why?
According to his model, level 3 adds hypermedia support, leveraging a system through the use of linked data, a requirement for a REST architecture. But HATEOAS alone does not imply REST, as Roy stated back in 2008.
Remember how method invocation on distributed objects allowed you to navigate through objects and their states? The following sample exemplifies such situation:
orders = RemoteSystem.locate().orders();
System.out.println(orders.get(0).getProducts().get(0));
receipt = orders.get(0).payment(payment_information);
System.out.println(receipt.getCode());
But what if the above code was an EJB invocation? If navigating through relations is REST, implementing EJB’s protocol through HTTP would also be REST, because linked data is also present in EJB’s code, although it lacks a uniform interface.
While Richardson’s model gets close to REST on the server side, Rest in Practice goes all the way to a REST example, describing the importance of semantics and media types. The rest of this post will explain what was left out of this “REST services” model and why, proposing a model that encompasses REST, not just REST under HTTP; the next post, with a video, describes how to create a REST system.
What is missing?
Did the previous code inspect the relations and state transitions and adapt accordingly?
It did not choose a state transition; it contains a fixed set of instructions to be followed, no matter which responses are given by the server. If the API in use is HTTP and the server returns a “Server too busy” response, a REST client would try again 10 minutes later; but what does the above code do? It fails.
We are missing the step where REST clients adapt themselves to the resource state. Interaction results are not expected the way we are used to in other architectures. REST client behavior was not modeled in Richardson’s model because that model only thought about server-side behavior.
This is the reason why there should be no such thing as “REST web services” or “REST services”. In order to benefit from a REST architecture, both client and server should stick to REST constraints.
Richardson’s server + http model
Semantically meaningful relations must be understood by the client, and because of that we need a model which describes how to create a REST system, not just a REST server.
An important point to note is that this model is pretty good at showing a REST server’s maturity over HTTP, but it limits REST analysis to the server and to HTTP.
A REST architecture maturity model
For all those reasons, I propose a REST maturity model which is protocol independent and covers both consumer and provider aspects of a REST system:
Trying to achieve REST, the first step is to determine and use a uniform interface: a default set of actions that can be taken on each well-defined resource. For instance, Richardson assumes HTTP and its verbs as the uniform interface for a REST-over-HTTP architecture.
The second step is the use of linked data to allow a client to navigate through a resource’s states and relations in a uniform way. In Richardson’s model, this is the usage of hypermedia as connectedness.
The third step is to add semantic value to those links. Relations defined as “related” might have significant value for some protocols, but less value for others; “payment” might make sense for some resources, but not for others. The creation and adoption of meaningful media types allows, but does not imply, client code being written in an adaptable way.
The fourth step is to create clients in a way that decisions are based only on a resource representation’s relations, plus its media type understanding.
All of the above steps allow servers to evolve independently of a client’s behavior.
The last step is implied client evolution. Code on demand teaches clients how to behave in specific situations that were not foreseen, e.g. a new media type definition.
Note that no level mentions a specific protocol such as HTTP, because REST is protocol independent.
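The fourth step can be sketched in plain Ruby (illustrative names, not a real framework API): the client keeps a handler table for the relations its media type defines and ignores everything else, so new server-side relations never break it.

```ruby
# The relation names ("payment", "cancel") would come from the media type
# definition; the handler table below is this client's understanding of it.
HANDLERS = {
  "payment" => ->(uri) { "paying at #{uri}" },
  "cancel"  => ->(uri) { "cancelling at #{uri}" }
}

# Decide what to do based only on the relations present in the representation.
def react(links)
  links.map { |link|
    handler = HANDLERS[link["rel"]]
    handler && handler.call(link["href"])   # unknown rels are simply ignored
  }.compact
end
```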
The following post will describe an example of how to create a REST system using the above maturity model as a guide.
Contextual links in hypermedia content
Because resource metadata is sent through HTTP headers on the human web, we usually assume it should be done the same way on the RESTful web.
With the overall public acceptance of the Link header, I started to worry that some of the metadata that is important to dynamic resources would not be so easily understood by clients.
You can keep reading this post at my company's blog here: better design, easier reading.
REST is crawling: early binding and the web without hypermedia
The most frequently asked question about REST in any presentation: why is hypermedia so important to our machine-to-machine software?
Isn't early binding through fixed URIs, together with HTTP verbs, headers and response codes, better than what we had been doing earlier?
An approach that makes real use of all HTTP verbs, headers and response codes already presents a set of benefits. But there is more than the Accept header, and more than the 404, 400, 200 and 201 response codes: real use means not forgetting important verbs such as PATCH and OPTIONS, and supporting conditional requests. Not implementing features such as automatic 304 handling (for conditional requests) means not using HTTP headers and response codes as they can be used, but just passing this information on to your system.
But if such an approach already provides so many benefits, why would someone require machine-to-machine software to use hypermedia? Isn't it good enough to write code without it?
The power of hypermedia is related to software evolution, and if you think about how your system works right now (its expected set of resources and allowed verbs), hypermedia content might not help. But as soon as it evolves and creates a new set of resources, building unforeseen relations between them and their states (thus allowed verbs), that early binding becomes a burden to be felt when requiring all your clients to update their code.
Google and other web search engines are powerful systems that make use of the web. They deal with URIs, HTTP headers and result codes.
If Google's bot was statically coded and incapable of handling hypermedia content, it would require an initial (coding-time or hand-uploaded) set of URIs telling it where the pages on the web are, so it could retrieve and parse them. If any of those resources created a new relationship to other ones (and so on), Google's early-binding, static-URI bot would never find out.
Such a bot would only work with one system, one specific domain application protocol, one static site. Google would not be able to spider any other website but that original one, making it reasonably useless. Hypermedia is vital to any crawling or discovery related system.
Creating consumer clients (such as Google's bot) with early binding to relations and transitions does not allow system evolution to occur in the way that late binding does, and some of the most amazing machine-to-machine systems on the web to date are based on the web's dynamic nature, parsing content through hyperlinks and their semantic meaning.
Although we have chosen to show Google and web search engines as examples, any other web systems that communicate with a set of unknown systems (“servers”) can benefit from hypermedia in the same way.
Your servers can only evolve their resources, relations and states without requiring client rewrites if your code allows service crawling.
REST systems are based on this premise: crawling your resources and accessing their well-understood transitions through links.
While important systems have noticed the semantic value and power of links to their businesses, most frameworks have not yet helped users accomplish late binding following the principles mentioned above.
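Service crawling can be sketched with a toy in-memory web (a hypothetical structure, no real HTTP involved): only the entry point is known up front; every other resource is discovered through links.

```ruby
# Each resource lists links to other resources. Nothing is hardcoded except
# the entry point; new resources and relations are found by following links,
# never by changing the client.
def crawl(web, entry)
  seen = []
  queue = [entry]
  until queue.empty?
    uri = queue.shift
    next if seen.include?(uri)
    seen << uri
    (web[uri] || []).each { |linked| queue << linked }
  end
  seen
end
```

If the server later adds a link from "/" to a brand-new resource, the same client discovers it on its next crawl, with zero code changes.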
When should I start a REST initiative
Restfulie’s release, centered on hypermedia support, brought a lot of attention back to the HATEOAS idea, and the old question arrives again: is it worth investing money or time building a fully REST system in my company?
A full REST architecture implies many choices that some prefer to leave out, but it is interesting to see how people have reacted to REST in the last few years.
Using Google Insights, the first thing to note (a biased query due to the selected keywords?) is a continuing increase of interest in REST since 2004 in the programming area.
Surely, this cannot be taken as a simple ‘SOA is dead, long live REST’. In this result for the query, red means ‘soa’ and blue means ‘rest’.
A second search including ‘web services’ and ‘web service’ shows a decline for those terms. For those who do not consider REST a web service, due to the general notion of WS being related to SOAP and WS-* stacks, this is a positive sign. The following result shows green and yellow for ‘web services’ and ‘web service’, blue for ‘rest’ and red for ‘soa’.
If you compare searches for EJB, SOAP, CORBA and REST (blue is REST):
Finally, comparing those technologies to the growth of programming searches, REST is the only one whose growth is bigger than the average growth of programming searches:
If you are looking for a contemporary architectural style whose adoption is growing, Google seems to point you to REST. This information gives stronger hope to those who are putting their time, money or energy into REST architectures: they might have picked a good path.
A technology evolves faster as more people start using it. Although there is a long way to go with REST and its hypermedia features, it's the only line going up.
When friends and clients ask if it's time to try and learn it… definitely.
Note: this post is not about REST being good, or better than any other solution compared; it is just a collection of interesting outcomes from developers' searches. Remember: there is no silver bullet.
Hypermedia and dynamic contracts: let my bandwidth rest!
“Break it” to scale!
Many systems contain webpages that are much like user “custom pages”, where users can configure what they want to see, and every piece is aggregated from different sources into one single page.
In some cases these are widget-based frameworks, such as Wicket and GWT widgets that can be added to my custom page; in other cases you have aggregating portals.
An example of this kind of application (even though it's not configurable) is a retail website containing four sections on its home page: the top 10, my orders, random items, and weird items.
In this case, all information comes from the same source, but every part has a different probable validity if it is going to be cached. If the page is served as one big chunk of information, it will always be stale due to the random items section. “My orders” becomes stale only when I place a new order and, in the same way, the top 10 becomes stale only when an item is bought enough times to surpass the current 10th place.
One of the main issues with this type of page, which aggregates information from one or many sources with different expiry expectations, is that cached versions in proxies and clients become stale faster than they should for some elements: once one of these providing sources publishes new information or is updated, the entire representation becomes stale.
Martin Fowler described once a well spread approach to allow those pages to be partially cached within local proxies and clients, thus sharing requested representations between multiple users.
The approach
Given the coffee scenario, one would create different json representations:
- http://restbucks.com/top_sellers
- http://restbucks.com/my_orders
- http://restbucks.com/weird_items
- http://restbucks.com/random_items
And finally an aggregating page:
<html>
  <a class="lazy_load" href="http://restbucks.com/top_sellers">Top sellers</a>
  <a class="lazy_load" href="http://restbucks.com/my_orders">My orders</a>
  <a class="lazy_load" href="http://restbucks.com/random_items">Random items</a>
  <a class="lazy_load" href="http://restbucks.com/weird_items">Weird items</a>
And then, for each lazy_load link, we create a div with its content:
<script>
$('.lazy_load').each(function() {
  var link = $(this);
  var uri = link.attr('href');
  var div = $('<div/>').load(uri); // cache hits!
  link.after(div);
});
</script>
</html>
This allows our proxies to cache each component of our page apart from the page itself: whenever one part of the page's content becomes stale in a proxy, only that part needs updating.
In a web where most data can be cached and does not become stale so fast, this technique should usually lessen the amount of data being transferred between client and server.
All one needs to do is properly use the HTTP caching headers.
Remember that if your client supports parallel requests to the server and/or keep-alive connections, the results might be even better.
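The conditional-request side of those caching headers can be sketched as follows (a simplification of what HTTP servers and caches actually do; `serve` and the hash-based responses are illustrative): the server derives an ETag from the representation and answers 304 when the client already holds a fresh copy, so the body never travels twice.

```ruby
require 'digest'

# Sketch of conditional GET: hash the representation into an ETag; when the
# client sends the same tag back in If-None-Match, only a 304 goes over the
# wire instead of the full body.
def serve(body, if_none_match = nil)
  etag = Digest::MD5.hexdigest(body)
  if etag == if_none_match
    { status: 304, body: nil, etag: etag }   # cached copy is still fresh
  else
    { status: 200, body: body, etag: etag }
  end
end
```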
Distributed systems? Linked resources?
Roy Fielding mentions that in the data view in REST systems, “small or medium-grain messages are used for control semantics, but the bulk of application work is accomplished via large-grain messages containing a complete resource representation.”
Pretty much in the same way as with the human web, a distributed system using the web as its infrastructure will gain the same cache benefits as long as they implement correct caching policies through http headers (and correct http verbs).
When your server provides a resource representation linking to a series of other related resources, the client and the proxies along the way are allowed to cache each resource on its own.
This approach results, again, in changes applied to one resource not affecting cached representations of other resources. A stale representation will not affect those accessing other resources within the same context.
Sometimes the decision whether to trade latency for scalability might depend on how you think your clients will use your resources: on the human web mentioned above, the developer knew exactly how clients would access it.
In distributed systems using REST, guessing how resources will be used can be dangerous, as it tightly couples you to this behaviour while published resources can and will be used in unforeseen ways.
Roy’s dissertation seems to apply here to balance things: “a protocol that requires multiple interactions per user action, in order to do things like negotiate feature capabilities prior to sending a content response, will be perceptively slower than a protocol that sends whatever is most likely to be optimal first and then provides a list of alternatives for the client to retrieve if the first response is unsatisfactory”.
Giving information that will help most cases is fine, and providing links to further resource details allows you to balance latency and scalability (due to caching) as you wish.
Dynamic contracts
This is only possible because we have signed dynamic contracts with our clients. They expect us to follow some formal format definition (defined in xhtml) and processes. How our processes are presented within our representations is the dynamic part of the contract.
While the fixed part can be validated with the use of schema validators, the dynamic part (the process), which is guided by our server, needs to be validated by testing the behaviour of our applications: asserting that hypermedia-guided transitions are reflected in our application state.
Nowadays
On the other hand, many contemporary systems use the POST verb and receive a response including many representations at once, or the GET verb without any cache-related headers, thus not profiting from the web infrastructure at all. This could change with one (or both) of the following:
- use the GET verb with cache headers
- use hypermedia and microformats to describe relations between resources
Doing so might bring results similar to hypermedia + GET + cache headers on the human web, and some styles might already provide support for it, although not as a constraint.
Note that in this case hypermedia is not driving the application state, but helping with scalability issues.
Progressive enhancement
Martin notes that this is a kind of progressive enhancement: although its definition is related to accessibility, its bandwidth-control benefits are similar to those of the approach mentioned above.
Any other systems that use hyperlinks to “break” representations and scale?
Hypermedia: making it easier to create dynamic contracts
The human web and christmas gifts
You have been buying books at amazon.com for 5 years now: typing http://www.amazon.com in your browser, searching for your book, adding it to the cart and entering your credit card information.
But this year, on December 15th 2009, something new happens. Amazon has launched an entirely new “christmas discount program”, and on their front page there is a huge ad notifying clients about it.
How do you react?
“Contract violated! I am not buying anything today.”
The key issue in loosely coupled systems is the ability to evolve one side without implying any modifications on the other.
As some REST guys agree, hypermedia content was the factor which allowed such situations to happen on the human web without clients screaming “I don't know what to do now that there is a black friday clearance!” or “there is a new link in this page, let me email the ‘webmaster’ and complain about it”.
In the human web, some contracts are agreed upon and validated through end-to-end tests. Some companies will use tools such as selenium-rc, webdriver or cucumber to drive their tests and ensure that the behaviour their clients expect does not break with a new release of their software.
Those tests do not validate all content, though, giving space for what is called forward compatibility: the system is free to create new functionality without breaking previously expected behaviour.
But my rest-client is not human
On the non-human web, the best-known media type is XML, although it is not hypermedia-capable. There are a couple of ways to create forward- or backward-compatible schemas that check XML structures, but, unfortunately, fixed schemas will usually not invest part of their contract in making it forward-compatible: it's an optional feature.
One option is to create “polymorphic” types through XSD schemas, which gets nasty if your system evolves continuously (not once a year) and you find yourself in a schema-hell situation.
Another easy solution is to accept anything in too many places, which seems odd.
What are we missing then? According to Subbu Allamaraju, in RESTful applications, “only a part of the contract can be described statically, and the rest is dynamic and contextual”: you tell your clients that they can trust you not to break the static part (you might use some schema validation for that), and it's up to you, on the server side, not to break the dynamic part.
Some might think it sounds too loose… let’s recall the human web again:
- xhtml allows you to validate your system’s fixed contract
- it’s up to you not to remove an important form used throughout the buying process
So, what are the dynamic parts of my “contract”?
In a RESTful application the contract depends on its context, which is highly affected by three distinct components:
1. your resource’s state
If a person's application to open an account was denied, your resource representation will not offer a “create_loan” transition. A denied application is information about the resource's state.
As your company and application evolve, it's common to find ourselves in a position where new states appear.
2. your resource’s relations
In a book store (i.e. amazon a few years ago), a book might have a category associated with it so you can access other similar books:
<book>
  <name>Rest if you do not want to get tired</name>
  <link rel="category" href="http://www.caelumobjects.com/categories/self-help" />
</book>

A couple of years later, your system might add extra relations, such as “clients who bought this book also recommend”:

<book>
  <name>Rest if you do not want to get tired</name>
  <link rel="category" href="http://www.caelumobjects.com/categories/self-help" />
  <link rel="recommendation" href="http://www.caelumobjects.com/books/take-a-shower-with-a-good-soap-if-you-need-to-rest" />
</book>

As your company and application evolve, it's common to find ourselves in a position where new relations appear.
3. your resource’s operations
In a REST application, your resource's operations are represented by HTTP verbs: supporting a new one will not affect clients which use the other available verbs.
In the RPC/web services world, new operations would be implemented by creating new remote procedures or services.
But how can my clients be sure that I will not break the dynamic contract?
Pretty much in the same way that you do in the human web: it’s your word.
In the human web, how do we guarantee that we will not remove or break some functionality the user expects to be there? We automatically test its behaviour end to end.
Our word (our tests) is the only assurance our clients have that we will not break their expectations. The same holds on the non-human web.
The dynamic contract should be thoroughly tested in order not to break our clients' expectations.
There are other approaches (such as client-aware contracts) which might add some extra coupling between both sides.
HTTP + XML + Atom gives us the possibility to work with both the fixed (schema-validated) and the dynamic (test-validated) parts of the contract.
As Bill Burke pointed in a comment, “you can design your XML schemas to be both flexible and backward compatible ” and “companies, users, developers desire this contract”.
Those are the good points of using schemas, but not everyone uses them in a flexible and backward-compatible way. Even those who do might have a hard time supporting it, e.g. having to maintain one entry point for each version of their schemas.
That's when we can combine the good points of schema validation, as Bill pointed out, with the easy evolution advantages of a dynamic contract: as we do on the human web.
By using dynamic contracts such as XML+Atom following the Must Ignore rules, forward and backward compatibility is gained by default, independently of what the user does (assuming that tests are a must in any solution).
Dynamic contracts also give hints to frameworks, as they guide you on what your user can and cannot do or access, though maybe not to tools, in a different fashion from fixed contracts: with a fixed schema I would be able to pre-generate my classes, while with dynamic schemas the framework injects methods.
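The Must Ignore rule on the client side can be sketched in a few lines of Ruby (illustrative code, not Restfulie's actual parser): only the elements the client knows are read, so the server may add new elements at will without breaking anything.

```ruby
require 'rexml/document'

# The client's knowledge of the media type: the fields it understands.
KNOWN_FIELDS = %w[value to-date]

# Parse an XML bill, silently skipping any element we do not know.
def parse_bill(xml)
  bill = {}
  REXML::Document.new(xml).root.elements.each do |element|
    bill[element.name] = element.text if KNOWN_FIELDS.include?(element.name)
  end
  bill
end
```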
That's why we take an approach which leads programmers to adopt XML+Atom. The entry point of the Restfulie framework is loose evolution.
Its first example, its documentation and its samples do not focus on how easy it is to use nice URIs and the four most famous HTTP verbs, but on how easy it is to evolve your system using hypermedia and HTTP: URIs come soon afterwards.
And it seems to be working fine so far: the first developers using it in live systems have already adopted hypermedia content as a way to guide clients through their systems.
Restfulie support in dynamic contracts
Matt Pulver's extension to Rails allows one to instantiate types with regard to their ActiveRecord relations and attributes, but it requires every XML element to be present (strong coupling to the data structure presented by the server).
Using Jeokkarak (Korean chopsticks), Restfulie instantiates objects matching your local data structure, supporting fields defined in your attributes and inserting extra fields for those elements unknown to your model.
For example, if you have a model as:
class Bill
  attr_accessor :value, :to_date
end
And the following xml:
<bill>
  <value>100</value>
  <to-date>10/10/2010</to-date>
  <taxes>0.07</taxes>
</bill>
The result is a dynamic object capable of answering to:
bill = Bill.from_web uri
puts bill.value
puts bill.to_date
puts bill.taxes
If your model was ready to accept such XML, Restfulie will do the job; if it doesn't recognize an attribute, the attribute will still be available to you.
That's the default Restfulie behaviour: to allow the other party to evolve their dynamic contract (and even parts of the fixed one) by default, without any extra effort on your side.
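For illustration, here is one way such dynamic instantiation could be implemented in plain Ruby (a sketch under my own assumptions, not Restfulie's actual Jeokkarak code): known attributes land on the model's accessors, while unknown elements remain reachable via method_missing.

```ruby
require 'rexml/document'

# Hypothetical model: known attributes are declared, everything else from
# the XML is kept in a side table and exposed dynamically.
class DynamicBill
  attr_accessor :value, :to_date

  def initialize(xml)
    @extras = {}
    REXML::Document.new(xml).root.elements.each do |el|
      name = el.name.tr('-', '_')
      if respond_to?("#{name}=")
        send("#{name}=", el.text)   # field the model already knows
      else
        @extras[name] = el.text     # unforeseen field: keep it anyway
      end
    end
  end

  def method_missing(name, *args)
    @extras.fetch(name.to_s) { super }
  end

  def respond_to_missing?(name, include_private = false)
    @extras.key?(name.to_s) || super
  end
end
```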