Guilhermesilveira's Blog

as random as it gets

Posts Tagged ‘restful’

REST maturity model

with 3 comments

Not yet REST

How do we achieve REST? Leonard Richardson’s model has been widely commented on, and Martin Fowler posted on “Rest in Practice” (a book I recommend reading). But what is left out of REST in Richardson’s model, and why?

According to his model, level 3 adds hypermedia support, leveraging a system through the use of linked data – a requirement for a REST architecture. But HATEOAS alone does not imply REST, as Roy stated back in 2008.

Remember how method invocation on distributed objects allowed you to navigate through objects and their states? The following sample exemplifies such a situation:


orders = RemoteSystem.locate().orders();
order = orders.get(0);
System.out.println(order.getProducts().get(0));
receipt = order.payment(paymentInformation);
System.out.println(receipt.getCode());

But what if the above code was an EJB invocation? If navigating through relations were REST, implementing EJB’s protocol over HTTP would also be REST, because linked data is also present in EJB’s code – although it lacks a uniform interface.

While Richardson’s model gets close to REST on the server side, Rest in Practice goes all the way to a REST example, describing the importance of semantics and media types. The rest of this post explains what was left out of this “REST services” model and why, proposing a model that encompasses REST – not just REST over HTTP – while the next post, with a video, describes how to create a REST system.

What is missing?

“The model application is therefore an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations.”

Did the previous code inspect the relations and state transitions and adapt accordingly?
It did not choose a state transition: it contains a fixed set of instructions to be followed, no matter which responses the server gives. If the API in use is HTTP and the server returns a “Server too busy” response, a REST client would try again 10 minutes later – but what does the above code do? It fails.
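To make the contrast concrete, here is a minimal sketch – in Ruby, with an illustrative URI and retry policy, not any specific framework’s API – of a client that inspects the response and adapts:

require 'net/http'
require 'uri'

# A client that adapts to the server's state instead of failing.
def get_adapting(uri, attempts = 3)
  attempts.times do
    response = Net::HTTP.get_response(uri)
    # "Server too busy": wait as the server instructs (or 10 minutes)
    # and try again, instead of blindly following fixed instructions.
    return response unless response.code == '503'
    sleep((response['Retry-After'] || 600).to_i)
  end
  nil
end

orders = get_adapting(URI('http://example.com/orders'))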

We are missing the step where REST clients adapt themselves to the resource state. Interaction results are not fixed expectations, as they used to be in other architectures. REST client behavior was not modelled in Richardson’s model because that model only considered server-side behavior.

This is the reason why there should be no such thing as “REST web services” or “REST services”. In order to benefit from a REST architecture, both client and server must stick to the REST constraints.

Richardson’s server + HTTP model

Semantically meaningful relations are understood by the client, and because of that we need a model which describes how to create a REST system, not just a REST server.

An important point to note is that this model is pretty good at showing a REST server’s maturity over HTTP, but it limits the REST analysis to the server and to HTTP.

A REST architecture maturity model

For all those reasons, I propose a REST maturity model which is protocol independent and covers both the consumer and provider aspects of a REST system:

Rest Architecture Maturity Model

Trying to achieve REST, the first step is to determine and use a uniform interface: a default set of actions that can be taken on each well-defined resource. For instance, Richardson assumes HTTP and its verbs as the uniform interface for a REST-over-HTTP architecture.

The second step is the use of linked data to allow a client to navigate through a resource’s states and relations in a uniform way. In Richardson’s model, this is the use of hypermedia as connectedness.

The third step is to add semantic value to those links. Relations defined as “related” might have significant value for some protocols but less for others; “payment” might make sense for some resources but not for others. The creation and adoption of meaningful media types allows – but does not imply – client code being written in an adaptable way.

The fourth step is to create clients in such a way that decisions are based only on a resource representation’s relations, plus the understanding of its media type.
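As a hedged sketch of such a client – the representation, element names and rel values below are illustrative assumptions:

require 'rexml/document'

# The client decides its next step from the relations present in the
# representation, not from URIs or flows fixed at coding time.
order_xml = <<~XML
  <order>
    <link rel="payment" href="http://example.com/orders/5/payment" />
    <link rel="cancel" href="http://example.com/orders/5" />
  </order>
XML

doc = REXML::Document.new(order_xml)
payment = REXML::XPath.first(doc, "//link[@rel='payment']")
if payment
  puts "pay at #{payment.attributes['href']}"
else
  # the transition is not offered in this state: adapt, do not fail
end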

All of the above steps allow servers to evolve independently of a client’s behavior.

The last step is implied client evolution: code on demand teaches clients how to behave in specific situations that were not foreseen, e.g. a new media type definition.
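As a deliberately naive illustration – the endpoint is hypothetical, and a real system would sandbox downloaded code – code on demand could look like this:

require 'net/http'
require 'uri'

# The server ships behaviour for a media type the client was not
# written to understand; the client evaluates it on arrival.
handler_source = Net::HTTP.get(URI('http://example.com/handlers/vnd-order-xml.rb'))
handler = eval(handler_source) # assumed to evaluate to a lambda
puts handler.call('<order><code>42</code></order>')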

Note that no level mentions a specific protocol such as HTTP, because REST is protocol independent.
The following post will describe an example of how to create a REST system using the above maturity model as a guide.

Written by guilhermesilveira

April 13, 2010 at 9:00 am

Posted in restful


Restfulie at RailsConf 2010

leave a comment »

Fabio Akita, from Locaweb, is presenting a session on Restfulie and becoming truly REST in Rails.

With the help of Caue Guerra, George Guimarães, and many others, Restfulie keeps growing and implementing features that we still expect from REST client APIs.

For those who are going to RailsConf this year and want to create consumer and provider systems with less coupling, do not miss Fabio’s talk.

Everything started when we decided to stop pretending…

Written by guilhermesilveira

February 22, 2010 at 8:55 pm

Posted in restful, ruby


REST is crawling: early binding and the web without hypermedia

with 6 comments

The most frequently asked question about REST in any presentation is: why is hypermedia so important to our machine-to-machine software?

Isn’t early binding through fixed URIs, while using HTTP verbs, headers and response codes, already better than what we have been doing so far?

An approach that makes real use of all HTTP verbs, headers and response codes already presents a set of benefits. But there is more than the Accept header, more than the 404, 400, 200 and 201 response codes: real use means not forgetting important verbs such as PATCH and OPTIONS, and supporting conditional requests. Not implementing features such as automatic 304 handling (for conditional requests) means not using HTTP headers and response codes as they can be used, but merely handing this information over to your system.
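For instance, here is a hedged sketch of a conditional request done by hand in Ruby – the URI is illustrative, and this is exactly the kind of work a REST client library should automate:

require 'net/http'
require 'uri'

uri = URI('http://example.com/articles/17')
first = Net::HTTP.get_response(uri)

# Revalidate using the ETag from the first response; a 304 means the
# cached body is still fresh and no representation is transferred.
request = Net::HTTP::Get.new(uri)
request['If-None-Match'] = first['ETag']
second = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
body = second.is_a?(Net::HTTPNotModified) ? first.body : second.body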

But if such an approach already provides so many benefits, why would someone require machine-to-machine software to use hypermedia? Isn’t it good enough to write code without it?

The power of hypermedia is related to software evolution; if you think only about how your system works right now (its expected set of resources and allowed verbs), hypermedia content might not help. But as soon as the system evolves and creates a new set of resources, building unforeseen relations between them and their states (and thus allowed verbs), that early binding becomes a burden, felt when all your clients are required to update their code.

Google and other web search engines are powerful systems that make use of the web. They deal with URIs, HTTP headers and result codes.

If Google’s bot were a statically coded bot, incapable of handling hypermedia content, it would require an initial set of URIs – fixed at coding time or hand-uploaded – telling it where the pages on the web are, so it could retrieve and parse them. If any of those resources created a new relationship to other ones (and so on), Google’s early-binding, static-URI bot would never find out.

Such a bot would only work with one system, one specific domain application protocol, one static site. Google would not be able to spider any website but that original one, making it reasonably useless. Hypermedia is vital to any crawling or discovery related system.
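A toy crawler sketch in Ruby makes the point – it starts from a single entry point and discovers every other resource only through the links in each representation (the URIs are illustrative assumptions):

require 'net/http'
require 'uri'
require 'rexml/document'

visited = {}
queue = [URI('http://example.com/')]

until queue.empty?
  uri = queue.shift
  next if visited[uri.to_s]
  visited[uri.to_s] = true

  response = Net::HTTP.get_response(uri)
  next unless response.is_a?(Net::HTTPSuccess)

  # Every link found feeds the crawl: late binding, no fixed URI list.
  doc = REXML::Document.new(response.body) rescue next
  REXML::XPath.each(doc, '//link[@href]') do |link|
    queue << URI.join(uri.to_s, link.attributes['href'])
  end
end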

Creating consumer clients (such as Google’s bot) with early binding to relations and transitions does not allow system evolution to occur in the same way that late binding does, and some of the most amazing machine-to-machine systems on the web to date are based on its dynamic nature, parsing content through hyperlinks and their semantic meaning.

Although we have chosen Google and web search engines as examples, any other web system that communicates with a set of unknown systems (“servers”) can benefit from hypermedia in the same way.

Your servers can only evolve their resources, relations and states without requiring client rewrites if your code allows service crawling.

REST systems are based on this premise: crawling your resources and being able to access their well-understood transitions through links.

While important systems have noticed the semantic value and the power of links to their businesses, most frameworks have not yet helped users accomplish late binding following the principles mentioned above.

Written by guilhermesilveira

February 7, 2010 at 8:00 am

Posted in restful, soa


Scaling through REST: why REST clients require cache support

with 2 comments

It’s common to find developers struggling with their clients’ browser caches and proxies in order to get their applications running as expected: some of them actually view cache options as a bad thing.

Actually, HTTP caches present a few advantages, the two most important being the ability to serve more clients at the same time without buying more expensive hardware (or horizontally scaling your system), and the avoidance of excessive bandwidth consumption where it can be saved or is expensive.

A well known tutorial on how web caches work was written by Mark Nottingham. Mark has also been involved with the Link header specification and developed Redbot, a clever tool that inspects your pages to catch cache-related issues you might be facing and to improve your application’s scalability: it is all connected to REST architectures.

Linked data is the basis for HATEOAS systems, while HTTP caching supports higher scalability in such an architecture.

Imagine a theoretical scenario where a huge content provider application contains hundreds of thousands of articles that are frequently accessed in your country. Such an application might have a few pages that change often, while others do not.

By adding a simple “Cache-Control” header to your page, all existing cache layers between your server and your client will hold the resource representation in memory for two hours:


Cache-Control: max-age=7200

In Restfulie (Rails) it can be achieved by providing some cache information to your resource:


class OrdersController < ApplicationController
  cache.allow 2.hours
end

Now three cache systems can leverage such a header.

The browser’s cache will use the previously retrieved representation until it expires, and might use it even after it has expired if you did not provide the must-revalidate option. This saves you bandwidth and server CPU consumption.

A cache proxy situated within the user’s network, or anywhere between the server and the client machine, will serve the previously retrieved representation, saving you bandwidth outside your network and server CPU consumption.

A reverse proxy can cache the representation within the server’s network and save CPU consumption. This approach has been widely adopted in order to share cached representations amongst different consuming applications/users.

All three savings translate into an easier-to-scale application: you do not need any paid middleware, fancy stack or load balancers, although they might help. It saves you complexity, time and money.

There is much more you can do with cache headers (Last-Modified, ETag and so on), and REST libraries should make them easy to use, apart from supporting local caches.
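On the server side, frameworks already help with these headers. A sketch in plain Rails – not Restfulie-specific, and the controller and model names are illustrative – where fresh_when sets ETag and Last-Modified and answers conditional requests automatically:

class ArticlesController < ApplicationController
  def show
    @article = Article.find(params[:id])
    # Replies 304 Not Modified when the client's cached copy is fresh.
    fresh_when etag: @article, last_modified: @article.updated_at
  end
end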

Finally, in syndication-based systems, or in any other system based on heavy machine-to-machine communication, a local cache might not be able to handle the large volume of cache hits. In such systems, it is a common approach to use distributed cache systems, and Restfulie allows you to plug in your own cache provider.

For example, a distributed cache like Memcached could be used by simply implementing three methods:


class MemcachedCache

  def put(url, request, response)
    # save it into memcached
  end

  def get(url, request)
    # retrieves from cache, if available
  end

  # optional implementation
  def clear
    # clears the cache
  end

end

Restfulie.cache_provider = MemcachedCache.new
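For instance, a hedged sketch of such a provider backed by memcached through the Dalli gem – what exactly gets stored and how it is keyed are assumptions here:

require 'dalli'

class MemcachedCache
  def initialize
    @client = Dalli::Client.new('localhost:11211')
  end

  def put(url, request, response)
    @client.set(url, response) # assumes the response marshals cleanly
  end

  def get(url, request)
    @client.get(url)
  end

  def clear
    @client.flush
  end
end

Restfulie.cache_provider = MemcachedCache.new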

Most widely used HTTP clients implement only low-level features, leaving you to handle requests and responses on your own and processing only the basic request headers.

The difference between HTTP client libraries and REST client libraries is that the latter should implement further HTTP API processing, while the former merely give access to the previously mentioned low-level API.

Because caching is part of the HTTP API, and one of the key factors that made the web scale as we know it, Restfulie supports it out of the box (along with ETag, Last-Modified and 304).

Not only does one write less code to process the responses, but one also leverages one’s client and server applications.

Note: I am moving my posts to our company’s blog, the next post will be just an announcement. Comments can be made either here or there.

Written by guilhermesilveira

January 26, 2010 at 8:00 am

Posted in restful


Quit pretending, use the web for real: the c# client

with one comment

The first post of the ‘Quit pretending, use the web for real’ series showed how one could use Restfulie for Rails to leverage an application; the second one described the same approach for a Java client and server implementation using VRaptor.

This third post is a short description/announcement of the Restfulie C# client, which has just been released. Luiz Costa, a Caelum instructor, and Sergio Junior have developed the client, which resembles Restfulie’s Rails client API thanks to C#’s dynamic nature.

You can access the representation’s elements:

dynamic order = entry.At("http://www.caelum.com.br/orders/3.xml").Get();
Console.WriteLine(order.product);

And navigate through links:

Console.WriteLine(order.Related().name);
order.Cancel();

As usual, its source code can be found at GitHub and is released under the same license. The C# client dll can be downloaded at Google Code. Luiz has posted an announcement in Portuguese on his blog.

Written by guilhermesilveira

January 14, 2010 at 9:00 am

Posted in c#, restful


Transactions do not exist in a Restful world…

with 6 comments

Due to the latest posts on InfoQ related to Restfulie, my work at Caelum Objects involved a presentation at a client, “Beginning a REST initiative” (based on Ian’s work), and the question came up: “but how do I control transactions without a custom software stack to help me?”

The answer was, “you do not need to”.

Restwiki has an old entry on how to implement transaction support over HTTP using some non-standard HTTP headers.

The idea was not new: Roy Fielding mentioned in an old mail that this extra HTTP header could be a solution, and he later seemed to change his mind about it, according to an InfoQ news item.

In practice, most ideas are based on a transaction being a resource named “Transaction”: an idea heavily based on HTTP and URIs, but forgetting about HATEOAS – again.

In the human web, how does one buy some products? Every product is added to the shopping basket, which then generates the order. Does the user create a transaction before processing his order?

The human being behind the computer did not create a transaction: the browser is even unaware of that concept, but hyperlinks given by the server guided the client through this “transaction”. Where the typical “REST” solution would create a “Transaction” resource and use the non-standard header to support it, a Restfulie one creates a shopping basket:

            Typical “REST” approach       Restfulie
sequence    POST /transaction             POST /basket
            POST /product *               POST /basket/:id/product
            POST /product *               POST /basket/:id/product
commit      POST /transaction/commit *    PUT /basket/:id/payment
rollback    DELETE /transaction/ *        DELETE /basket/:id

* with non-standard http header

The standard way of thinking about transactions is to skip HATEOAS and believe that transactions are resources by themselves. Transactions are not resources, but a tool to implement ACID in, e.g., your databases – not in a web system.

In our example, an order creation maps to internal transactions. In a bank example, a Transfer resource would map to the internal transaction.

By renaming the “transaction” to the real objective of that transaction, one can better map meaningful URIs to resources.

Note that these are only the advantages of valuing URIs over non-standard HTTP headers (manifest hint?): there is no loss of visibility to the layers between the client and the server.

But now one might argue that there are too many entry points. Actually, both implementations contain the same number of “entry” points if there is no hypermedia support: 4. Too many entry points should not be called “entry” points. (entry-hell pattern?)

But do we, in the human web, type in URIs as we go further with our online “transaction”? Do we type in URIs as we do a two-step flight and hotel booking process?

If the entry point POST /basket answers with:


Header
Location: http://caelumobjects.com/basket/5
Content
<basket>
  <link rel="products" href="http://caelumobjects.com/basket/5/products" />
  <link rel="coupon" href="http://caelumobjects.com/basket/5/coupon" />
  <link rel="pay" href="http://caelumobjects.com/basket/5/payment" />
  <link rel="cancel" href="http://caelumobjects.com/basket/5" />
</basket>

Note that our basket – our transaction’s meaning – contains hints on how to operate on it and its relations, pretty much in the same way it would in the human web: dynamically generated links that allow the server to guide the client throughout the process, eliminating the need for extra “entry points”.
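A hedged sketch of a client driving this flow in Ruby – the host and the payment payload are illustrative assumptions:

require 'net/http'
require 'uri'
require 'rexml/document'

# Create the basket, then let the returned links guide the rest.
base = URI('http://caelumobjects.com/basket')
created = Net::HTTP.post_form(base, {})
basket_uri = URI(created['Location'])

doc = REXML::Document.new(Net::HTTP.get(basket_uri))
links = {}
REXML::XPath.each(doc, '//link') do |link|
  links[link.attributes['rel']] = link.attributes['href']
end

# "commit" is following the pay relation; "rollback" would be cancel.
pay_uri = URI(links['pay'])
request = Net::HTTP::Put.new(pay_uri)
request.body = '<payment><amount>100</amount></payment>'
Net::HTTP.start(pay_uri.host, pay_uri.port) { |http| http.request(request) }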

In a hotel and flight booking system, the booking POST result could be represented as:


Header
Location: http://caelumobjects.com/booking/5
Content
<booking>
  <link rel="flights" href="http://caelumobjects.com/booking/5/flights" />
  <link rel="hotels" href="http://caelumobjects.com/booking/5/hotels" />
  <link rel="pay" href="http://caelumobjects.com/booking/5/payment" />
  <link rel="cancel" href="http://caelumobjects.com/booking/5" />
</booking>

Note how the first idea for implementing transactions evolved: from a custom header – which interferes with visibility and creates the need for custom-built clients and layers to understand this instruction, with no server guidance at all – to a system where there is no need to customize your client API or layers, and the server guides the user flow through hypermedia, maturing your system.

Transactions should not be called “transactions”. The basket and transfer resources are examples of that: they are typical server-side implemented transactions that should be actual resources.

Our basket (and thus the transfer) seems to match Roy’s comments at the time:

  • “As far as the client is concerned, it is only interacting with one resource at a time even when those interactions overlap asynchronously.”: the basket or the transfer
  • “There is no “transaction protocol” aside from whatever agreement mechanism is implemented in the back-end in accordance with the resource semantics (in a separate architecture that we don’t care about here).”: you add products to that basket’s list of products, add some coupons and so on
  • “There is no commit protocol other than the presentation of various options to the client at any given point in the application.”: hateoas
  • “There is no need for client-side agreement with the transaction protocol because the client is only capable of choosing from the choices provided by the server.”: transaction protocol? no transaction protocol here, just a simple resource handling

Restfulie – as many other REST frameworks – already supports the first step (running away from the custom header), but goes further: being “hypermedia centric”, it allows the developer to implement this without any effort.

Being opinionated and forcing the adoption of hypermedia as a way to guide our clients throughout our processes might be one step ahead into a more web-friendly (REST, in this case?) world, as Ryan Riley pointed out.

HATEOAS, HTTP and URIs allow you to eliminate the concept of transaction management (and web transaction managers) from your systems as we usually think of them. There are two steps to follow:

1. there are no transactions
2. let the server guide you, do not try to guide him with multiple entry points

Written by guilhermesilveira

December 17, 2009 at 9:00 am

Hypermedia: making it easier to create dynamic contracts

with 4 comments

The human web and christmas gifts

You have been buying books at amazon.com for 5 years now: typing http://www.amazon.com in your browser, searching for your book, adding it to the cart and entering your credit card information.

But this year, on December 15th 2009, something new happens. Amazon has launched an entirely new “christmas discount program” and on their front page there is a huge ad notifying their clients about this new item.

How do you react?

“Contract violated! I am not buying anything today.”

The key issue in loosely coupled systems is the ability to evolve one side without requiring any modifications on the other side.

As some REST guys agree, hypermedia content was the factor which allowed such situations to happen in the human web without clients screaming “I don’t know what to do now that there is a black friday clearance!” or “there is a new link in this page, let me email the ‘webmaster’ and complain about it“.

In the human web, some contracts are agreed upon and validated through end-to-end tests. Some companies will use tools such as selenium-rc, webdriver or cucumber to drive their tests and ensure that the behaviour expected by their clients does not break with a new release of their software.

Those tests do not validate all content, though, leaving space for what is called forward compatibility: the system is free to create new functionality without breaking previously expected behaviour.

But my rest-client is not human

In the non-human web, the most well-known media type is XML, although it is not hypermedia-capable. There are a couple of ways to create forward- or backward-compatible schemas that check XML structures, but – unfortunately – fixed schemas usually do not invest part of their contract in making it forward-compatible: it is an optional feature.

One option is to create “polymorphic” types through XSD schemas, which gets nasty if your system evolves continuously – not once every year – and you find yourself in a schema-hell situation.

One easy solution is to accept anything in too many places, which seems odd.

What are we missing then? According to Subbu Allamaraju, in RESTful applications “only a part of the contract can be described statically, and the rest is dynamic and contextual”: you tell your clients that they can trust you not to break the static contract – you might use some schema validation to do that – and it is up to you, on the server side, not to break the dynamic part.

Some might think it sounds too loose… let’s recall the human web again:

  • xhtml allows you to validate your system’s fixed contract
  • it’s up to you not to remove an important form used throughout the buying process

So, what are the dynamic parts of my “contract”?

In a RESTful application the contract depends on its context, which is highly affected by three distinct components:

1. your resource’s state

If a person’s application to open an account has been denied, your resource representation will not offer a “create_loan” transition. A denied application is information regarding the resource’s state.

While your company and application evolve, it’s common to find ourselves in a position where new states appear.

2. your resource’s relations

In a book store (e.g. amazon a few years ago), a book might have a category associated with it so you can access other similar books:

<book>
 <name>Rest if you do not want to get tired</name>
 <link rel="category" href="http://www.caelumobjects.com/categories/self-help" />
</book>

A couple of years later, your system might add extra relations, such as “clients who bought this book also recommend”:

<book>
 <name>Rest if you do not want to get tired</name>
 <link rel="category" href="http://www.caelumobjects.com/categories/self-help" />
 <link rel="recommendation" href="http://www.caelumobjects.com/books/take-a-shower-with-a-good-soap-if-you-need-to-rest" />
</book>

When your company and application evolve, it’s common to find ourselves in a position where new relations appear.

3. your resource’s operations

In a REST application, your resource’s operations are represented by HTTP verbs: supporting a new one will not affect clients which use the verbs available so far.

In the RPC/web services world, new operations would be implemented by creating new remote procedures or services.
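In HTTP, a client can even ask a resource which operations it currently supports instead of assuming them at coding time – a small Ruby sketch, with an illustrative URI:

require 'net/http'
require 'uri'

uri = URI('http://example.com/orders/5')
request = Net::HTTP::Options.new(uri)
response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
puts response['Allow'] # e.g. "GET, PUT, DELETE"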

But how can my clients be sure that I will not break the dynamic contract?

Pretty much in the same way that you do in the human web: it’s your word.

In the human web, how do we guarantee that we will not remove or break some functionality the user expects to be there? We test its behaviour automatically, end to end.
Our word (our tests) is the only guarantee that we will not break our clients’ expectations. The same holds on the non-human web.

The dynamic contract should be thoroughly tested in order not to break our clients’ expectations.

There are other approaches (such as client-aware contracts) which might add some extra coupling between both sides.

HTTP+XML+ATOM gives us the possibility to work with both the fixed (schema validated) and dynamic (test validated) contract.

As Bill Burke pointed out in a comment, “you can design your XML schemas to be both flexible and backward compatible” and “companies, users, developers desire this contract”.

Those are the good points of using schemas, but not everyone uses them in a flexible and backward-compatible way. Even those who do might have a somewhat hard time supporting it, e.g. having to maintain more than one entry point, one for each version of their schemas.

That’s when we can combine the good points of schema validation, as Bill pointed out, with the easy-evolution advantages of a dynamic contract: as we do in the human web.

By using dynamic contracts such as XML+Atom following the Must Ignore rules, forward and backward compatibility are gained by default, independently of what the user does – assuming that tests are a must in any solution.
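A sketch of the Must Ignore rule from the client’s side – only the relations the client knows are used, and everything else is skipped instead of treated as an error (the rel names are illustrative):

require 'rexml/document'

KNOWN_RELS = %w[category payment]

def known_links(xml)
  doc = REXML::Document.new(xml)
  REXML::XPath.match(doc, '//link').select do |link|
    KNOWN_RELS.include?(link.attributes['rel'])
  end
end

# The "recommendation" link added later is simply ignored:
book = <<~XML
  <book>
    <name>Rest if you do not want to get tired</name>
    <link rel="category" href="http://www.caelumobjects.com/categories/self-help" />
    <link rel="recommendation" href="http://www.caelumobjects.com/books/another" />
  </book>
XML

known_links(book).each { |link| puts link.attributes['href'] }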

Dynamic contracts also give hints to frameworks, as they guide you on what your user can and cannot do or access – but maybe not to tools, in a different fashion from fixed contracts: with a fixed schema I would be able to pre-generate my classes, while with dynamic contracts the framework injects methods.

That’s why we try to take an approach which forces programmers to adopt XML+Atom: the entry point of the Restfulie framework is loose evolution.

Its first example, its documentation and its samples do not focus on how easy it is to use nice URIs and the four most famous HTTP verbs, but on how easy it is to evolve your system using hypermedia and HTTP: URIs come soon afterwards.

And it seems to be working fine so far: the first developers using it in live systems have already adopted hypermedia content as a way to guide clients through their systems.

Restfulie support in dynamic contracts

Matt Pulver’s extension to Rails allows one to instantiate types with regard to their Active Record relations and attributes, but it requires every XML element to be present (strong coupling to the data structure presented by the server).

Using Jeokkarak (Korean chopsticks), Restfulie instantiates objects matching your local data structure, supporting fields defined in your attributes and inserting extra fields for those elements unknown to your model.

For example, if you have a model such as:

class Bill
  attr_accessor :value, :to_date
end

And the following XML:

<bill>
  <value>100</value>
  <to-date>10/10/2010</to-date>
  <taxes>0.07</taxes>
</bill>

The result is a dynamic object capable of answering to:

bill = Bill.from_web uri
puts bill.value 
puts bill.to_date
puts bill.taxes

If your model was ready to accept such XML, Restfulie does the job; if it does not recognize an attribute, the attribute is still available to you.

That’s the default Restfulie behaviour: allowing the other side to evolve their dynamic contract (and even parts of the fixed one) by default, without any extra effort on your side.

Written by guilhermesilveira

December 8, 2009 at 9:26 am