Posts Tagged ‘restfulie’
Expecting specific results means coupling your client to a server's behavior, something we want to minimize.
When there is a required item
And there is a basket
But didn't desire anything
Then desire this item as first item
When there is a required item
And there is a basket
Then desire this item
When it is a basket
And there are still products to buy
Then start again
When there is a payment
Then release payment information
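The goal-oriented rules above can be sketched as a tiny rule interpreter: instead of hard-coding a sequence of URIs, the client inspects the current representation and fires whichever rule matches it. This is an illustrative Ruby sketch; the `Rule` and `Goal` names are hypothetical, not Restfulie's actual DSL:

```ruby
# A minimal goal-driven client: each rule pairs a condition on the current
# representation with the action to take, so the client reacts to what the
# server returned instead of assuming a fixed sequence of steps.
Rule = Struct.new(:condition, :action)

class Goal
  def initialize(rules)
    @rules = rules
  end

  # Fire the first rule whose condition matches the current state;
  # if nothing matches, the state is left untouched.
  def step(state)
    rule = @rules.find { |r| r.condition.call(state) }
    rule ? rule.action.call(state) : state
  end
end

# "When it is a basket and there are still products to buy, start again"
# and "When there is a payment, release payment information":
rules = [
  Rule.new(->(s) { s[:type] == :basket && s[:to_buy].any? },
           ->(s) { s.merge(next_step: :start_again) }),
  Rule.new(->(s) { s[:type] == :payment },
           ->(s) { s.merge(next_step: :release_payment_information) })
]

goal = Goal.new(rules)
```

Because the rules only look at what the representation says it is, a server that renders something unexpected (recommendations, an affiliate link) simply falls through to whichever rule does apply.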
The above code works and can be seen running in the last video of this post. In this REST-modelled process, several types of evolution can happen on the server without breaking the client:
1. After adding a product to the basket, if the server renders recommendations instead of your basket, the client still achieves its goal.
2. If some products are not found, the client is still able to pay for the ones that were.
3. If the server does not have the products but links to another system in an affiliate program, we as clients will never notice it, and still buy the products.
4. If we want to achieve the same goal in other systems that understand the same media type we do, we simply change the entry point.
Note the benefits: backward compatibility, forward compatibility, and we are still capable of achieving our goals without knowing that parts of the protocol have changed. We can even compare prices without changing one line of code, all of it using well-known media types such as Atom. RDFa support can also be implemented.
This power to evolve is not achieved in traditional process modelling, where compatibility is often left aside. It is something that REST and hypermedia brought us.
Now let’s imagine the traditional modelling:
In a typical web services + business process scenario, how would a server tell my client to access another system without a single line of the client changing? It would break compatibility.
If the server now supports only a limited quantity of items in your basket, your client will also break, while the adaptable REST client adapts itself and buys what it can.
If our product or company follows those traditional paths and every change to the service implies heavy change costs for clients, that's a hint that your project is too tightly coupled, and a REST client as described above might help.
Traditional web services decouple a little more than traditional RPC, but still carry an amount of coupling that costs our companies more than they were expecting to pay.
The entire videocast for the third part of REST from Scratch is here:
This code achieves its results using Restfulie 0.8.0, with many thanks to everyone who contributed to maturing this release: Caue Guerra, Luis Cipriani, Éverton Ribeiro, George Guimarães, Paulo Ahagon, Fabio Akita, Fernando Meyer and others.
Is your client API code already adapting itself to changes?
(update: changed some of the “Then” clauses in the DSL to make it easier to spot document messages, as opposed to commands)
This 20-minute video shows how to move from a basic REST API to one that makes use of linked resources and adds semantic value to those links.
Does your REST API do that already? Great.
Otherwise, it’s time to move ahead and decouple your clients a little further from your server.
In this 10-minute video you will learn how to make several representations of one resource available on the server side (XML, JSON, Atom and so on), and how one line of code can access and parse them on the client side.
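On the client side, asking for a specific representation boils down to sending a media type in the Accept header against a single resource URI. A minimal sketch using Ruby's standard `net/http`, with a hypothetical URI; the requests are only built here, never sent:

```ruby
require 'net/http'
require 'uri'

# One resource URI, several representations: the client selects a media
# type through the Accept header instead of hitting format-specific URIs.
def request_for(uri, media_type)
  req = Net::HTTP::Get.new(URI(uri))
  req['Accept'] = media_type
  req
end

# The same resource, negotiated as XML or JSON (example.com is illustrative):
xml_req  = request_for('http://example.com/products/1', 'application/xml')
json_req = request_for('http://example.com/products/1', 'application/json')
```

The server inspects the Accept header and renders the matching representation, so adding a new format later does not require any new URI.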
This video shows how to access basic REST APIs such as Twitter's and CouchDB's, which are based on HTTP verbs and multiple URIs. The following videos will demonstrate how to access more advanced REST APIs.
If you have any questions, join us at our mailing list.
REST was a research result that left us with an open question, as its author suggested: it beautifully solves a lot of problems, but how do we apply it to the contemporary concerns that enterprises have?
The following video shows an example of moving from a typical non-RESTful architecture to adopting REST constraints and creating a buying process against any REST server.
So what is the power behind applied REST?
“REST applied”, as I have exemplified, solves our contemporary concerns, filling the gap between Roy’s description and applications’ usage, opening up a new world of possibilities.
In the same way that REST’s ideas, although they were not called REST at the time, allowed web crawlers to become amazing clients, “REST applied”, as described, can change the way our applications communicate with servers.
Why did we miss it? Because Roy’s description leans on crawling examples, which benefit directly from content type negotiation — that is, different languages, same resource, with Google ranking it afterwards:
“In fact, the application details are hidden from the server by the generic connector interface, and thus a user agent could equally be an automated robot performing information retrieval for an indexing service, a personal agent looking for data that matches certain criteria, or a maintenance spider busy patrolling the information for broken references or modified content.”
But, “Not surprisingly, this exactly matches the user interface of a hypermedia browser.” … the client adapts itself to its current representation, limited only by the client’s cleverness.
Restfulie gives better HTTP support to HTTP libraries and provides a REST framework, while Mikyung allows you to create your REST clients. With both of them you are ready to apply REST to enterprise problems.
Mikyung stands for “beauty capital” in Korean, in an attempt to reproduce what a beautiful REST client could look like when following REST constraints.
For those going to RailsConf this year who want to create consumer and servicing systems with less coupling, do not miss Fabio’s talk.
Everything started when we decided to stop pretending…
The most frequently asked question about REST in any presentation: why is hypermedia so important to our machine-to-machine software?
Isn’t early binding through fixed URIs, while using HTTP verbs, headers and response codes, better than what we have been doing so far?
An approach that makes real use of all HTTP verbs, headers and response codes already brings a set of benefits. But there is more than the Accept header, and more than the 404, 400, 200 and 201 response codes: real use means not forgetting important verbs such as PATCH and OPTIONS, and supporting conditional requests. Not implementing features such as automatic 304 handling (for conditional requests) means not using HTTP headers and response codes as they can be used, but merely passing this information up to your system.
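Automatic 304 handling can be sketched as a small cache that remembers a representation together with its ETag, revalidates with `If-None-Match`, and transparently serves the cached body when the server answers 304. The fetcher lambda below is a stand-in for a real HTTP call, and all names are illustrative:

```ruby
# A sketch of automatic conditional-request support: callers just call
# get and never see the 304; the cache layer handles revalidation.
class ConditionalCache
  def initialize(fetcher)
    @fetcher = fetcher   # ->(headers) { [status, etag, body] }
    @etag = nil
    @body = nil
  end

  def get
    headers = @etag ? { 'If-None-Match' => @etag } : {}
    status, etag, body = @fetcher.call(headers)
    if status == 304
      @body                       # not modified: reuse the cached representation
    else
      @etag = etag                # fresh representation: cache it with its ETag
      @body = body
    end
  end
end

# Simulated server: returns 200 the first time, 304 once the ETag matches.
server = lambda do |headers|
  if headers['If-None-Match'] == '"v1"'
    [304, nil, nil]
  else
    [200, '"v1"', 'basket-v1']
  end
end

cache = ConditionalCache.new(server)
```

Calling `cache.get` twice yields the same body, but the second round trip carries no payload — the point of building conditional requests into the client library rather than into every application.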
But if such an approach already provides so many benefits, why would anyone require machine-to-machine software to use hypermedia? Isn’t it good enough to write code without it?
The power of hypermedia is related to software evolution. If you think only about how your system works right now (its expected set of resources and allowed verbs), hypermedia content might not help. But as soon as it evolves, creating a new set of resources and building unforeseen relations between them and their states (and thus allowed verbs), that early binding becomes a burden, felt when all your clients are required to update their code.
Google and other web search engines are powerful systems that make use of the web. They deal with URIs, HTTP headers and result codes.
If Google’s bot were statically coded and incapable of handling hypermedia content, it would require an initial set of URIs — hard-coded or hand-uploaded — telling it where the pages on the web are, so it could retrieve and parse them. If any of those resources created a new relationship to other ones (and so on), Google’s early-binding, static-URI bot would never find out.
Such a bot would work with only one system, one specific domain application protocol, one static site. Google would not be able to spider any website but the original one, making it reasonably useless. Hypermedia is vital to any crawling or discovery-related system.
Creating consumer clients (such as Google’s bot) with early binding to relations and transitions does not allow system evolution to occur in the way that late binding does, and some of the most amazing machine-to-machine systems on the web to date are based on its dynamic nature, parsing content through hyperlinks and their semantic meaning.
Although we have chosen Google and web search engines as examples, any other web system that communicates with a set of unknown systems (“servers”) can benefit from hypermedia in the same way.
Your servers can only evolve their resources, relations and states without requiring a client rewrite if your code allows service crawling.
REST systems are based on this premise: crawling your resources and being able to access their well-understood transitions through links.
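Service crawling in this sense can be as simple as discovering the next URI from the representation's links at runtime, rather than hard-coding it. A sketch with Ruby's standard REXML over an illustrative basket representation (the XML and URIs are made up for the example, not a real Restfulie payload):

```ruby
require 'rexml/document'

# Late binding in practice: the client never hard-codes the "payment" URI;
# it finds the link with the relation it understands in the representation
# the server just returned.
def link_for(xml, rel)
  doc = REXML::Document.new(xml)
  link = doc.elements.to_a('//link').find { |l| l.attributes['rel'] == rel }
  link && link.attributes['href']
end

basket = <<-XML
<basket>
  <link rel="self"    href="http://example.com/baskets/15" />
  <link rel="payment" href="http://example.com/baskets/15/payment" />
</basket>
XML
```

If the server moves payment to another system tomorrow, only the `href` changes; a client bound to the `payment` relation keeps working, while a client bound to the URI breaks.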
While important systems have noticed the semantic value and power of links to their businesses, most frameworks have not yet helped users accomplish late binding following the principles mentioned above.