Archive for the ‘Uncategorized’ Category
Today I was talking to Jose Valim about hypermedia and how links can be represented in many ways, using several different media types.
One of the most common questions that appears on rest-discuss every now and then is why Atom links (and similar ones) do not include an attribute that tells the client which verb should be used to access that resource, as HTML does:
form action="..." method="post"
Actually, HTML does not use the method attribute in a form to tell how to access a resource: a resource is always retrieved through GET, always created through POST or PUT, always deleted through DELETE.
HTML uses that attribute to let the server specify which parameters are necessary and which media type to use to retrieve (GET) or create (POST) a resource.
We typically think that the verb is necessary because otherwise clients would not know that they are supposed to do a POST, e.g. for publishing a blog entry:
link href="/posts/1/publish" rel="publish" verb="POST"
That’s because the link relation identifies an action, not a resource. Thinking in terms of resources, and sticking to the uniform interface when it comes to verbs, we would have:
link href="/posts/1/publish" rel="publication"
It is quite clear that POSTing to such a URI would create something, while GETting it would retrieve it.
In those cases, there is no need for explicit verb declaration on the link (or form).
There *might* be a need when specifying query parameters or writing input elements within a form.
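To make the point concrete, here is a minimal Ruby sketch (standard library only) of a client consuming such a verb-less link; the XML snippet mirrors the hypothetical "publication" link above:

```ruby
require 'rexml/document'
require 'net/http'

# A link that names a resource ("publication"), not an action:
doc  = REXML::Document.new('<link href="/posts/1/publish" rel="publication"/>')
href = doc.root.attributes['href']

# No verb attribute is needed: the uniform interface already defines
# what each method means for any resource.
get  = Net::HTTP::Get.new(href)   # GETting retrieves the publication
post = Net::HTTP::Post.new(href)  # POSTing creates one

puts get.method   # prints GET
puts post.method  # prints POST
```

The client decides the verb from what it wants to do with the resource, not from anything the link has to spell out.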
For those who are coming to Baltimore for RailsConf, Fabio Akita will present a session on bringing more REST to Rails with Restfulie. If you are up for a talk on REST, join the session and meet us there or during the event.
“It is something that we believe is worth exploring with the goal of understanding how it will affect the technology impacted dimensions of your enterprise”.
“It is an empirical proof that the web and hypermedia can be used to orchestrate complex business activities”
There are a few videos available at Vimeo for those interested in learning how to use it. Jim Webber and I are giving a talk on REST on May 13th in Sao Paulo, and Fabio Akita is presenting REST using Restfulie at RailsConf 2010 later next month.
Congrats to the entire committer team (Guilherme Silveira, Caue Guerra and George Guimaraes) and all contributors, especially everyone at Abril Digital, who have been contributing a lot of Atom-related code.
This will be my final post in this blog for a while.
All posts were moved to blog.caelumobjects.com and I will keep posting over there.
This is a short post for those who, like me, work offline on several projects during the weekend.
We have released a small command line tool that helps you sync all your master branches at once: by running one command, all your repositories are pulled again (and you come back to your current branch; no automatic rebase).
More info at the scmall website.
You can configure the files to stash data, or even to work with svn if you wish.
Some recent posts here and at our company’s blog dealt with how to use the web as an infrastructure for distributing an algorithm; this post is about how to build a service- or resource-based system using some of the web infrastructure.
Last year, at Falando em Java 2009, Jim Webber spoke about HATEOAS, Microformats and how hypermedia content could add to a service-based system.
Based on his upcoming book and some of the ideas presented on his blog, I have created a small project, with the help of Adriano Almeida and Lucas Cavalcanti at Caelum, to help others explore some of those topics in an easy way.
Two simple projects show off what hypermedia content can do to simple resource/service based applications.
The server application is responsible for a small and simple ordering system where one:
1. sends an order (POST)
2. requests its status (GET)
3. sends an update (POST)
4. cancels it (DELETE)
5. pays it (POST)
Actions 3-5 are made available through action 2: by getting the resource representation, one can parse it and find its available actions – leaving it to human-programmed software to request the next step in its flow or, for those who see flow automation as a good thing, to an automatic generic client to decide what the next step is.
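The five steps above map onto plain HTTP requests. Here is a minimal Ruby sketch with hypothetical paths; a real hypermedia client would extract the URIs for steps 3-5 from the representation returned by step 2 rather than hardcode them like this:

```ruby
require 'net/http'

# Hypothetical paths -- the sample server's actual URIs may differ.
orders = '/order'
order  = "#{orders}/1"

steps = [
  Net::HTTP::Post.new(orders),            # 1. send an order
  Net::HTTP::Get.new(order),              # 2. request its status
  Net::HTTP::Post.new("#{order}/update"), # 3. send an update (assumed path)
  Net::HTTP::Delete.new(order),           # 4. cancel it
  Net::HTTP::Post.new("#{order}/pay")     # 5. pay it (assumed path)
]

puts steps.map(&:method).inspect
# prints ["POST", "GET", "POST", "DELETE", "POST"]
```

Note how the verbs alone, under the uniform interface, already say what each step does to the order resource.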
The client application is responsible for being a “generic client”. A generic client can be something harmful too, so this one is designed for users who want to test their servers and learn how those resources work.
In order to try out the applications, you can use the Google App Engine infrastructure and follow the steps below:
Every resource/service based application has a single entry point or a set of them. From the point of view that such integrated systems should be programmed by humans (and not entirely automated), the consumer application should be aware of the entry point and its HTTP verb, therefore:
Access http://restful-client.appspot.com/ and try to post an order to:
a) entry point: http://restful-server.appspot.com/order
b) method: POST
c) name: content
d) content: any order content (it will not be validated)
This is a well-known entry point, and both the client and the server applications are aware of it and have agreed on it. By following that mindset, one ends up with a set of entry points known to both sides.
Those who look for fully automated services or systems will probably create a single starting point, accessed through a GET request, which returns the set of entry points described above.
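As a sketch, posting the order with Ruby's standard library would look roughly like this; the App Engine apps may no longer be live, so the example only builds the request and leaves the actual send commented out:

```ruby
require 'net/http'
require 'uri'

# The well-known entry point from the post.
uri = URI.parse('http://restful-server.appspot.com/order')

request = Net::HTTP::Post.new(uri.path)
# The "content" parameter holds any order content (it is not validated).
request.set_form_data('content' => 'one calzone, to go')

# Sending it would look like:
# response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
# response.code        # "201" on success
# response['Location'] # the URI of the newly created order
```

The 201 response and its Location header are what hand the client its next resource to work with.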
Running the consumer application
PAY ATTENTION: you might need to run it twice the first time. Google App Engine has a slow startup time that might cause a timeout before succeeding.
After posting the order, you will receive a 201 response with your order location.
You can then view (GET) this order to see its representation. By parsing the representation, you can discover what you can do with your order:
– pay (request a payment POST)
– cancel (request an order DELETE)
– update (request an order POST)
– view (request an order GET)
At any time you can check your order status by GETting it. After payment, its status will change to PREPARING, and after one minute it will be DONE. If you cancel your order, it will be CANCELED.
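As a sketch of the discovery step, here is how a client might parse a hypothetical order representation to find the advertised transitions; the actual element and rel names used by the sample server may differ:

```ruby
require 'rexml/document'

# A hypothetical order representation with hypermedia links.
xml = <<-XML
<order>
  <status>UNPAID</status>
  <link rel="self"   href="http://restful-server.appspot.com/order/1"/>
  <link rel="pay"    href="http://restful-server.appspot.com/order/1/pay"/>
  <link rel="cancel" href="http://restful-server.appspot.com/order/1"/>
</order>
XML

doc = REXML::Document.new(xml)

# The client discovers what it can do next from the representation itself:
transitions = doc.get_elements('//link').map { |l| l.attributes['rel'] }
puts transitions.inspect  # prints ["self", "pay", "cancel"]

status = doc.elements['//status'].text
puts status  # prints UNPAID
```

Once the order is paid, the server would simply stop including the "pay" and "cancel" links, and a well-behaved client would stop offering those actions.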
At the same time, a Ruby version of the server-side library supporting service-aware resources will be released here soon. The Java version is already being built around VRaptor. The next step is to build a Ruby library – due to the language's dynamic nature, one can create really nice client-side representations of the resources just received from the server.
Those interested in helping us are welcome to mail me.
Although it is a simple server application, it is a good way to show how to use the web as an infrastructure, going a little beyond the (rest-non-ful) CRUD examples found on the web.
It has been a while since we started using unit tests (and other types of tests) in our projects. But test-driven design has always been something that, once in a while, I feel too unskilled to do from the start.
Many people (including myself) believe, for many reasons, that TDD is the way to go… but what happens when I have no clue about what I am building?
Of the last 3 open source projects that I have worked on, only one started with TDD from its conception. Two of those projects are actually tools (TestSlicer and VRaptor), while the other one is a continuous integration server.
The first project is about running integration builds faster by running only the required tests. In other words, it should only run the tests affected by the change log.
The problem with creating this tool is that, while coding it for the first time, it was so unclear how it would work or what exactly it would do that it was impossible to test it prior to creation. The first attempt was to use TDD, and some code was created. After a few days, it was clear that the way the tool was going to achieve its purpose was far too unclear to allow writing integration tests for it. Some days afterwards, two things became clear:
- it was impossible to keep coding it due to the lack of more advanced tests
- it was possible to create such a tool
After the first version was used in production, the conclusion was that adopting, dropping and re-adopting TDD was a great approach for this project: the idea was so unclear that it would have required recoding the project from scratch anyway – again. With the project at such an early stage, and its purposes and ideas evolving too fast in a short period, it felt (and was) counter-productive to keep TDD'ing.
VRaptor started from scratch with TDD and went just fine. We all knew its purpose and had a somewhat clear vision of what we desired (a refactoring-friendly framework), not knowing exactly how to implement it – but in the end, achieving it. A TDD win.
The third project suffered from the same problem as the first one. We just had a short (unclear) glimpse of what we wanted: “run all our tests in parallel” instead of “running our builds in parallel”. But how?
Should it be the job of our agent machines to consume what our servers make available? Or should the servers manage the agents (as Cruise does) to do their job? Should it be implemented through low-level sockets or HTTP-based resources? Everything was so unclear, and changed so fast in the first couple of days, that it was impossible to test first and code afterwards at that time.
After the first trial on a private project, it was clear how to build it, and even clearer what we wanted to achieve, so it was time to refactor and start TDD'ing.
This is the common feeling that I have found bugging people about TDD: whenever your project is a prototype to check whether something is possible, or you are creating something completely new that you have no clear idea about, it seems you should first create the prototype, throw it away and restart with TDD.
Maybe typical web-based apps won’t suffer from this problem, because sprint plannings help get things clear in the developer’s mind. But developing a library or a tool for other developers is not the same type of task. At least during the first few moments…