Sergio, thanks for the thoughtful comment. You are correct that the grid community has done a lot of work tying together resources at multiple institutions so that users can run computations spanning administrative domains. Such an effort has not really begun in the cloud space; as far as I know, the players haven't yet been confronted with this type of request. I don't think this is a fundamental difference, though. It seems more like a development stage: cloud vendors simply haven't started coordinating how they expose resources and make them compatible with one another.

I would disagree, however, that the size of the allocation isn't an important factor. As you write, the grid world revolves around batch job scheduling, which is a fundamentally different paradigm from the real-time allocation that clouds offer. The control you have over your system, and the types of systems that can be deployed, are vastly different, and they fundamentally affect both the way applications are written and the way they are managed.

As I've mentioned many times, the most fundamental principle in cloud computing is being able to bring the next server from boot into full production on auto-pilot. This solves many problems: repairing failures, scaling up and down with load, and launching additional deployments for special purposes (staging, demo, test, etc.).

The notion that when you need additional resources you simply go and get them doesn't exist in batch processing. It doesn't even cross your mind. Programmers are used to forking a thread or a process; now they can fork a server.
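To make the "fork a server" analogy concrete, here is a minimal sketch in Python. The thread part is real and runnable; the cloud call at the end is hypothetical pseudocode (the `cloud.run_instance` name and its parameters are my invention, not any vendor's actual API), meant only to show that it is the same idiom one level up the stack.

```python
# The familiar move: fork a thread to handle extra work.
import threading

results = []

def handle(request_id):
    # Each worker handles one unit of work.
    results.append(f"handled {request_id}")

workers = [threading.Thread(target=handle, args=(i,)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(results))  # 3 requests handled by 3 forked threads

# "Forking a server" is the same move at a larger scale: instead of asking
# the OS for a thread, you ask the cloud for a fresh machine that boots
# itself into production (hypothetical API, for illustration only):
#
#   instance = cloud.run_instance(image="app-v7", on_boot=join_load_balancer)
```

The point of the auto-pilot principle is that the second call can be just as routine as the first: failure repair, load scaling, and spinning up a staging environment all reduce to "run one more instance."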