Cloud Computing vs. Grid Computing

Recently, Rich Wolski (of the UCSB Eucalyptus project) and I were discussing grid computing vs. cloud computing. He made an observation that makes a lot of sense to me, and since he doesn't blog, let me repeat it here. Grid computing has been used in environments where users make few but large allocation requests. For example, a lab may have a thousand-node cluster, and users request all 1,000 nodes, or 500, or 200. Only a few of these allocations can be serviced at a time; the rest have to be scheduled for when resources are released. This is what gives rise to sophisticated batch scheduling algorithms for queuing and placing large parallel computations.
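As a rough illustration of that model, here is a toy sketch (not any particular scheduler; the class, node counts, and FIFO policy are all hypothetical) of a grid-style batch scheduler that queues large requests until enough nodes are released:

```python
from collections import deque

class GridScheduler:
    """Toy grid-style scheduler: large requests queue until nodes free up."""

    def __init__(self, total_nodes=1000):
        self.free_nodes = total_nodes
        self.queue = deque()          # jobs waiting for a big enough slice

    def submit(self, job_id, nodes):
        self.queue.append((job_id, nodes))
        self._dispatch()

    def release(self, nodes):
        self.free_nodes += nodes
        self._dispatch()

    def _dispatch(self):
        # FIFO: start queued jobs as long as the head of the queue fits.
        while self.queue and self.queue[0][1] <= self.free_nodes:
            job_id, nodes = self.queue.popleft()
            self.free_nodes -= nodes
            print(f"job {job_id} started on {nodes} nodes")

sched = GridScheduler()
sched.submit("A", 500)   # starts immediately
sched.submit("B", 800)   # waits in the queue: only 500 nodes are free
sched.release(500)       # A finishes, its nodes come back, B starts
```

Real grid schedulers add priorities, backfilling, and reservations on top of this, but the defining trait is the same: requests wait in a queue rather than being refused.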

Cloud computing is about lots of small allocation requests. Amazon EC2 accounts are limited to 20 servers each by default, and lots and lots of users allocate up to 20 servers out of the pool of many thousands of servers at Amazon. The allocations happen in real time, and there is no provision for queueing a request until someone else releases resources. This is a completely different resource allocation paradigm and a completely different usage pattern, and it results in a completely different way of consuming compute resources.
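By contrast, a cloud-style allocator looks more like the following toy sketch (again hypothetical; the 20-server cap simply mirrors the EC2 default mentioned above): each small request is granted or refused immediately, and nothing is ever queued.

```python
class CloudAllocator:
    """Toy cloud-style allocator: many small requests, a per-account cap, no queue."""

    PER_ACCOUNT_LIMIT = 20            # mirrors the default EC2 per-account cap

    def __init__(self, pool_size=10000):
        self.free = pool_size
        self.in_use = {}              # account -> servers currently held

    def request(self, account, count):
        held = self.in_use.get(account, 0)
        if held + count > self.PER_ACCOUNT_LIMIT or count > self.free:
            return False              # refused on the spot, never queued
        self.in_use[account] = held + count
        self.free -= count
        return True                   # granted in real time

    def release(self, account, count):
        self.in_use[account] -= count
        self.free += count

alloc = CloudAllocator()
print(alloc.request("acct-1", 5))    # True: granted immediately
print(alloc.request("acct-1", 18))   # False: would exceed the 20-server cap
```

The point of the contrast is the failure mode: the grid answer to "not enough capacity" is "wait," while the cloud answer is "no," which only works if the pool is large enough that "no" is rare.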

I always come back to this distinction between cloud and grid computing when people talk about in-house clouds. It's easy to say "ah, we'll just run some cloud management software on a bunch of machines," but it's a completely different matter to uphold the premise of real-time resource availability. If you fail to provide resources when they are needed, the whole paradigm falls apart and users will start hoarding servers, allocating for peak usage instead of current usage.