
Setting Up Sites on EC2 with RightScale

The keys to a successful site setup on Amazon EC2 are scalability and redundancy. RightScale makes this easy by providing ServerTemplates and multi-server deployments. To get started, let's take the simplest case: a single-server setup. We have a free Rails all-in-one ServerTemplate that is excellent not just to play around with, but also to use as a development server, a staging server, or even as a production server for small sites that don't need more horsepower or much redundancy.

Single Server Site

Our Rails all-in-one is described in more detail elsewhere, but you can see on the right what's involved: it runs Apache as a reverse proxy in front of four Mongrel/Rails processes, all backed by a simple MySQL installation. Last but not least, we set up cron jobs that run a mysqldump every 10 minutes and upload it to Amazon S3, so your data is safe in case the instance dies unexpectedly. Apache in the front can be set up to serve static and cached pages, handle HTTP and/or HTTPS, canonicalize the hostname (e.g. redirect http://mysite.com to http://www.mysite.com), and serve up a maintenance page while you're updating your app. Oh, and of course Apache load balances across the four Mongrels too!
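
To make the backup piece concrete, the cron job boils down to something like the sketch below; the bucket name, file paths, and the s3cmd upload tool are stand-ins for whatever your setup uses, not the literal RightScript.

    # /etc/cron.d entry (sketch): dump MySQL and push the dump to S3 every 10 minutes
    */10 * * * * root mysqldump --single-transaction --all-databases | gzip > /tmp/db-dump.sql.gz && s3cmd put /tmp/db-dump.sql.gz s3://my-backup-bucket/db-dump.sql.gz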

Redundant Site

Ready for more? You're almost ready to launch for real, you expect some traffic soon, and you don't want to rely on a single server anymore. Time to upgrade to a fully redundant site architecture using four servers.

Site - Redundant Setup

The setup almost all our customers use consists of two front-end servers and two back-end database servers, giving us full redundancy. We use it for the RightScale site itself! Let's walk through the setup from beginning to end.

It all starts when a user types http://www.mysite.com into the browser. The browser does a DNS lookup and gets two IP addresses, which are the public IPs of the two front-end instances. The browser picks one and tries to connect; if that fails, it rather quickly tries the other, which gives you the fault tolerance you need in case one of the instances dies or has other problems. Also, having multiple IP addresses for your site is the only form of fail-over that browsers support; see this page for additional details.
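
In DNS terms this just means publishing two A records for the same name, one per front-end instance, roughly like this (the IPs below are made-up placeholders; a shortish TTL makes it easier to swap in a replacement instance):

    www.mysite.com.   300   IN   A   203.0.113.10
    www.mysite.com.   300   IN   A   203.0.113.20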

The first thing the request from your browser hits is Apache, which plays the same roles as in the all-in-one server: dealing with SSL, canonicalizing the hostname, serving up static files, putting up a maintenance page, and anything else you might want a full-fledged web server for. For requests destined for your application, Apache acts as a reverse proxy and forwards the request to HAproxy on the same machine.
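
A minimal sketch of that proxy rule, assuming HAproxy listens locally on port 8080 (the port, hostname, and paths here are illustrative, not the exact values from our ServerTemplates):

    # requires mod_proxy and mod_proxy_http
    <VirtualHost *:80>
        ServerName   www.mysite.com
        DocumentRoot /home/webapp/current/public
        ProxyPreserveHost On
        # let Apache serve static assets straight from disk
        ProxyPass /images      !
        ProxyPass /stylesheets !
        ProxyPass /javascripts !
        # everything else is handed to HAproxy on this machine
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>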

HAproxy is a very nice piece of software that proxies and load balances requests to back-end servers. We use it for HTTP here, but it can also do plain TCP load balancing, for example for mail servers. We chose HAproxy because it has good support for health checks and can redispatch requests to alternate servers if a back-end fails mid-way. HAproxy is set up to send a request to each back-end process (Mongrel/Rails in our example) to ensure that it's running properly, and it only forwards requests to servers that respond. Apache can also load balance across multiple back-end servers using mod_proxy_balancer, but it does not include health checks. What this means is that when a server goes down, Apache keeps sending live customer requests to it every few seconds to see whether it has come back up. So while any Mongrel process is down on any server, your customers are impacted because some of their requests are being sent into a black hole. Not nice...
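
To give a feel for it, a stripped-down haproxy.cfg fragment looks roughly like the following; the /ping health-check path, the private IPs, and the Mongrel ports are assumptions for illustration, not our generated config:

    listen rails 127.0.0.1:8080
        mode http
        balance roundrobin
        # probe each Mongrel with a cheap request; only healthy ones get traffic
        option httpchk GET /ping
        # if a server dies mid-request, retry the request on another one
        option redispatch
        retries 3
        server fe1-8000 10.0.0.10:8000 check inter 2000
        server fe1-8001 10.0.0.10:8001 check inter 2000
        server fe2-8000 10.0.0.20:8000 check inter 2000
        server fe2-8001 10.0.0.20:8001 check inter 2000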

HAproxy forwards the request to one of the Mongrel/Rails processes on either of the two servers. Load balancing across both servers is nice because it means you can shut down the Mongrels on one server to update the code without impacting customers at all.

Everything on the front-end servers is open source software except for your application. So we need a way to get your app code onto the instance at boot time, and a way to update that code later. Note that for major upgrades we always recommend launching fresh instances and keeping the old ones around for a day, just in case you want to switch back. (Hey, that's really cheap insurance at only $2.40 per day per server!) We provide two different RightScripts to do minor code updates: one pulls the code from a tarball located on S3, the other does an svn export from your subversion repository. We recommend the S3 route for production use because otherwise starting new servers depends on the availability of your svn repository, and often the svn export is the slowest portion of the entire instance boot process. But sometimes the svn route is just so much more convenient, especially if you're playing with a test setup where you change the code frequently. In addition, for Rails, we set up the app code directory structure the same way capistrano does, so you can point your capistrano config file at your instance and do a "cap update". Again, something we don't recommend for production servers but really handy for test and dev boxes.
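
As a rough sketch, the S3-based update amounts to something like this; the bucket, paths, and the mongrel_cluster restart at the end are assumptions made to keep the example self-contained:

    #!/bin/bash -e
    # sketch: fetch the current release tarball from S3 and swap it in
    RELEASE=/home/webapp/releases/$(date +%Y%m%d%H%M%S)
    mkdir -p "$RELEASE"
    s3cmd get s3://my-app-bucket/myapp-current.tgz /tmp/myapp.tgz
    tar -xzf /tmp/myapp.tgz -C "$RELEASE"
    # capistrano-style layout: "current" points at the active release
    ln -sfn "$RELEASE" /home/webapp/current
    # restart the Mongrels so they pick up the new code
    mongrel_rails cluster::restart -C /etc/mongrel_cluster/myapp.yml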

Behind the front-end servers we place two replicated MySQL instances managed through our Manager for MySQL, with backups to Amazon S3. We take frequent backups from the slave server, where the load of the backup itself doesn't affect production, and daily backups from the master as added security.
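
Under the hood this is standard MySQL master/slave replication; the Manager for MySQL takes care of the wiring, but conceptually the key my.cnf settings are along these lines (server IDs and log names are illustrative):

    # master
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # slave
    [mysqld]
    server-id = 2
    relay-log = mysql-relay-bin
    read-only = 1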

Scalable Redundant Site

For a fully redundant and scalable site we recommend an architecture that is a natural extension of the four-server setup, using more of the same components. We basically add a number of Mongrel/Rails application servers and hook them into the load balancing rotation on the two front-end servers. This array of app servers can then be expanded and contracted as warranted by the load on the site: expand to handle surges in traffic when your PR and marketing efforts pay off, contract at night when the load on your site goes down and you'd rather hold on to your $$. The wonderful thing is that with this setup you are paying for the average cost of your hosting needs, not for a once-a-month peak.

Scalable Setup

If you look closely, you'll see that we're running the app server on the two front-end load-balancing instances. We find that the load balancing takes very few resources and that there's room for some application cycles. Using HAproxy it's easy to send less traffic to the local app servers than to the remote dedicated instances. The reason we keep the app on the front-end instances (as opposed to switching to pure load-balancing instances) is that this way there are always two app servers available even if the array is scaled back to zero servers. Or put differently, when your site is under minimal load at 4 a.m. it scales down to four instances instead of six. If the load balancing or the serving of static files becomes a significant load, it is of course possible to switch off the app serving on the front end or, alternatively, to add two additional front-end load-balancing instances.

The way we currently handle changes to the load balancer config when servers come online is to automatically edit the config file using operational RightScripts and do a seamless restart of HAproxy, which ensures that no connections are dropped during the change.
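
The seamless part comes from HAproxy's soft-reconfiguration flag: a freshly started process takes over the listening socket while the old one finishes its in-flight connections and then exits. Roughly (paths are illustrative):

    # after rewriting haproxy.cfg with the current set of app servers
    haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
            -sf $(cat /var/run/haproxy.pid)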

If you are interested in using our site setups, please don't hesitate to try out the free Rails all-in-one ServerTemplate, and contact us at sales@rightscale.com for more. The multi-server setups are not available in prepackaged form with the free RightScale accounts.


Archived Comments

kai: Say a site requires HTTPS. Looking at the pic, is everything between Apache and HAproxy, HAproxy and Mongrel, and Mongrel and MySQL unencrypted? What's your solution for encrypting all the sensitive data? Security can only be as good as how you deal with the weakest link.

Thorsten: Kai, you are absolutely correct. At the same time, security is a relative concept; there is no absolute security. The back-end traffic is "secured" by Amazon's network configuration and security groups firewall system. For many sites it is acceptable to have the back-end be unencrypted because the threat is crossing the internet, and especially wifi or similar networks at the client end. We would prefer to have a way to re-encrypt the back-end communication, but at the moment this is not so easy to do given the software load balancers out there (if you have a suggestion, we'd love to hear it). I wish we could drop a NetScaler load balancing box into EC2, but I don't see that happening! The interim solution we'd use if a customer asked for it is to run HAproxy in TCP balancing mode, where it connects TCP streams straight through to the back-end server. This would carry the SSL connections all the way through to the app server. We'd then have to put the mysql connections through encrypted tunnels to secure that part as well. All this is perfectly doable, but it's getting awfully close to requirements where the outsourced nature of EC2 may not fit the bill no matter how many encryption layers you use.

Comments

Does your 4-instance architecture support fail-over for the MySQL instances? If the master goes down, will the applications immediately switch over to the slave?
Posted by D. Smith (not verified) | October 29, 2009 | 12:22 AM
