Elastic Computing

 Posted Oct 22, 2010, at 3:52 pm in Web

Maarten Balliauw wrote a nice piece (with lots of pretty pictures and diagrams ;> ) about using Azure cloud infrastructure to take up the excess load during periods of peak traffic beyond what your in-house hardware can handle.  The idea is that you run your server app on your existing hardware most of the time, but during periods of peak load you spin up additional server resources in the Azure cloud and shunt your excess load (or all of it) there until the peak subsides.

This is often called elastic computing, referring to cloud computing’s ability to increase or decrease the number of nodes in use more or less on the fly, usually in response to fluctuations in demand.  You pay for the cloud services when you really need them, then turn them off and pay nothing when you don’t.

There are two problems with the plan as described by Maarten:

  1. Getting your data into the cloud. If your web app doesn’t require enormous data on the server, you can probably get by with copying the data up to the cloud as part of the transition script and copying it back when you return the service to your own data center after the peak load has passed (see the sketch after this list).  The problem is that copying all that data takes time, which lengthens the cutover of your service to the Azure cloud.
    Another option is to give your cloud app access to your internal databases over IPSec-secured back-links into your private network datacenter (the Azure feature codenamed Sydney).  In this mode, you’re using the cloud as your web front-end to handle the brunt of your peak load.  For this to work, you’ll need to verify that your backend is not the bottleneck under peak load.
  2. Azure takes WAY too long to spin up a new hosted service for on-demand elastic computing to really work. Azure takes 15 to 20 minutes to spin up even a small, simple hosted service; large or complex deployments take longer. 
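
To make option 1 concrete, here’s a rough sketch of what such a transition script might look like.  Everything in it (sync_data_to_cloud, point_traffic_at, and friends) is a hypothetical placeholder, not a real Azure API; the point is the ordering of the steps, and that the data copy is what stretches out the cutover when the dataset is big.

    # Hypothetical sketch of a "shunt to Azure" transition script.  The helper
    # functions are placeholders, not real Azure APIs -- what matters is the
    # ordering: data goes up before traffic does, and comes back down after.

    import time

    def sync_data_to_cloud():
        # Placeholder: copy the app's working data up to cloud storage
        # (blob storage, a cloud-hosted database import, etc.).
        print("copying data up to the cloud...")

    def start_cloud_instances(count):
        # Placeholder: bring the cloud deployment up to 'count' instances.
        print("starting %d cloud instances..." % count)

    def point_traffic_at(target):
        # Placeholder: flip DNS / load-balancer rules to the given target.
        print("routing traffic to %s" % target)

    def sync_data_back_home():
        # Placeholder: copy any data written in the cloud back on-premises.
        print("copying data back to the local datacenter...")

    def shunt_to_cloud(instances=4):
        sync_data_to_cloud()           # the slow step when the dataset is large
        start_cloud_instances(instances)
        point_traffic_at("cloud")

    def return_home():
        point_traffic_at("local")
        time.sleep(30)                 # let in-flight cloud requests drain
        sync_data_back_home()
        start_cloud_instances(0)       # shut the cloud deployment back down

    if __name__ == "__main__":
        shunt_to_cloud()
        # ... peak load passes ...
        return_home()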

When your monitoring starts to notice that your in-house servers are having trouble keeping up with inbound web requests, can you really survive the 30 minutes or so it takes before you can start shunting traffic to a newly deployed Azure service? Doubtful.  Many a website has been brought down within 10 minutes of a mention on Slashdot or MSN.

What might work better for dynamic peak load shaving is to keep one instance of your web app running in Azure – a hot standby – and ramp up the number of available nodes by issuing an update to the service config when you need more compute power.  Azure can update the configuration of an existing service much faster than it can deploy a completely new service.
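
The instance count lives in the service’s ServiceConfiguration (.cscfg) file, and the Service Management API has a Change Deployment Configuration operation you can drive from a script to push a new config at the running deployment.  Below is a rough, unverified sketch of that call using Python 2’s httplib and a management certificate; the subscription ID, service and role names, certificate paths, and x-ms-version value are all assumptions you’d want to check against the API documentation.

    # Rough sketch: bump the instance count of an existing Azure hosted service
    # by POSTing a new ServiceConfiguration via the Service Management API's
    # "Change Deployment Configuration" operation.  Subscription ID, service
    # and role names, cert paths, and the x-ms-version value are assumptions.

    import base64
    import httplib

    SUBSCRIPTION_ID = "your-subscription-guid"   # assumption
    SERVICE_NAME    = "myhostedservice"          # assumption
    ROLE_NAME       = "WebRole1"                 # assumption
    CERT_FILE       = "management-cert.pem"      # management certificate (assumption)
    KEY_FILE        = "management-key.pem"

    CSCFG_TEMPLATE = """<?xml version="1.0"?>
    <ServiceConfiguration serviceName="%(service)s"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="%(role)s">
        <Instances count="%(count)d" />
      </Role>
    </ServiceConfiguration>"""

    def scale_to(instance_count):
        cscfg = CSCFG_TEMPLATE % {"service": SERVICE_NAME,
                                  "role": ROLE_NAME,
                                  "count": instance_count}
        body = ("<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">"
                "<Configuration>%s</Configuration>"
                "</ChangeConfiguration>") % base64.b64encode(cscfg)

        conn = httplib.HTTPSConnection("management.core.windows.net",
                                       cert_file=CERT_FILE, key_file=KEY_FILE)
        path = ("/%s/services/hostedservices/%s/deploymentslots/production/"
                "?comp=config") % (SUBSCRIPTION_ID, SERVICE_NAME)
        conn.request("POST", path, body,
                     {"x-ms-version": "2010-10-28",   # assumption: API version
                      "Content-Type": "application/xml"})
        response = conn.getresponse()
        print("%s %s" % (response.status, response.reason))

    if __name__ == "__main__":
        scale_to(8)   # ramp the hot standby up to 8 instances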

There are other elastic computing scenarios that work much better with Azure.  If your service is only used during certain well-known periods of the day (live stock market analysis from 9am to 4pm ET, for example), you can automate shutting the service down during periods of non-use and starting it back up before the workday begins, and reduce your hosting costs by nearly 50% compared to running the service all day and all night. 
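
A minimal sketch of that kind of scheduled on/off check, run every few minutes from cron or Task Scheduler.  The window times assume the machine’s clock is on US Eastern time, and start_service / stop_service are placeholders, not real Azure calls.

    # Scheduled on/off check for a "market hours only" service.  Assumes the
    # local clock is US Eastern time; start_service / stop_service are
    # placeholders for whatever actually deploys or suspends the service.

    from datetime import datetime, time

    SERVICE_OPEN  = time(8, 30)    # start early enough to absorb Azure's spin-up time
    SERVICE_CLOSE = time(16, 30)   # a little past the 4pm market close

    def start_service():
        print("bring the Azure deployment up")    # placeholder

    def stop_service():
        print("take the Azure deployment down")   # placeholder

    def check(now=None):
        now = now or datetime.now()
        in_window = (now.weekday() < 5 and
                     SERVICE_OPEN <= now.time() <= SERVICE_CLOSE)
        if in_window:
            start_service()
        else:
            stop_service()

    if __name__ == "__main__":
        check()   # run this every few minutes from a scheduler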

If you can anticipate peak load periods, you can spin up your Azure cloud service before the wave hits.  For example, Amazon.com has observed surges in web activity on their retail site that correspond directly to lunch time in the Eastern, Central, and Pacific time zones, and again right around the end of the workday in each time zone.  It would be pretty straightforward to scale up your nodes in anticipation of these well-known, predictable peak load periods, and scale them back afterwards. 
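
For example, the schedule could be as simple as a table mapping the hour of the day to a target instance count, consulted an hour ahead of each surge.  The hours and instance counts below are made-up numbers purely for illustration.

    # Illustrative schedule for predictable lunch-time surges: map the UTC hour
    # to a target instance count.  Noon Eastern/Central/Pacific is roughly
    # 16:00, 17:00, and 19:00 UTC during daylight saving time; the counts here
    # are made-up numbers for illustration only.

    BASELINE = 2

    TARGETS_BY_UTC_HOUR = {
        16: 6, 17: 6,        # lunch time, Eastern and Central
        19: 5,               # lunch time, Pacific
        21: 6, 22: 6,        # end of the workday, Eastern and Central
        0: 5,                # end of the workday, Pacific
    }

    def target_instances(utc_hour):
        return TARGETS_BY_UTC_HOUR.get(utc_hour, BASELINE)

    if __name__ == "__main__":
        from datetime import datetime, timedelta
        # Push the config update an hour ahead of the surge so the extra
        # instances are already running when the wave hits.
        upcoming_hour = (datetime.utcnow() + timedelta(hours=1)).hour
        print("scale role to %d instances" % target_instances(upcoming_hour))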

The key point is that this type of scaling happens on a schedule, completely independent of real-time load analysis, so Azure’s long spin-up time is not an issue.
