Amazon Beefs Up EC2 With New Options
Posted by Bob Warfield on October 16, 2007
I’ve been a big fan of Amazon’s Web Services for quite a while and attended their Startup Project, an afternoon spent seeing what the platform can do and hearing from entrepreneurs who’ve built on this utility-computing fabric. Read my writeup on the Startup Project for more. Amazon has been steadily rolling out improvements, such as the addition of SLAs for the S3 storage service. Today, there is big news in the Amazon EC2 camp:
Amazon has just announced two new instance types for their EC2 utility computing service. The original type will continue to be available as the “small” type. The “large” type has four times the CPU, RAM, and disk storage of the small, while the “extra large” has eight times. The large and extra large also sport 64-bit CPUs. Supersize your EC2!
Why do this? Because the original small instance, with just 1.7 GB of RAM, was a tad lightweight for database activity, while the extra large at 15 GB is about right. Imagine a cluster of extra large instances running memcached and you can see how this is going to dramatically improve the possibilities for hosting large sites.
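To make the memcached point concrete, here’s a back-of-the-envelope sketch of how many instances it takes to hold a given cache in RAM. The RAM figures (1.7 GB small, 15 GB extra large) are the ones quoted above; the 60 GB cache target is a made-up example, not anything from Amazon.

```python
import math

# RAM per instance type, in GB, per the figures quoted in this post.
RAM_GB = {"small": 1.7, "extra_large": 15.0}

def instances_needed(cache_gb: float, instance_type: str) -> int:
    """Instances required to hold cache_gb of memcached data in RAM."""
    return math.ceil(cache_gb / RAM_GB[instance_type])

# A hypothetical 60 GB cache: 36 smalls vs. only 4 extra larges.
print(instances_needed(60, "small"))        # 36
print(instances_needed(60, "extra_large"))  # 4
```

Fewer, bigger nodes also means less connection overhead and fewer places for a cache shard to fail.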
One of the neat things about this announcement is the pricing: it scales linearly with capacity. Whereas a small instance costs 10 cents per instance-hour, the extra large has 8x the capacity and costs 8 × 10 cents, or 80 cents per hour.
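The linear scaling makes cost estimation trivial. A minimal sketch, using the $0.10/hour small-instance rate and the 4x/8x multipliers from the announcement (this is illustrative arithmetic, not an official price calculator):

```python
# Illustrative sketch of EC2's linear pricing: rate and multipliers are
# the figures quoted in this post, not an official Amazon catalog.
SMALL_RATE = 0.10  # dollars per instance-hour for the "small" type

CAPACITY_MULTIPLIER = {
    "small": 1,        # 1.7 GB RAM, the original instance
    "large": 4,        # 4x CPU, RAM, and disk; 64-bit
    "extra_large": 8,  # 8x CPU, RAM, and disk; 64-bit
}

def hourly_cost(instance_type: str, count: int = 1) -> float:
    """Hourly cost in dollars for `count` instances of the given type."""
    return SMALL_RATE * CAPACITY_MULTIPLIER[instance_type] * count

# One extra large: 8 x $0.10 = $0.80 per hour.
print(hourly_cost("extra_large"))
```

So a four-node extra large memcached cluster runs $3.20/hour, which you pay only while it’s running.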
What’s next? These new instances open a lot of possibilities, but Amazon still doesn’t have painless persistence for databases like MySQL. If you are running MySQL on an extra large instance and the server goes down for whatever reason, all the data on it is lost, and you have to rebuild a new machine from some form of hot backup or failover. That exercise has been left to the user. It’s doable: in any data center you have to plan for what happens if a disk totally crashes and no data can be recovered. However, folks have been vocally requesting a better solution from Amazon, one where the data doesn’t go away and the machine can be rebooted intact. The EC2 folks at the Startup Project told me to expect three related announcements before the end of the year. I’m guessing this is the first, and two more will follow.
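For the curious, the DIY persistence described above usually boils down to something like the following sketch: dump the database on a schedule and push the snapshot to S3, since anything on a dead instance is gone. The bucket name is hypothetical, and it assumes mysqldump and an S3 upload tool (s3cmd here) are installed and configured; treat it as a shape, not a recipe.

```shell
#!/bin/sh
# Hypothetical periodic-backup sketch: snapshot MySQL to S3 so a dead
# EC2 instance doesn't take your data with it. Bucket name and paths
# are made up for illustration.
BUCKET=my-backup-bucket
STAMP=$(date +%Y%m%d-%H%M)

mysqldump --all-databases --single-transaction | gzip \
    > "/tmp/mysql-$STAMP.sql.gz"
s3cmd put "/tmp/mysql-$STAMP.sql.gz" "s3://$BUCKET/mysql-$STAMP.sql.gz"
rm -f "/tmp/mysql-$STAMP.sql.gz"
```

Run it from cron every few hours and your worst case is losing the interval since the last dump, which is a long way from painless but beats losing everything.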
There’s tremendous excitement right now around these kinds of offerings. They virtualize the data center to reduce the cost and complexity of setting up the infrastructure for web software. They let you flex capacity up or down and pay as you go. Amazon is not the only such option; I’ll be reporting on some others shortly. It’s hard to see how it makes sense to build out your own data center without the aid of one of these services anymore.