Amazon Web Services Continues to Mature, Google to Follow Soon?
Posted by Bob Warfield on November 7, 2007
Amazon recently announced the availability of S3 in Europe, something customers have been clamoring for. This both reduces latency for US-based companies serving European users and makes it easier for European companies to embrace the service. Presumably Amazon itself has datacenters in all sorts of nifty places, and an East Asian center would be a worthwhile next step as the service continues to roll out. The new capability works via a “location constraint” that identifies where the S3 storage bucket is to be located. The default remains US datacenters.
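For the curious, the mechanism is simple: the PUT Bucket call takes an optional CreateBucketConfiguration body naming the region. Here's a minimal sketch (in Python) of what building that payload looks like; this follows my reading of Amazon's REST API docs, so treat it as illustrative rather than gospel:

```python
def bucket_config_xml(location=None):
    """Build the CreateBucketConfiguration payload for S3's PUT Bucket call.

    Passing no location sends an empty body, which leaves the bucket in
    the default US datacenters; passing "EU" pins it to Europe.
    """
    if location is None:
        return ""  # empty body -> default US location
    return (
        "<CreateBucketConfiguration>"
        f"<LocationConstraint>{location}</LocationConstraint>"
        "</CreateBucketConfiguration>"
    )

# A European bucket request carries this body:
print(bucket_config_xml("EU"))
```

The nice thing about this design is that it extends naturally: adding East/West Coast US locations later would just mean accepting new constraint values.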
When I attended Amazon Startup Project, there was a lot of interest among the developers present in gaining a little control over exactly where the machine resources they were purchasing from Amazon materialized. This manifested at a couple of levels. First was the ability to access multiple data centers for redundancy. The “location constraint” option could be expanded to include East and West Coast US locations, for example. The second request was the ability to specify that machines could be in the same datacenter but ought to be on separate racks, again to increase resilience against a failure that takes out a whole rack. Note that S3 already has a lot of this kind of redundancy built in; it is more EC2 (the ability to buy raw Linux machines) that we’re dealing with here. I imagine Amazon will get to all of this in the fullness of time. The requests are neither unreasonable nor all that difficult to implement.
Meanwhile, Red Hat has announced SaaS pricing for their Red Hat Enterprise Linux when offered on EC2, an interesting development. It sounds like a good thing, but I’m still trying to decide whether the pricing makes sense. It’s $19/month per user plus 21 to 94 cents per compute hour. In exchange you get tech support and access to all the RHEL (Red Hat Enterprise Linux) apps. My problem is that they want to charge $19/month per user plus an additional hourly charge that’s as much as Amazon wants for the EC2 hardware itself (at least in the “small” configuration). That sounds like a lot, particularly the “per user” piece. Perhaps they meant to say “per server,” but that isn’t how the release is worded. We’ll have to wait and see if there is clarification later. The bottom line is that systems software companies are starting to take notice and view Amazon as another platform to support.
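To put rough numbers on that, here's the arithmetic for one server running around the clock, assuming Amazon's published $0.10/hour small-instance rate and the low end of Red Hat's add-on pricing (both figures as I understand them; check current price lists before relying on this):

```python
HOURS_PER_MONTH = 730  # average hours in a month (8,760 / 12)

ec2_small_rate = 0.10   # Amazon's small EC2 instance, $/hour
rhel_hourly    = 0.21   # Red Hat's low-end hourly add-on, $/hour
rhel_monthly   = 19.00  # Red Hat's flat monthly fee

ec2_only  = ec2_small_rate * HOURS_PER_MONTH
with_rhel = ec2_only + rhel_hourly * HOURS_PER_MONTH + rhel_monthly

print(f"EC2 small alone:  ${ec2_only:.2f}/month")
print(f"With RHEL add-on: ${with_rhel:.2f}/month")
```

Even at the cheap end, the Red Hat hourly charge is double what the underlying hardware costs, and the total is more than triple the bare EC2 price, which is why the pricing gives me pause.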
Speaking of systems software, we’re still waiting to see a persistent database solution. When I attended Amazon Startup Project, they mentioned some sort of persistent database support would be available by the end of this year, and at least one of the entrepreneurs who spoke said they were beta testing the solution. This is a gaping hole in the offering, and one I’m sure they’ll be filling as soon as they can. What remains to be seen is whether they offer a solution developed by Amazon or whether a partner steps up. For example, MySQL could offer something along the lines of what Red Hat is doing. Looking at the Red Hat pricing, and thinking about some of the things I’ve heard about MySQL (reportedly only 1 in 10,000 “customers” actually pays for it), I wonder whether this sort of thing gives these vendors an opportunity to deal themselves a new hand in how they engage with customers. It wouldn’t be the first time a transition to a SaaS model radically changed the rules. We’ll have to see how it all works out.
Meanwhile, Amazon keeps ticking along at a pretty good pace of announcements around the service. Recall we’ve gotten bigger servers and SLAs, two things that were much in demand around the time I went to Startup Project. On the SLA front, Amazon is coming through with flying colors. Read/Write Web tells us they’re hitting four 9s, an extra “9” beyond what their SLA promises. That’s a solid number that even big companies struggle to achieve.
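For a sense of what that extra “9” actually buys, here's the allowed-downtime arithmetic over a 30-day month:

```python
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def downtime_minutes(availability):
    """Minutes of permitted downtime per 30-day month at a given availability."""
    return MINUTES_PER_MONTH * (1 - availability)

for label, avail in [("three 9s (99.9%)", 0.999), ("four 9s (99.99%)", 0.9999)]:
    print(f"{label}: {downtime_minutes(avail):.1f} minutes/month")
```

Three 9s allows about 43 minutes of downtime a month; four 9s allows barely four. Each additional “9” cuts the budget by a factor of ten, which is why that last one is so hard to deliver.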
There are two areas of challenge I’ll be interested to watch as events continue to unfold. First, there remains a suspicion that Amazon Web Services is largely a remaindering service, and that Amazon isn’t even trying to make money with it but rather to recover the cost of low server utilization in the rest of its business. If that’s true, then at some point service levels will degrade as the excess capacity is used up and Amazon fails to invest in keeping ahead of the curve. While it’s possible, I’m skeptical. I think the current pricing actually does let them make money. There is certainly a fair amount of premium being charged relative to other hosting services, and they must have relentlessly driven service costs down by investing in nearly total automation of the infrastructure. If the service is profitable or nearly so, then we can count on them to keep investing in it as it grows.
This brings me to my second thought. So far, Amazon has largely focused on delivering capabilities it had to build for its core business anyway. At some point, the average customer’s needs will deviate from Amazon’s view of how web architectures should be built. They’ve said, for example, that there are no plans to offer their key-value storage system Dynamo as a service. There may be a lot of reasons for this. Ironically, it may not be multitenant, so they may fear opening it up is too risky for the overall business: not enough isolation between tenants. An alternative view is that for all but the very largest web sites, a service like Dynamo is simply unnecessary. Most sites want MySQL or the equivalent. Whether we view that choice as enlightened or not is immaterial. Many very large web sites get built around vanilla relational technology and wind up working fine without anything as exotic as Dynamo.
My question on all of that is how much further will Amazon go? If they’re just milking technologies they’ve built that have broad applicability, they will have a decision to make. That decision is how heavily to invest in technologies to be delivered via Web Services that have no benefit for the rest of Amazon’s business. My guess is they’ll go slowly on such investments, preferring to see partners develop the technologies. We’ll see an interesting first test when we see what happens around persistent database support.
All in all, the service continues to have a bright future. People who want to directly equate the raw cost of servers with the cost of on-demand utility computing are not making an apples-to-apples comparison, to my mind. They miss a lot of the benefits that services like Amazon’s offer that are just not available in a raw server or even a virtual appliance setting. Businesses have to decide how much they value those services, but early indications for S3, Amazon’s highest value-add service, are very positive. If you don’t see the additional value, go with a different alternative. There are many options in today’s market, with more all the time.
Speaking of more all the time, I’ve been hearing rumblings that Google may announce their equivalent shortly. I’m all ears for that one! I will be interested to see if it’s more Google vaporware (i.e. you won’t really be able to use it until end of 2008) or if it’s something that’s ready to go immediately.
Update on Red Hat: the add-on pricing is per server, not per user, as I had speculated.