SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

How Does Virtualization Impact Hosting Providers? (A Secret Blueprint for Web Hosting World Domination)

Posted by Bob Warfield on August 16, 2007

I’ve written in the past about data centers growing ever larger and more complex in the era of SaaS and Web 2.0.  My friend Chris Cabrera, CEO of SaaS provider Xactly, recently commented along similar lines  when asked about the VMWare IPO. 

Now Isabel Wang who really understands the hosting world has written a great post on the impact of virtualization (in the wake of VMWare’s massive IPO) on the web hosting business.  I took away several interesting messages from Isabel’s post:

- Virtualization will be essential to the success of Hosters because it lets them offer their service more economically by upping server utilization.  It’s an open question whether those economies are passed to the customer or to the bottom line.

- These technologies help address the performance and scalability issues that keep a lot of folks awake at night.  Amazon’s Bezos and Microsoft’s Ray Ozzie realize this, and that’s why they’re rushing full speed ahead into this market.  They’ve solved the problems for their own organizations and see a great opportunity to help others and make money along the way.

- The market has moved on from crude partitioning techniques to much more sophisticated and flexible approaches.  Virtualization in data centers will be layered, and will involve physical server virtualization, a utility computing fabric comprised of pools of servers across multiple facilities, application frameworks such as Amazon Web Services, and Shared Services such as identity management.  This complexity tells us the virtualization wars are just beginning and VMWare isn’t even close to locking it all up, BTW.

- This can all be a little threatening to the established hosting vendors.  Much of their expertise is tied up in building racks of servers, keeping them cool, and hot swapping the things that break.  The new generation requires them to develop sophisticated software infrastructure, which is not something they’ve been asked to do in the past.  It may wind up being something they don’t have the expertise to do either.  These are definitely the ingredients of paradigm shifts and disruptive technologies!

We’re talking about nothing less than utility computing here, folks.  It’s a radical step-up in the value hosting can offer, and it fits what customers really want to achieve.  Hosting customers want infinite variability, fine granularity of offering, and real-time load tracking without downtime like the big crash in San Fran that recently took out a bunch of Web 2.0 companies.  They want help creating this flexibility in their own applications.  They want billing that is cost effective and not monolithic.  Billing that lets them buy (sorry to use this here) On-demand.  After all, their own businesses are selling On-demand and they want to match expenses to revenue as closely as possible to create the hosting equivalent of just in time inventory. Call it just in time scaling or just in time MIPS.  Most of all, they want to focus their energies on their distinctive competencies and let the hoster solve these hard problems painlessly on their behalf.

When I read what folks like Amazon and the Microsofties have to say about it, I’m reminded of the Intel speeches of yore  that talked about how chip fabs would become so expensive to build that only a very few companies would have the luxury of owning them and Intel would be one of those companies.  Google, for example, spends $600 million on each data center.  Big companies love to use big infrastructure costs to create the walls around their very own gardens!  Why should the hosting world be any different?

The trouble is, the big guys also have a point.  To paraphrase a particular blog title, “Data centers are a pain in the SaaS”.  They are a pain in the Web 2.0 too.  Or, as Amazon.com Chief Technology Officer, Werner Vogels said, “Building data centers requires technologists and engineering staff to spend 70% of their efforts on undifferentiated heavy lifting.”

Does this mean the big guys like Amazon and Microsoft (and don’t forget others like Sun Grid) will use software layers atop their massive data centers to massively centralize and monopolize data centers?  Here’s where it gets interesting, and I see winning strategies for both the largest and smaller players.

First, the big players worry about how to beat each other, not the little guys.  Amazon knows Microsoft will come gunning for them, because they must.  Can Amazon really out-innovate Microsoft at software?  Maybe.  The world needs an alternative to Microsoft anyway.  But the answer when competing against players like Microsoft and IBM has historically been to play the “Open System vs. Monolithic Proprietary System” card.  It has worked time and time again, even allowing the open system to beat better products (sorry Sun, the Apollo was better way back when!).

How does Amazon do this to win the massive data center wars?  It’s straightforward:  they place key components of Amazon Web Services into the Open Source community while keeping critical gatekeeping functions closed and under their control.  This lets them “franchise” out AWS to other data centers.  If you are a web hoster and you can offer to resell capacity that is accessible with Amazon’s APIs, wouldn’t that be an attractive way to quit worrying so much about it?  Wouldn’t it make the Amazon API dramatically more attractive if you knew there would be other players supporting it?

Amazon, meanwhile, takes a smaller piece of a bigger pie.  They charge their franchisees for the key pieces they hold onto to make the whole thing work.  Perhaps they keep the piece needed to provision a server and get back an IP and charge a small tax to bring a new server for EC2 or S3 online in another data center.  How about doing the load balancing and failover bits?  Wouldn’t you like it if you could buy capacity accessed through a common API that can fail over to any participating data center in the world?  How about being able to change your SaaS datacenter to take advantage of better pricing simply by reprovisioning any or all of the machines in your private cloud to move?  How about being able to tell your customers your SaaS or Web 2.0 offering is that much safer for them to choose because it is data center agnostic?
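The failover idea above can be sketched in a few lines. This is a hypothetical illustration, not Amazon's actual API: the provider names, the `provision_server` method, and its return shape are all invented. The point is that once every franchisee speaks the same interface, the client can treat participating data centers as interchangeable.

```python
# Hypothetical sketch of data-center-agnostic provisioning. In the blueprint,
# the provisioning call is the common, open API; the gatekeeping pieces
# (billing, metering) would stay with Amazon. All names here are invented.

class ProviderDown(Exception):
    """Raised when a data center cannot fulfill a provisioning request."""

class Provider:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self._next_ip = 1

    def provision_server(self):
        # Provision a server and get back an IP, as described in the post.
        if not self.healthy:
            raise ProviderDown(self.name)
        ip = f"10.0.0.{self._next_ip}"
        self._next_ip += 1
        return {"provider": self.name, "ip": ip}

def provision_anywhere(providers):
    """Return a server from the first participating data center that responds."""
    for p in providers:
        try:
            return p.provision_server()
        except ProviderDown:
            continue  # transparent failover to the next franchisee
    raise RuntimeError("no participating data center available")
```

A customer whose primary data center is down simply lands in the next one on the list, without changing any application code.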

BTW, any of the big players could opt to play this trump card.  It just means getting out of the “I want to own the whole thing” game of chicken and taking that smaller piece of a bigger pie.  Would you buy infrastructure from Google or Yahoo if they offered such a deal?  Why not?  Whoever opens their system gains a big advantage over those who keep theirs monolithic.  It answers many of the objections raised in an O’Reilly post about what to do if Amazon decides to get out of the business or has a hiccup.

Second, doesn’t that still mean the smaller players of less than Amazon/Google/Microsoft stature are out in the cold?  Not yet.  Not if they act quickly, before the software layers needed to get to first base become too deep and too many have adopted those layers.  What the smaller players need to do is immediately launch a collaborative Open Source project to develop Amazon-compatible APIs that anyone can deploy.  Open Source trumps Open System, which trumps Closed Monoliths.  It leverages a larger community to act in their own enlightened self-interest to solve a problem no single one of these players can probably afford to solve on their own.  Moreover, this is the kind of problem the Uber Geeks love to work on, so you’ll get some volunteers.

Can it be done?  I haven’t looked at it in great detail, but the APIs look simple enough today that I will argue it is within the scope of a relatively near-term Open Source initiative.  This is especially true if a small consortium got together and started pushing.  One comment from that same O’Reilly blog post said, “From an engineering standpoint, there’s not much magic involved in EC2.  Will you suffer for a while without the nifty management interface? Sure. Could you build your own using Ruby or PHP in a few days? Yep.”  I don’t know if it’s that easy, but it sure sounds doable.  By the way, the “nifty management interface” is another gatekeeper Amazon might hold on to and monetize.
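To make the “not much magic” claim concrete, here is a toy sketch of what the core of an EC2-style management interface amounts to: bookkeeping over a pool of instance slots. The class, method names, and states are all illustrative inventions, not Amazon's actual interface, and a real hoster would of course boot an actual VM where this sketch only records state.

```python
# Toy sketch of an EC2-style management layer. Everything here is
# illustrative: the real work (booting a Xen guest, wiring the network)
# is elided, leaving only the bookkeeping that the API surface exposes.

import uuid

class InstanceManager:
    def __init__(self, capacity):
        self.capacity = capacity
        self.instances = {}  # instance_id -> {"image": ..., "state": ...}

    def run_instance(self, image_id):
        """Launch an instance from a machine image, returning its id."""
        running = [i for i in self.instances.values() if i["state"] == "running"]
        if len(running) >= self.capacity:
            raise RuntimeError("capacity exhausted")
        instance_id = f"i-{uuid.uuid4().hex[:8]}"
        # A real hoster would start a virtual machine here; we record state.
        self.instances[instance_id] = {"image": image_id, "state": "running"}
        return instance_id

    def describe_instances(self):
        """Report the current state of every known instance."""
        return dict(self.instances)

    def terminate_instance(self, instance_id):
        self.instances[instance_id]["state"] = "terminated"
```

Nothing here is proprietary magic, which is exactly why a consortium could plausibly standardize the interface.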

But wait, won’t Amazon sue?  Perhaps.  Perhaps it tips their hand to Open Source it themselves.  Legal protection of APIs is hard.  The players could start from a different API and simply build a connector that lets their different API also work seamlessly with Amazon and arrive at the same endpoint: developers who write to that API can use Amazon or any other provider that supports it.
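The connector idea is just the adapter pattern: a hoster keeps its own native API and ships a thin shim that translates Amazon-flavored calls onto it. Both interfaces below are invented for illustration; neither is a real provider's API.

```python
# Hypothetical sketch of the connector approach: a hoster's native API
# stays as-is, and a shim re-shapes EC2-flavored calls onto it, so code
# written against the common surface runs on either backend.

class NativeHostingAPI:
    """A hoster's own (hypothetical) provisioning interface."""
    def create_machine(self, template):
        return {"machine": f"native-{template}", "state": "up"}

class AmazonStyleShim:
    """Exposes an EC2-flavored verb, delegating to the native API."""
    def __init__(self, backend):
        self.backend = backend

    def run_instances(self, image_id):
        result = self.backend.create_machine(image_id)
        # Re-shape the native response into the common vocabulary.
        return {"instanceId": result["machine"], "state": result["state"]}
```

A developer writing against the shim's vocabulary never needs to know which provider sits behind it, which is the whole point of the open-API play.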

You only need three services to get going:  EC2, S3, and a service Amazon should have provided that I will call the “Elastic Data Cloud”.  It offers MySQL without the pain of losing your data if the EC2 instance goes down.  By the way, this is also something a company bent on dominating virtualization or data center infrastructure could undertake; it is something a hardware vendor could build and sell to favor their hardware; and it’s something some other player could go after.  The MySQL service, for example, would make sense for MySQL themselves to build.  One can envision similar services and their associated machine images becoming a requirement after some point if you want to sell to SaaS and Web companies.  Big Enterprise might use this set of APIs and infrastructure to remainder unused capacity in their data centers (unlikely, they’re skittish), help them manage their data centers (yep, they need provisioning solutions), use outsourcers to get apps distributed and hardened for disaster recovery, and the like.
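The essence of the hypothetical “Elastic Data Cloud” is separating the database's working state, which dies with the instance, from a checkpoint kept in a durable object store that outlives any one machine. This sketch uses an in-memory stand-in for the store; every name in it is invented for illustration, and a real service would obviously use MySQL's own replication or binary logs rather than JSON snapshots.

```python
# Sketch of the "Elastic Data Cloud" idea: working data lives on an
# ephemeral instance, but checkpoints go to a durable S3-like store, so
# losing the instance does not lose the data. All names are invented.

import json

class DurableStore:
    """Stand-in for an object store that outlives any one instance."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

class ElasticDatabase:
    def __init__(self, store, snapshot_key="db-snapshot"):
        self.store = store
        self.snapshot_key = snapshot_key
        self.rows = {}  # ephemeral: dies with the instance

    def write(self, key, value):
        self.rows[key] = value

    def checkpoint(self):
        # Persist current state beyond the life of this instance.
        self.store.put(self.snapshot_key, json.dumps(self.rows))

    @classmethod
    def recover(cls, store, snapshot_key="db-snapshot"):
        """Rebuild the database on a fresh instance from the last checkpoint."""
        db = cls(store, snapshot_key)
        db.rows = json.loads(store.get(snapshot_key))
        return db
```

When the instance hosting the database evaporates, a fresh one recovers from the last checkpoint, losing at most the writes since then.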

So there you have it, hosting providers, virtualizers, and software vendors:  a blueprint for world domination.  I hope you go for it. I’m building stuff I’d like to host on such a platform, and I’m sure others are too!

Note that the game is already afoot with Citrix having bought XenSource.  Why does this put things in play?  Because Amazon EC2 is built around Xen.  Hmmmm…

Some late breaking news: 

There’s been a lot of blogging lately over whether Yahoo’s support of Open Sourced Hadoop will help them close the gap against Google.  As ZDNet points out, “Open source is always friend to the No. 2 player in a market and always the enemy of the top dog.”  That’s basically my point on the Secret Blueprint for Web Hosting World Domination.


8 Responses to “How Does Virtualization Impact Hosting Providers? (A Secret Blueprint for Web Hosting World Domination)”

  1. barmijo said

    Interesting post, and as a co-founder of 3tera, the subject is near and dear to my heart.

    3tera faced a choice late in 2005 of whether we should raise money to build out our own data centers or work with partners. In those pre-EC2 days utility computing wasn’t a popular concept so it wasn’t a slam dunk signing up partners or raising capital. We chose to work with hosting providers because of their expertise in efficiently managing large numbers of servers and their entrepreneurial spirit. In this way we were able to make utility computing available with a choice of providers almost from day one. Today, our partners operate data centers with more than 100,000 servers in the US and Europe (not all with AppLogic yet) so the needs of even large users can be met quickly.

    As for standards, we took a different tack – we have no APIs. You can take virtually any Linux code and run it without modification. No scripts to figure out IP addresses. No figuring out how to run a database. Customers have run everything from LAMP to Oracle and Weblogic to video endecs. In addition, you don’t have to create scripts to manage images, as that approach ends up requiring you to create your own infrastructure management layer, which isn’t something that adds value to users’ businesses. Leaving that to AppLogic also allows the system to provide universal high-availability to all apps, overcoming hardware failures without user intervention.

    Last, and most importantly, you’ve made a critical assumption that isn’t really highlighted: that hosting a VM is the end game. We’ve already proven that much more is possible, including backups of entire distributed systems or even data center migrations with a single command – and we’re only getting started.

  2. smoothspan said

    Ben, thanks for your comment!

    RE: “you’ve made a critical assumption that isn’t really highlighted: that hosting a VM is the end game.”

    No such assumption was on my mind. In fact, my comment above is, “This complexity tells us the virtualization wars are just beginning and VMWare isn’t even close to locking it all up, BTW.”

    The end game assumption I do make is that the end game involves the following:

    – Really big players (MSFT/GOOG/AMZN et al) are going to be players. In their minds they have no choice because they see it as strategic. It is an opportunity to control a critical piece of the Internet Fabric. They want to take this business and they will play for keeps. They have enormous resources and patience, and they are relentless. They will seek to commoditize the space to destroy the profitability of smaller players, and they will do everything in their power to raise the expense side for other players too, as well as leveraging whatever unfair/monopolistic advantages they can muster. For example, they will persuade others to do their work for them. How long before all sorts of key software includes Amazon API support out of the box and Amazon partners do the rest of the infrastructure work you mention? I have seen this up close and personal: I was a general in the Office Wars, having built Quattro Pro for Borland and run Borland R&D at that pivotal time.

    – All small pure proprietary players will eventually come up against these big players. Having a better product is not enough to win unless you can win by getting big so quickly the implacable forces of the Dark Side can’t do their damage. Google accomplished this while MSFT and others were asleep at the switch. But Google adoption is almost perfectly frictionless. Switching data centers, let alone switching to a new utility computing fabric, is not frictionless. The Dark Side will have enough time.

    – The answer for the big guys that are not #1 or #2 or that see ahead is opening up. By opening up they increase their own footprint to be the biggest and trump their arch foes.

    – The answer, inevitably for small guys, is also to open up in some way. Many small guys can equal one big guy.

    – There is an alternative I failed to mention: Get bought by someone big enough to lend you big guy superpowers long enough so you get big. That’s the VMWare story. They could’ve IPO’d back when, they got priced, they negotiated with EMC and others and got bought for a premium on their then IPO pricing, and they used the resources of EMC to get big.

    There is some good news. There is still time yet. Until the big guys crack all the right codes (why the heck doesn’t Amazon have the MySQL service yet?!??), until one of the big guys opens up, and until a cabal of open small guys gets some momentum, the market will be relatively unchanged and exciting new ideas can flourish. But the chasm that must be crossed is becoming more visible day by day.

  3. barmijo said

    >>> Switching data centers, let alone switching to a new utility computing fabric is not frictionless.

    But it CAN be frictionless. For instance, a small e-commerce customer, SilkFair, moved their entire presence between providers in SF and Dallas with a single command in just two hours. I believe this is an example of just what you’re writing about.

    btw – MySQL runs perfectly in a virtual data center.

  4. tonylucas said

    Bob,

    Would love to talk to you more regarding this, we are one of the companies that you mention in your article taking Amazon etc on at their own game in building a utility platform. We are already beta testing the load balancing addition to this (which is being cried out for) and have started development on a ‘database cloud’ as well. We’ve also solved a number of problems with the platform that Amazon hasn’t yet (there is a comparison page on the website).

    We completely believe in the point you discussed regarding an open platform/implementation to allow flexibility and removal of any lock in scenario, and are keen to progress this further with anyone who’s interested.

    More info is available at http://www.flexiscale.com, also I’ll be in SF in October for the Web 2.0 Summit if you fancy a coffee 🙂

  5. […] Comments (RSS) « How Does Virtualization Impact Hosting Providers? (A Secret Blueprint for Web Hosting World Dom… […]

  6. […] can do it with a hotel, I can do it with a search engine!)?  In a manner similar to my “Web Hosting Plan for World Domination“, Yahoo could undertake a plan for “Search Engine World Domination”.  […]

  7. […] Scoble can do it with a hotel, I can do it with a search engine!)?  In a manner similar to my “Web Hosting Plan for World Domination“, Yahoo could undertake a plan for “Search Engine World Domination”.  Here’s how it would […]

  8. wehostia.com…

    […]How Does Virtualization Impact Hosting Providers? (A Secret Blueprint for Web Hosting World Domination) « SmoothSpan Blog[…]…
