The Eternal SaaS Loss of Control Angst
Posted by Bob Warfield on November 11, 2007
Just caught another wringing of the SaaS-loss-of-control hands. Niall Sclater, via Aloof Architectures, is worried that if teachers use 3rd-party sites (the “lots of small pieces” strategy) instead of a purpose-built and maintained system running in the University data center, it will be a disaster when a 3rd-party site goes down, because students won’t be able to get their work done on time. In this case the panic is over SlideShare, which evidently went down for some maintenance.
This sort of thing comes up over and over again where SaaS is concerned. What will we do if the service is down? We have no control, we’ll be stuck!
I’ve got bad news: you’re stuck whether you rely on SaaS or your own On-premises solution. Either one can go down. The question is, “Which one is more likely to go down?”, closely followed by, “Which one will be faster bringing things back up?”
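Those two questions can be made concrete with a little back-of-the-envelope arithmetic. Here's a rough sketch (the numbers are purely illustrative, not data from any vendor): steady-state availability depends both on how often a service fails (MTBF, mean time between failures) and on how fast it recovers (MTTR, mean time to repair).

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the service is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative scenario A: fails about monthly, but recovers in 30 minutes.
frequent_fast = availability(mtbf_hours=720, mttr_hours=0.5)

# Illustrative scenario B: fails only quarterly, but takes a full day to restore.
rare_slow = availability(mtbf_hours=2160, mttr_hours=24)

print(f"fails monthly, 30-min recovery:  {frequent_fast:.4%}")
print(f"fails quarterly, 24-hr recovery: {rare_slow:.4%}")
```

With these made-up numbers, the service that fails more often but recovers quickly ends up more available than the one that fails rarely but recovers slowly, which is why the second question matters at least as much as the first.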
Having seen both sides of this fence, I can’t help but place my bets on SaaS. Why? Because the SaaS side can bring the full resources of the people who wrote the software, together with the foremost experts on running it, to bear on the problem. The SaaS vendor can invest in actually doing all of the things that others like to talk about but often don’t implement very well. Things like multiple redundant offsite backups and hot failover capabilities. I’ve lived through these fire drills, and let me tell you, IT was almost never going to get the software back up and running without help unless the problem was relatively minor. Escalations to the vendor are a matter of course. Unfortunately, if the vendor is at the other end of the phone line trying to talk the customer’s IT people through it, things are an order of magnitude harder than if the software is running in the vendor’s own datacenter where they can touch it and make it work.
Here’s another one to consider. The SaaS vendor is in the business of uptime to a much greater degree than in-house IT. Why? Because a multitenant stumble carries a much greater cost of failure than some temporary downtime at a single customer. This stuff has to work for the SaaS vendor, and most are turning in admirable scores on their SLAs.
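It's worth translating those SLA percentages into something tangible. A quick sketch (simple arithmetic, no vendor-specific figures) of how much downtime per year each common SLA tier actually permits:

```python
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years for simplicity

def downtime_budget_hours(sla: float) -> float:
    """Hours per year a service may be down while still meeting its SLA."""
    return (1 - sla) * HOURS_PER_YEAR

for sla in (0.99, 0.999, 0.9999):
    print(f"{sla:.2%} SLA allows about {downtime_budget_hours(sla):.2f} hours/year of downtime")
```

The jump from "two nines" to "three nines" is the difference between roughly 87 hours and under 9 hours of downtime a year, which gives a feel for what the better SaaS scorecards actually mean.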
Consider also the testing that goes on with SaaS, and the ability of the SaaS vendor to head off problems for customers before they materialize across much of the user base. At any given time, many customers will be doing many different things with the software. Much more than a single customer’s usage could hope to cover. Much more than in-house QA can hope to cover. As such, problems get flushed out quickly. The savvy SaaS vendor is fixing those problems inline and rolling the fixes out to everyone before most customers ever see the problem. I’ve talked to On-premises companies that report 40-70% of tech support problems are fixed in the latest release. Customers are reporting those problems because they’re not on the latest release. With SaaS, everyone stays with the latest release, so a huge number of problems are never experienced.
The next time you’re worrying about a loss of control, ask yourself: who do you want to have in control? The world’s foremost experts on the software you’re using? Or some folks who may be quite good, but who are far less experienced? The answer is really not that hard once you think about it.