SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for October, 2007

Multitenancy Can Have a 16:1 Cost Advantage Over Single-Tenant

Posted by Bob Warfield on October 28, 2007

Multitenancy is one of those things that has been more qualitative than quantitative.  True blue SaaS believers put it down as a must-have to even call an offering SaaS.  Those less devout are understandably somewhat skeptical.  The heretics will say that liberal use of modern virtualization technologies is good enough.  Everyone seems to agree that the purpose behind multitenancy is to lower costs for the SaaS vendor.  These savings are translated into lower costs to their customers and a better shot at profitability.

In this article, I wanted to try to quantify those savings as well as put forward a definition for multitenancy that I think is clearer than a lot of what I’ve seen out there.  You’ve got the short answer in the title of this post, but I’ll walk you through how I got to the 16:1 value. 

First, let’s try to get to a useful definition of multitenancy.  Wikipedia defines multitenancy as follows:

Multitenancy refers to the architectural principle, where a single instance of the software runs on a software-as-a-service (SaaS) vendor’s servers, serving multiple client organizations (tenants). Multitenancy is contrasted with a multi-instance architecture where separate software instances (or hardware systems) are set up for different client organizations. With a multitenant architecture, a software application is designed to virtually partition its data and configuration so that each client organization works with a customized virtual application instance.

This definition is fine up to the point where the word “partition” comes up with a link back to a discussion of LPARs which are defined as follows:

In computing, a logical partition, commonly called an LPAR, is a subset of a computer's hardware resources, virtualized as a separate computer. In effect, a physical machine can be partitioned into multiple LPARs, each housing a separate operating system.

There is a lot of confusion here that comes from conflating a particular notion of how multitenancy might be implemented (LPARs) with the concept itself.  I found the same thing reading SaaS-Man's "Myths of Multitenancy", where the "myths" are based on particular implementation assumptions.  Let's get one thing absolutely crystal clear before going further: there is no single implementation or architectural design pattern that can be called multitenancy.  Rather, multitenancy is an abstract concept rooted more in benefits than in features.

To get away from implementation-specific definitions (which are more examples than definitions), I am defining multitenancy as software that has the following properties from the perspective of three audiences seeking benefits:

For the SaaS Vendor:  Multitenancy is the ability to run multiple customers on a single software instance installed on multiple servers.  For the vendor,  operations can be performed at the level of instance, tenants, and users within a tenant.  This is done to increase resource utilization by allowing load balancing among tenants, and to reduce operational complexity and cost in managing the software to deliver the service.  For example, a patch can be easily rolled out to all tenants by patching a single instance instead of many.  Everyone’s data can be backed up in one operation by backing up a single instance.  The operations costs are then lower due to economies of scale and increased opportunities for automation.

For the SaaS Customer (a Tenant):  Multitenancy is transparent.  The customer seems to have an instance of the software entirely to themselves.  Most importantly, the customer's data is secure relative to other customers' data, and customization can be employed to the degree the application supports it without regard to what other tenants are doing.  The only tangible manifestations of multitenancy are lower costs for the service and better service levels, because it's easier for the service provider to deliver those levels.  The tenant will also want the ability to manage the system from their perspective (for example, to manage security access among their users) and will want that to be as seamless as possible and certainly never impacted by other tenants.

For the SaaS User (one seat on the Tenant’s account):  Multitenancy is transparent.  They just see the application as a user of any application would. 

Further definition around multitenancy gets into the realm of being too specific about implementations.  For example, there are folks who say multitenancy means an application won't be highly customizable because it's too hard to build.  It makes no sense to generalize about the properties of multitenancy on such a basis, so it should be left aside as something that relates to specific examples of multitenancy.  There are implementations of customizability that are incompatible with multitenancy, but I know it is entirely possible to build multitenant systems that are highly customizable if the proper metadata approaches and other customization tools are provided.  Eventually, I will write a post about how this can be done.
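To make the definition a bit more concrete, here is a minimal sketch (my own illustration, not a description of any particular vendor's design) of one common way to get tenant isolation and metadata-driven customization in a shared-schema system.  Every table name, tenant, and setting below is hypothetical; the point is only that each operation is scoped by a tenant identifier and that customization lives in data rather than in per-customer code forks.

```python
# A minimal shared-schema multitenancy sketch, assuming a single database
# and a tenant_id column on every table.  Illustrative only; real systems
# may use separate schemas or databases per tenant instead.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (tenant_id TEXT, user_id TEXT, name TEXT)")
db.execute("CREATE TABLE tenant_settings (tenant_id TEXT, key TEXT, value TEXT)")

# Two tenants share one instance and one schema...
db.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
               [("acme", "u1", "Alice"), ("globex", "u2", "Bob")])
# ...but each can be customized through metadata rather than separate code.
db.execute("INSERT INTO tenant_settings VALUES ('acme', 'fiscal_year_start', 'Feb')")

def accounts_for(tenant_id):
    # Every query is scoped to the calling tenant, which is what makes the
    # isolation "transparent" to the customer and the user.
    return db.execute("SELECT user_id, name FROM accounts WHERE tenant_id = ?",
                      (tenant_id,)).fetchall()

print(accounts_for("acme"))    # [('u1', 'Alice')]
print(accounts_for("globex"))  # [('u2', 'Bob')]
```

Other designs (one schema per tenant, or one database per tenant) can deliver the same benefits with different trade-offs, which is exactly why the definition above is framed in terms of benefits rather than any one implementation.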

The details of a particular implementation of multitenancy can vary greatly from one vendor to the next.  It makes sense that the details should vary, just as Enterprise apps vary a lot from one domain to the next.  For example, most web software vendors like Twitter probably don't think of themselves as multitenant architectures, but they are, with the degenerate case that a User and a Tenant are one and the same.

I bring up the odd example of Twitter because it anchors one end of a dimension that matters a lot for implementation choices:  the number of users per tenant.  A single user per tenant is a nice fine-grained unit.  In many ways it may be the easiest to create because load balancing and partitioning are so much simpler, although I don't want to say they are simple by any means.  This is a meaningful class of SaaS, BTW, because most desktop productivity applications can be modeled in this way, so it's relevant to Google's view of what SaaS is, for example.

Next up would be to have a fairly small number of users per tenant.  Salesforce.com had an average of 21 seats per tenant when I recently did the calculation.  That’s very small.  If you are sure you’ll never have many more than that based on the nature of your application, you can largely treat this the same way as one per tenant with a few tools to make it easy to manage small groups of users as a unit.  Of course, Salesforce can’t do that, because they also have much larger customers than just 21 seats!

Let’s jump over to the opposite end of the scale, because it is an interesting perspective too.  Suppose the application will average more like 1,000 or even more users per tenant.  Suddenly the server utilization and automation arguments are a little bit less valuable.  We have 50x larger tenants than in the Salesforce example.  We have 50x fewer customers to reach the same level of seats.  Whatever repetitive tasks we may perform for every tenant have to be done 50x less often even if we do them manually on a tenant-by-tenant basis.  Is this an argument not to go multitenant for such applications?  Possibly, because multitenancy offers a lot less benefit to such a domain.  One could envision that virtualization and a lot of automated scripting could get us nearly all the way there, at least until we had so many of these large customers that it made sense to do something more sophisticated.  Interestingly, several SaaS vendors I talked to have mentioned that the management overhead for a customer is nearly the same almost regardless of size.  There are some operations that are size dependent, but a properly implemented SaaS app pushes as much of that sort of thing as possible back to the customer as self-service (for example, to manage accounts and passwords).

Okay, we have a working definition that’s very simple:

Multitenancy is the ability to run multiple customers on a single software instance installed on multiple servers to increase resource utilization by allowing load balancing among tenants, and to reduce operational complexity and cost in managing the software to deliver the service.  Tenants on a multitenant system can operate as though they have an instance of the software entirely to themselves which is completely secure and insulated from any impact by other tenants.

Let’s turn now to the question of estimating the cost benefits of a well constructed multitenant architecture.  I’ve collected statistics on the relative cost to provide service for a number of public SaaS offerings for which such numbers are available:

Cost of SaaS 

We can see a mix of business SaaS like Salesforce.com as well as web software companies like Google.  The first takeaway is that costs can vary quite a lot from one company to the next.  This is a function of pricing (selling too cheaply relative to the costs of delivery raises the %) and of internal operating efficiencies.  It's hard to draw firm conclusions, so let's just work from the average of 26%.  An average web software company can deliver $1 of revenue for roughly 26 cents spent on hosting and managing their software for customers.

Lest you be thinking that all of the Cost of Service is hardware, let's consider the case of Google.  Gartner reports that Google runs on the order of 1 million commodity-class machines at this time.  Spread over that many machines, Google's Cost of Service works out to about $4,225 per machine per year, while the same report says they spend only about $1,800 per machine on the machine itself, storage, and all other hardware, even if we assume machines are replaced annually (which they aren't).  The rest of the cost is for network connectivity, datacenter physical space, and personnel to run all those little boxes.

I want to emphasize that we absolutely can't be too cavalier about even the 1,000 seat per tenant case when it comes to operational efficiency.  At $50 per seat per month, that customer is paying $600K of revenue per year.  If the SaaS vendor wants to spend 25% to deliver the service, they can afford to pay no more than $150K to keep the lights on for that large customer.  Much of that will have to go for the hosting (hardware, network charges, and datacenter).  What's left is a fraction of an IT person for operations on this tenant.  Even a vendor with such gross numbers of seats per tenant must therefore be able to deliver the service very efficiently.  Considerable automation of manual tasks will be necessary.
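As a quick check of that arithmetic (a sketch using the same assumed numbers: $50 per seat per month and a 25% cost-of-service target):

```python
# The 1,000-seat worked example, spelled out.
seats = 1_000
price_per_seat_per_month = 50
annual_revenue = seats * price_per_seat_per_month * 12   # $600,000
cost_of_service_budget = 0.25 * annual_revenue            # $150,000
print(annual_revenue, cost_of_service_budget)
```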

Now let's consider the costs of running traditional Enterprise on-premises software.  From talking to SaaS vendors about how they came by their pricing, there is a rule of thumb that says the annual contract cost of a SaaS offering is roughly equal to the perpetual license most vendors sell.  Oracle's Timothy Chou says the industry average is that it costs 4x the license per year to run traditional enterprise software.  Can you see how I got to a 16x operational efficiency advantage for SaaS over conventional enterprise on-premises?  Take Chou's 4x, divide by 26%, and you're there.
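For anyone who wants the arithmetic spelled out, here is the same derivation as a few lines of code.  The 4x and 26% inputs are the rule-of-thumb figures quoted above, not measurements of any particular vendor, so treat the output as an order-of-magnitude estimate.

```python
# Back-of-the-envelope reconstruction of the 16:1 figure.
license_price = 1.0                                   # normalize the perpetual license to 1
saas_contract_per_year = 1.0 * license_price          # rule of thumb: annual SaaS contract ~ license
on_premises_ops_per_year = 4.0 * license_price        # Chou: ~4x the license per year to operate
saas_cost_to_deliver = 0.26 * saas_contract_per_year  # average Cost of Service from the table

advantage = on_premises_ops_per_year / saas_cost_to_deliver
print(round(advantage, 1))   # ~15.4, i.e. roughly 16:1
```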

I walk away from this analysis with two big takeaways.

First, SaaS vendors need to manage by the numbers pretty tightly when it comes to the cost of service delivery.  They're very unlikely to hit the 26% average on their first release, and I've been told this by a number of young SaaS companies and their venture capitalists.  That means being on a continuous program of improvement.  If it were me, I'd want a dashboard of live metrics around these costs that really kept them top of mind, together with trending, so I could be sure costs were moving in the right direction.  Achieving a 16x advantage over conventional thinking is not a seat-of-the-pants project.  It's going to take some real effort and some real smarts to get there.

Second, this advantage is an interesting concept for SaaS customers to contemplate.  Delivering a service efficiently means not screwing it up.  Escalations and problems will drive such costs through the roof.  The SaaS vendor really only has the choice of making the customer happy, and doing so in a highly automated way that ensures the happiness is no accident.  Isn't this really the lesson Japanese car manufacturers used to become so successful?  Focusing on quality actually lowers costs.  SaaS vendors live and breathe exactly this because they're financially incented to do so.  On-premises vendors, by contrast, get paid a lump sum up front and don't have this operational efficiency monkey on their backs.  The customer bears that cost, and apparently, it is a cost that's 16x greater.

Is it any wonder SaaS vendors see multitenancy as their Holy Grail?

Posted in business, enterprise software, platforms, saas, strategy | 22 Comments »

SaaS, Cloud Computing, and Liability for Security Breaches

Posted by Bob Warfield on October 26, 2007

There's an interesting post by Larry Dignan about the TJX legal activity surrounding a data breach that exposed customers' credit card information.  It seems the banks are suing TJX to make it their problem.  In the past, banks and credit card companies had to eat the expense.  The TJX breach involved the theft from its system of over 100 million credit card numbers by unknown intruders.  This wasn't a case of a tape falling off a truck or a bug in software inadvertently publishing the numbers–criminals hacked into the system over an 18 month period to steal the data.  This is the largest such theft of its kind ever reported.  In addition, personal data on about 451,000 individuals was stolen in 2003 by accessing a system relating to merchandise returns without receipts.

The lawsuit's allegations about how the breach happened draw on conclusions reached by TJX's own consultants doing forensic work after the crime.  They said the company failed to comply with 9 of the 12 PCI-DSS requirements, among other things:

-  An improperly configured wireless network.  Note that this is outside a secure datacenter–an important point to keep in mind with supposedly secure apps.

-  Outdated WEP encryption on the wireless network, a practice many other retailers suffer from.

-  Failure to segment sensitive data and treat it differently.

-  Retention of credit card information that shouldn’t have been retained under PCI-DSS standards.

-  Apparently just one store in Florida was compromised sufficiently to lead to the calamity.  The data, about 80 GB, was transferred over the Internet to a site in California. 

-  In addition, a sniffer was installed on the network to capture credit card info which was being transmitted in the clear.

-  The consultant deposed noted that they had never seen such a “void of monitoring and capture via logs of activity.”

The list goes on for pages if you read the legal filings, but this gives a  nice understandable idea of what went on.  According to the plaintiffs, at least, TJX was even aware of a lot of the problems from audits the year before but had failed to fix things.

I read this after posting the last of my 3Tera interview, and it struck a chord with something from the interview.  The 3Tera guys talk about how data privacy regulations are increasingly driving datacenter centralization and outsourcing.  What a great concrete example this is, and it cuts both ways. 

It puts a greater onus on those running datacenters to keep them secure or face the consequences.  This puts more pressure on SaaS and other cloud computing vendors to deliver a very high quality product.  But secondly, it means that IT organizations will have even greater expenses around securing their in-house software activity as well.  If nothing else, the article on the TJX breach I linked to above mentions they weren't even sure of a lot of what was stolen because they had routinely deleted the data.  What are the chances TJX and others will decide to archive a lot more information in the wake of all this?

Inevitably, this will lead to exactly the kind of legislation the 3Tera guys mention.  This legislation will drive the kind of certification companies need to have around various kinds of data.  Europe is already off to a head start on this front, but the US will surely follow.  It may not even take legislation.  The civil legal system may create ramifications for how datacenters operate. 

Let me give an example.  As I understand it, your damages relating to use of stolen IP in software are dramatically less if you can show that your organization took steps to verify it wasn't using someone else's IP.  There are companies today that make a business of scanning your source code and looking for suspicious entries.  The CFO or General Counsel may mandate such scanning simply because it is cheap insurance relative to treble damages: by doing so, you can prove your organization took reasonable steps.

So it is with these security issues.  The more certifications along the lines of SAS-70, the more opportunity you have to tell the lawyers that your organization took reasonable steps and therefore shouldn't be held liable, or at least not liable for as much.  SAS-70, incidentally, is an auditing standard promulgated by the AICPA.  There are many more such standards out there, such as those associated with Sarbanes-Oxley.  When you look at the impact Sarbanes-Oxley has had on small public companies, it isn't hard to see that this kind of thing will drive the SMB market to doing more and more in the cloud using SaaS and other mechanisms, because they just won't be able to afford to certify their own projects the way the bigger companies can.

In TJX's case, they would have benefited from using a more standardized product, running in a more modern datacenter, with all the safeguards and certifications in place.  A modern thin client could run HTTPS, which provides far stronger encryption than the nearly useless wireless WEP protocol TJX was using.  In short, it's hard to see how a respectable SaaS vendor would have fallen into the same traps, precisely because customers would have insisted on audits like PCI-DSS and they'd have insisted the guidelines had been followed.  For their part, the SaaS vendor should be touting those things as advantages and amortizing the cost over multiple tenants so as to make it cheaper for customers to have the additional security.  Sure, a SaaS vendor could make a mistake, but doing so leaves that vendor liable more than the customer.

I can already hear the arguments that if TJX had only done the right things with their on-premises software, there'd have been no issue, so why is Bob making this out as a SaaS thing?  If the trend to take the litigation route on these things continues, companies will have a lot more to think about before undertaking to accept all of that liability themselves.  Let's also consider that TJX apparently knew of the deficiencies but did not take action.  Why, then, didn't they take action?  I have no data to support this, but in my experience this almost always boils down to issues of cost.  TJX probably had the best of intentions, but lacked the resources, budget, and time to make the fixes before time ran out for them.  It was a costly mistake, but again, it seems like costs are something that SaaS greatly alleviates.

Posted in business, saas, strategy | 1 Comment »

Interview With 3Tera’s Peter Nickolov and Bert Armijo, Part 3

Posted by Bob Warfield on October 26, 2007

Overview

3Tera is one of the new breed of utility computing services such as Amazon Web Services. If you missed Part 1 or Part 2 of the interview, they’re worth a read!

As always in these interviews, my remarks are parenthetical, any good ideas are those of the 3Tera folks, and any foolishness is my responsibility alone.

Utility Computing as a Business

You’ve sold 100 customers in just a year, what’s your Sales and Marketing secret?

3Tera:  We’re still in the early growth phase, our true hockey stick is yet to come, and we expect growth to accelerate.  Right now we’re focused on getting profitable.

We don’t have a secret, really.  We have a very good story to tell.  We’re attending lots of conferences, we’re buying AdWords, we’re getting the word out through bloggers like yourself, and we’re getting a lot of referrals from happy customers.

The truth is, the utility computing story is big.  People hear about Amazon and they start looking at it, and pretty soon they find us.  It’s going to get a lot bigger.  If you read their blogs, Jonathan Schwartz at Sun and Steve Ballmer at Microsoft are out talking to hosters.  Hosting used to be viewed as a lousy business, but the better hosters today are growing at 30-40% a year.  This is big news.

Bob:  (I think their growth in just a year has been remarkable for any company, and speaks highly to the excitement around these kinds of offerings.  Utility computing is the wave of the future, there is a ton of software moving into the clouds, and the economics of managing the infrastructure demand vendors take a look at offerings like 3Tera.  We’re only going to see this trend getting stronger.)

Tell us more about your business model

3Tera:  We offer both hosted (SaaS) and on-premises versions.  As we said, 80% choose the hosted option.  The other 20% are large enterprises that want to do things in their own data center.  British Telecom is an example of that.

We sell directly on behalf of our hosting providers, and there are also hosting providers that have reseller licenses.  Either way, the customer sees one bill from whoever sold them the grid.

Bob:  (This is quite an interesting hybrid business model.  Giving customers the option to take things on-premises is interesting, but even more interesting is how few actually take that approach:  just 20%, and those mostly larger enterprises.  It would make sense to me for a vendor looking to offer both models to draw a line that forces on-premises only for the largest deals anyway.  3Tera’s partnering model with the hosting providers is also quite interesting.)

How do you see the hosting and infrastructure business changing over time?

3Tera:  There are huge forces at work for centralization.  Today, if you are running less than 1000 servers, you should be hosting because you just can’t do it cost effectively yourself.  Over time, that number is going up due to a couple of factors.

First, there is starting to be a lot of regulation that affects data centers.  Europe is already there and the US is not far behind.  There are lots of rules surrounding privacy and data retention, for example.  If I take your picture to make a badge so you can visit, I have to ask your permission.  I have to follow regulations that dictate how long I can keep that picture on file before I dispose of it.  All of this is being expressed as certifications for data centers such as SAS-70.  There are other, more stringent standards out there and on the way.  The cost of adhering to these in your own data center is prohibitive.  Why do it if you can use a hosted data center that has already made the investment and gotten it done?

Second, there are simple physics.  More and more, datacenters are a function of electricity.  That's power for the machines and power for the cooling.  I talked to a smaller telco near here recently that was planning to do an upgrade to their datacenter.  This was not a new datacenter, just an upgrade, and not that big a data center by telco standards.

The upgrade involved needing an additional 10 megawatts of power.  The total budget was something like $100 million.  These are big numbers.  The amount of effort required to get approval for another 10 megawatts alone is staggering.  There are all kinds of regulations, EPA sign offs, and the like required.

Longer-term, once you remove the requirement for humans to touch the servers, it opens up possibilities.  Why do we put data centers in urban areas?  So people can touch their machines.  If people didn’t have to touch them, we’d put the data centers next to power plants.  We’d change the physical topology and cooling requirements to be much more efficient.

We want people to think of servers the way they think about fluorescent tubes in the office.  If a light goes out, you don’t start paging people and rushing around 24×7 to fix it.  You probably don’t fix it at all.  You wait until 6 or 8 are out and then you send someone around to do it all at once, so it’s cost effective.  Meanwhile, there is enough light available from other tubes so you can live without it.  It’s the same with servers once they’re part of a grid.

Conclusion

The changes in the industry mentioned at the end of the interview are quite interesting.  Legislation is not a driver I had heard about before, but it makes total sense.  Power density is something I'd heard about from several sources, including the blogosphere, but also more directly.  I met with one SaaS vendor's Director of IT Operations who said the growth at their datacenter is extremely visible, and he mentioned they think about it in terms of backup power.  When the SaaS vendor first set up at the colo facility, it had 2 x 2 Megawatt backup generators.  The last time my friend was there, that number had grown to 24 units generating about 50 megawatts of backup power.  For perspective, an average person in the US consumes energy at a rate of about 12,000 watts, so 50 megawatts is enough for a city of over 4,000 people.
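A quick sanity check of that comparison, using the figures quoted above (both are rough):

```python
# 50 MW of backup generation versus ~12 kW of energy use per average US person.
backup_watts = 50 * 1_000_000
watts_per_person = 12_000
print(backup_watts // watts_per_person)   # ~4,166, i.e. "a city of over 4,000 people"
```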

Another fellow I had coffee with this morning runs all the product development and IT for a large, well-known consumer-focused company on the web.  He mentioned they now do all of their datacenter planning around power consumption, and had recently changed some architectures to reduce that consumption, even to the point of asking one of their hardware vendors to improve the machinery along those lines.

These kinds of trends are only going to lead to further increases in datacenter centralization and more computing moving into the cloud: to increase efficiency, to centralize management and make it cheaper, and to load balance so fewer watts of energy are consumed idling.

Posted in data center, grid, platforms, saas, Web 2.0 | Leave a Comment »

The World Wants Results, Not Promises

Posted by Bob Warfield on October 25, 2007

Nick Carr says platforms want to be free, but that title is a bit misleading.  His examples, Amazon and Google, are platforms that charge for results and not promises.  They’re not really free, but they match the expense a customer pays much more closely to the value the customer receives.  In Amazon’s case, there is no listing fee, only a 15% fee if the sale is made.  For Google, you don’t pay for the ad unless someone clicks through.

This is not limited to selling and advertising platforms.  SaaS and Open Source are using this disruptive model as well.  Let me explain.

The basic mantra behind commercial open source is that dabblers can play with it as much as they like and pay nothing.  I have MySQL installed on my machine as we speak.  I downloaded it from the Internet, I'm playing with it, and I pay nothing.  However, there are incentives built in so that those who want to make a lot of money from open source need to pay.  This is accomplished in a variety of ways, and Joe Cooper has a great roundup of how Open Source vendors create these dual tracks where they can give away source code and let people use the software for free, but make money when others make money on the Open Source.

SaaS also matches cost to benefit much more closely than traditional Enterprise Software.  In a recent article, Chris Cabrera (CEO of SaaS vendor Xactly) writes:

In the on-demand world, the customer is truly in the driver’s seat. The software is “rented” on a per-month basis, and if a vendor does not deliver a consistently high level of value or fails to meet expectations, the customer can cut that vendor off in a blink of an eye — almost as easily as switching mobile phone services. Hence, customers need to be central to everything a successful on-demand vendor does, from product development and implementation to partnering with other vendors and customer care.

This approach contrasts tremendously with traditional enterprise software, with its lengthy implementation cycles, long-term licenses and enormous sunk costs that breed customer inertia. With enterprise software, customers wait months or even beyond a year for their application to be scoped, tested and deployed.

The customer may never be successful with the traditional enterprise software, but they’ve paid the big bucks up front.  They paid for a promise, instead of for results.

Posted in business, strategy | 1 Comment »

The Medici Effect: How Do You Make Creativity A Process?

Posted by Bob Warfield on October 25, 2007

This post in Jeff Monaghan’s blog struck a chord with a process I’ve used for a long time to stimulate creativity:

The basic premise is that true creativity can be found through the cross-fertilization of ideas from different, and unrelated fields.

I will broaden it a bit to reflect my own process:  true creativity can be found through exploring the unknown relationships of unrelated ideas.  In the most extreme, the ideas may even be randomly generated.

How can this work?

Consider a brainstorming exercise.  Take as many ideas as you can that are interesting to you no matter what the reason.  Write them on slips of paper (or do it in software if you prefer) and put the slips in a hat.  Shuffle, and start pulling out pairs of slips.  Write down the combinations.  The idea for SmoothSpan happened because “Viral” and “Enterprise Software” happened to come out of a hat at the same time.  I will say no more about SmoothSpan at this time, and people familiar with the idea will likely say it isn’t viral at all, but it was that unlikely juxtaposition (after all, what Enterprise Software even wants to be associated with the idea of being viral?) that got the creative juices flowing.
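Since the exercise works just as well in software as with paper slips, here is one tiny way to run it; the idea list is only a placeholder for your own:

```python
# Pull random pairs of ideas "out of the hat" and see what juxtapositions emerge.
import random

ideas = ["Viral", "Enterprise Software", "Marketplaces", "Spreadsheets",
         "Social Networks", "Utility Computing"]
random.shuffle(ideas)
for a, b in zip(ideas[0::2], ideas[1::2]):
    print(f"{a}  x  {b}")
```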

For some people, this process is automatic.  These are the intuitive thinkers.  If you are an overly top down and logical thinker, don’t underestimate the value of adding a randomizer to break you out of your rut and help you see around corners.  There are two other helpful techniques I will add to this. 

First, creativity is often stimulated by conversation if you have an open mind.  When you start out explaining an idea, the other person will often leap to an unexpected conclusion about what you’re trying to say.  Don’t scold them or drag them back on track too quickly.  Register their misconception and think about whether it isn’t an improvement on your idea rather than an error.

Second, learn to think about isomorphisms and abstractions.  Isomorphism is a fancy mathematical term for what a lot of folks would call a metaphor.  Technically, an isomorphism is a structure-preserving mapping.  Practically, if you think in those terms, you more readily recognize when apparently unrelated things contain principles that apply to one another.  Abstraction is related in this context.  Abstraction involves eliminating details that don't matter to the behaviour of a thing until we have the most generic possible view of the thing.  It's like jumping up to 100,000 feet to look at it.  If you abstract unrelated things, it will be easier to see the isomorphisms that may link the two together because there is less detail to confuse the issue.

Perfect those techniques and you’ll be borrowing useful insights from everything you encounter.

Posted in business, strategy | 4 Comments »

Social Network Fatigue: Different Personalities Want to Spend Time Differently

Posted by Bob Warfield on October 24, 2007

Do you have children?  If so, you’ve probably heard them say to you many times, “I’m bored.”  Kids are always in search of something to do.  Something other than homework and chores, that is.

Do you know many adults like that?  Not too many here.  Most adults I know complain that they don't have enough time to do what they'd like to be doing.  Most know exactly what they want to do.  If they express boredom, it is often about a job or the lack of a relationship, but they don't mean it in the same way kids do.  They don't usually want you to immediately drum up something to alleviate their boredom.  Most adults are reasonably comfortable sitting still for a few minutes of quiet introspection.  They often welcome the calm.

What does this have to do with Social Networking?  I’ve come to worry that this need by my kids to be entertained by something all the time, no matter how trivial, says something profound about Facebook and more generally about how to construct a Social Network to appeal to different audiences.

There has been a lot of posting in the blogosphere about how college kids are amused by attempts to use Facebook for business.  Joyce Park, creator of a successful Facebook application for sending your friends virtual drinks, says:

No one makes money on Facebook. Not even Facebook.

We all know that their ads have low (the worst?) clickthroughs.

It’s not clear that FB users have high purchase intentions. Why should they?

What are FB users doing?

Dating. And looking at pictures of people they’d like to date. Trying to pick up.

Folks, FB is A DATING SITE FOR COLLEGE STUDENTS! If you’re not in college, get over it.

Facebook is for people without jobs. And while that’s good for their users, it’s not for you.

That’s really telling it like it is and pulling no punches!  To underscore her point, Park goes on to quote Compete’s figures that show 50% of time on Facebook is looking at other people’s Profiles.

I have noticed on Facebook that while I have a lot of friends, most are dabbling at best, or inactive at worst.  It is the folks who are vicious networkers and extreme extroverts who seem to get the most from it.  Many others just want to be seen as being on Facebook, but aren’t really using it for much that I can see.

Zoli Erdos continues by saying:

The Facebook Fanclub’s recurring theme in comparing LinkedIn to Facebook is just how resume- and jobsearch-oriented LinkedIn is: go there, get what you want, then there’s nothing else to do there.

I'm sorry, but since when is this a complaint? Isn't business all about having an objective and efficiently reaching it with minimal time and effort? I suspect most of the LinkedIn "deserters" who switched to Facebook are independent types who have the time to hang around in Facebook, and are striving to enhance their personal brand.

Doc Searls has been saying life is too short for Facebook for some time:

Anyway, life's too short, and this list of stuff is too long. If you're waiting for me to respond to a poke or an invitation, or a burp or any of that other stuff, don't hold your breath. Or take offense. I've got, forgive me, better things to do.

All that poking and sending of virtual drinks does feel a lot like kids trying to avoid boredom, or perhaps flirting behaviour, to go back to what Joyce Park is saying.

I'm not sure why, but this Phil Windley quote today (which I read right after Doc Searls) really rang my bell:

The problem is that Facebook is annoying because that’s what works. Facebook’s success depends on bothering you incessantly and creating things for you to do.

I think Phil is trying to say that Social Networks succeed by doing this, but it crystallized an entirely different reaction from me.  It makes me feel old to say it, and it may even be career limiting and oh-so-not-hip, but I'm beginning to think:

Facebook is software created by kids for kids to solve a kid’s biggest problem:  they’re bored.  They want something to do.  They want a date.  They have tons of time and no idea what to do with it.

That pretty much means Facebook isn't going to be the whole Internet, because an awful lot of people, people who have the most money to spend, don't suffer from the disease that Facebook cures.  So now I'm stuck in the middle.  The Kara Swishers of the world will be saying, "Duh, we've been saying this for so long you are late to the party."  The Facebook True Disciples will be saying, "That guy is so old he doesn't even understand Facebook."

C’est la vie!

From a broader perspective, I think this has ramifications for other would-be Social Networks.  Think about what your target audience wants to get out of the experience and make sure they’re getting it.  That may seem trite, but it’s important.

Related Articles

Microsoft pays $250M for a stake in Facebook that values the company at $15B.  Apparently advertising to kids who want to date is extremely valuable!

GigaOm is skeptical about advertising on Facebook

Mathew Ingram says the deal makes sense because it's worth $250M to Microsoft:

1. It gets to keep Google out, so that’s good.

2. It gets to serve ads to those millions of devoted users who check their Facebook every five minutes.

3. It has effectively bought a call option on the future of the company.

And besides, they only had to come up with $250M, not $1.5B. 

I've said in the past it's a brinksmanship move.  Microsoft can easily afford $250M to push Facebook out of reach on total valuation.  This can have interesting consequences too.  It may make Facebook overconfident and cause them to shoot for the moon and lose in the end.  On that note, it is interesting that MSFT ultimately invested only half of what was being talked about.  I guess they aren't completely bullish on the idea!

I read that overall traffic to Facebook grew 102% year on year, but that growth for the 35-and-older demographic was only 19% over the same period.  That adds credence to the idea that this one is for kids, but I'd love to see the data broken out at the 25-year-old mark as well.

Scoble is aggressively rumor mongering on what this means. 

Posted in strategy, user interface, Web 2.0 | 4 Comments »

The Desktop Isn’t Dead, But Why?

Posted by Bob Warfield on October 24, 2007

I liked Ryan Stewart's title so much that I borrowed it for further commentary on the subject of why desktop software isn't dead.  This is all prompted by comments made by Jeff Raikes, president of Microsoft's Business Applications group, which includes Microsoft Office.

Despite Office 2007 not being on everyone’s list of favorites (I still find I have to get people to download the converter pack for older Office versions because they don’t want to upgrade), the Microsoft desktop is alive and well.  Raikes focuses on the example of Google Gears as evidence that even web companies want a desktop presence.  Ryan Stewart points out that this is just the offline connectivity story and there is a lot more to it.

I agree with Ryan, but think we should not underestimate offline connectivity either.  Tools like Gears and AIR are important if they support the ability to keep working away from an Internet connection.  I’m living in Silicon Valley, and there are frequent times when I can’t get a WiFi hot spot.  I imagine it’s much worse in other places.  If nothing else, you have to consider airplane time (at least until they all get WiFi!) as time when a lot of business professionals are working on their laptops.

As for factors beyond the offline story, Ryan mentions several:

It’s about branding, it’s about things like file type registration, operating-system drag and drop and being able to leverage the local resources for computing power.

This is all good stuff.  It makes me wonder, given how large a share Internet Explorer has, whether we'll actually see Microsoft cooperate with giving web apps such capabilities or whether it will have to be done despite Microsoft.  Ray Ozzie has spoken about extending the browser in areas such as cut and paste, so we'll see.  Despite my view that Microsoft has a rift with the web, this would be a good way for them to offer a show of good faith.  Incidentally, as Microsoft has uptake problems with new releases, a recurring revenue model like the one web applications offer is a good business opportunity.  The trick will be to carefully navigate those waters without the usual disruptive effect SaaS has of converting short-term license revenue into long-term payments.

Lest you think I do nothing all day but bash Microsoft, I want to bring up what I think is the most important criticism of web apps versus the Microsoft desktop.  To be blunt, the web applications are just not good enough yet.  When I read that Google's spreadsheet is still adding very basic features like the ability to hide columns, it gives me pause.  That's pretty scary.  We had that function in the world's first spreadsheet, VisiCalc.  Don't get me wrong:  MS Office is seriously bloated, and a meaningful web competitor need not rebuild all of that.  However, it feels like we have a little ways to go before critical mass is reached.

A second issue is stability.  I love Rich Internet Applications and seek them out at every turn.  Yet they are quirky.  I’m composing this blog post in the WordPress blog editor online.  It’s quirky!  Every now and again it hangs up.  I’ve lost a couple of articles, never very much data, but it is concerning.  Clipboard functionality is very funky.  Being able to translate rich text to HTML remains problematic for most of these applications.  Perhaps HTML isn’t quite expressive enough, but I understand even Adobe’s BuzzWord doesn’t get this done.

These are minor growing pains that will ultimately go away.  A lot of it is reflective of attitude among the web companies.  Don Dodge nails this in his post, "Why Google Will Fail in Enterprise Software".  Dodge is writing specifically about Tech Support, but it boils down to attitude.  His message is that Google doesn't value Tech Support.  He admits it doesn't take rocket science, and I'll add that Microsoft Tech Support is no picnic, but it is available.  So too with lack of features and bugs in web software.  Many of these companies are run by youthful founders who are more akin to the consumer web space than to anything business.  To riff on Dodge's remarks, these companies don't value what business users do so much as what's cool to them.  We have to move away from the idea that it is so amazing we can run a spreadsheet or word processor on the web and over to the focus of making it work so well everyone will want to do it.  The novelty has to pass.

Is it possible?  Yes, absolutely.  Business SaaS follows these dictates and values the right things already.  It comes out ahead of most traditional business software on these measurements.

It's still very much a horse race for the desktop.  The factors above mean there is time to respond.  Microsoft can continue to own it if they give a little.  They can enhance the browser and have a headstart building software around those enhancements, one of their traditional advantages.  Their culture is already built around more complete feature sets, stability, and security.  Business trusts them, despite what many on the web would say.  An ideal starting point would be to meet these online office apps with Microsoft offerings that are much less full-featured (but still featured enough) than Office 2007.  Offer these as a monthly add-on for Office 2007 purchasers to protect the underlying revenue stream behind Office.

Microsoft’s biggest challenge is to embrace a scary proposition without being afraid of hastening it.  Their second biggest challenge is to overcome internal inertial forces that lead to the incredibly long development cycles Microsoft currently experiences.  Web software turns the crank many times faster.

Related Articles

Google exec talks innovation: can you find the Apps pitch?  Google Apps are not yet innovative per my post above.

Posted in business, strategy, Web 2.0 | 6 Comments »

Interview With 3Tera’s Peter Nickolov and Bert Armijo, Part 2

Posted by Bob Warfield on October 24, 2007

Overview

3Tera is one of the new breed of utility computing services such as Amazon Web Services.  If you missed Part 1 of the interview, it’s worth a read!

As always in these interviews, my remarks are parenthetical, any good ideas are those of the 3Tera folks, and any foolishness is my responsibility alone.

What can a customer do with your system, and what does it do for them?

3Tera:  AppLogic converts applications, all of the load balancers and databases, firewalls and Apache servers, to metadata. This is done simply by drawing the topology of the application online, as if it were a whiteboard.  Once that’s done, your application becomes completely self-contained. It’s portable and scalable. In fact, it literally doesn’t exist as an app until you run it, at which point AppLogic instantiates it on the grid.

A 2-tier LAMP stack application can be set up in 5 to 10 minutes.  Once you’re done, you can operate your application as easily as you open a spreadsheet. Starting, stopping, backup and even copying are all single commands, even for applications running on hundreds of cpus. You can also scale resources up and down, from say 1 ½ cpus to 40 cpus, which covers a range of say 6 to 3000 users. You can make copies, or even move it to another data center. 

To make it easy to use existing software, we have what we call Virtual Appliances, which are a combination of virtual machine, virtual network and virtual storage that act together. You can install almost any Linux software in a virtual appliance and then manage the appliance as a unit, which is better than having to go machine by machine.

Applications are then created by simply dragging and dropping Virtual Appliances on an online whiteboard, and become a type of template.  We offer a bunch of these already configured for lots of common applications like the LAMP Stack, and there's even one for hosting Facebook applications that somebody did.  Probably half our customers bring everything up with these pre-defined Virtual Appliance templates and they never change anything there, they just run.

In a couple of weeks we'll introduce new functionality we call Dynamic Appliances, a sort of infrastructure mash-up, that lets you package data center operations in the same way.  You can implement performance-based SLAs, offsite backups, and virtually any other policy.  Once added to an application, the app will then manage that function for itself, becoming more or less autonomous.

Our larger Enterprise customers have told us how hard it is (impossible really) to implement standard policy across several hundred applications, but we make it easy with Dynamic Appliances because you’re dealing with the needs of just the specific application.

The bottom line is we eliminate the need for human hands to touch the servers.  You can do it all remotely, and we make it possible to automate most things.  You can configure, instrument, and manage even the largest online services with a web browser.

Bob: (I've spoken to a number of people about the 3Tera system, and they all confirm how expensive and painful it is to set up and manage servers.  Jesse Robbins over on O'Reilly Radar recently wrote a post called "Operations is a competitive advantage" that talks about exactly what 3Tera offers.)
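As a purely illustrative aside (this is not AppLogic's actual format, which I haven't seen), the underlying idea of capturing an application's topology as data rather than as a pile of hand-configured servers might look something like this:

```python
# Hypothetical sketch: an application described as metadata.  Appliance names,
# fields, and the 2-tier layout are invented for illustration only.
topology = {
    "appliances": {
        "lb":  {"type": "load_balancer", "cpus": 0.5},
        "web": {"type": "apache",        "cpus": 2},
        "db":  {"type": "mysql",         "cpus": 1, "mirrored_volume_gb": 50},
    },
    # Connections drawn on the "whiteboard" become edges in the metadata.
    "connections": [("lb", "web"), ("web", "db")],
}

def plan(app):
    # List what a grid controller would have to instantiate to run the app.
    for name, spec in app["appliances"].items():
        print(f"instantiate {name}: {spec['type']} ({spec['cpus']} cpus)")
    for src, dst in app["connections"]:
        print(f"wire {src} -> {dst}")

plan(topology)
```

Once the whole application is data, starting, copying, or moving it becomes an operation on that one description rather than on each server, which is the property the 3Tera folks keep returning to.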

What kind of technology is behind these capabilities?

3Tera:  There's quite a lot, as you can imagine.  Let's take storage as just one example.  We decided up front that we needed to run entirely on commodity hardware because it keeps our customers' costs low.  There are no hardware SANs or NAS storage as a result—we work entirely off the local disks attached to the servers in the grid.

But, we also felt users needed redundancy and high-performance. The solution – we’ve written a virtual SAN that runs within the grid, controlling all the available storage out on the grid. Our volume manager runs on top of this and makes mirroring totally painless.  If you lose your disk or a server, access to data isn’t interrupted and we’ll mirror that same data again as well. 

People complain that Amazon has no persistent storage, but Amazon didn’t need to have persistence for their application, so you can’t blame them.  If you choose the same architecture as them, it works great, but if not, you have a lot more work to do.  The trouble is, all the apps we see need persistence for their databases, so we gave it to them.  We’re offering infrastructure that matches the common architectures everyone uses.

The other important foundation for the service is our component assembly technology. This is what allows AppLogic to capture infrastructure definitions using the whiteboard metaphor and package the application. More importantly, it’s what allows AppLogic to then convert that into running applications on the grid.

Bob: (I thought the virtual SAN was very cool.  It will be interesting to see how Amazon addresses the persistence problem.  There are several companies working on a solution for mySQL, but I suspect Amazon has their own internal efforts as well.  OTOH, Amazon's S3 has a lot of fault tolerance that seems to go a bit beyond 3Tera's out-of-box capabilities.  The truth is that a finished complex application will require a number of different capabilities in this area, ranging from the immediate problem of keeping the database up to the problems of making sure there are good off-site backups and replication.)

Who are your competitors?

3Tera:  There really isn't much of anyone, unless you want to think of Amazon as a competitor.  We don't, though, because most users are coming from colocation or managed service providers today.

Challenges for Utility Computing

What concerns do your prospects have when you first meet with them?

3Tera:  Their first concern is how long it will take to learn the system.  It turns out it's really easy, and most users are up and running in a couple of weeks.  Just to make sure customers know we're going to make them successful, we put together a program we call the Assured Success Plan.  It's designed to take a customer from zero to full production in 2 to 4 weeks.  We charge $300/month for it.

Customers who sign up for the Assured Success Plan get a 1:1 relationship with an assigned 3Tera engineer.  They communicate via WebEx and teleconference.  Their first session is an orientation, and their homework is to just install some app on a grid.  The engineer and customer choose which app.

The second session, they go over the homework, and then they start talking about how to fit the customer’s app onto a grid.  By the third session, they’re ready to try to install the app.  The customer is asked to make a first go of it, and then the 3Tera engineer goes over how the customer did it and gives feedback on how to do better.

The fourth session is really interesting.  Here the customer tests failures and learns to deal with them.  It's really easy with our management tools to simulate a failure anywhere in the grid.  So the customer practices failure management and recovery.  Most customers never get a chance to thoroughly test these failures when building out with physical servers because the time and cost is prohibitive, so we're adding a lot of value going through the exercise.  The customer, meanwhile, is getting comfortable that they can manage the system when the chips are down.

Bob: (The assurance program sounds like an excellent and very practical introduction and training for the system.  This is classic SaaS.  Why do a proof of concept if you can be up and running in the same time?)

Do customers keep paying for ongoing service?  How do you do Tech Support?

We made the Assured Success Plan cheap enough that we think customers will like to keep it running. After the initial consultations, it converts to pre-paid support.  At the same time, we offer support in units of 5 hours for $300.  Most customers buy at least a unit of support as part of their contract, and we make it easy for them to flex that up if they’re having an unusual amount of trouble.

What about costs?  How do customers get comfortable with what you cost?

First, understand, in many cases they can’t do it at all without a service like ours.  Our service is an enabler.  We make it possible to fund a data center pay-as-you-go on a credit card.  Even in Enterprises, we see a lot of customers go hosted, label it as a “proof of concept” to keep IT happy, but once things are running, they never go and move it to their own data center.

Second, our hosting provider partners are brutally efficient at cutting costs and passing those savings along.  They know they have to be competitive and they’re good at it.  We add a surcharge on top of that, but it’s offset by your savings in management overhead.  You don’t have to buy or write your own infrastructure software, and you can manage your grid with far fewer people.  One person part-time can run 100 servers easily.  A full time person could run probably 500 servers.  Do the math on what a person costs and you’re net/net way ahead.

Our most popular configuration is an 8-server grid.  It costs about $5,000/month.  If you shopped very carefully, you might save 30% on that.  However, you'd have none of the management advantages we offer.  You'd spend that difference very rapidly either developing infrastructure to automate management with scripts, or hiring people to do it by hand.

Bob:  ( I asked one Director of Operations for a SaaS vendor what they thought of a 30% markup for this kind of service.  He laughed and said they would make that back in productivity savings so quickly that 30% would never be an issue to them.)

What’s a nightmare story you helped a customer with?

Imagine a largish bank.  What do you suppose their procedure and policy is to release code?  They need 1000 signatures over 9 months!  Most of the signatures involve internal manual processes.  They’d developed numerous layers of management to ensure stability and now they can’t get it done internally any other way. Because of this, many projects simply couldn’t get started. The cost wasn’t justifiable.  AppLogic eliminates many of those manual processes because the application is packaged as a unit. No one needs to touch the servers, the firewalls, or the load balancers. Plus, test applications can run in a hosted configuration, outside the corporate data center where there’s no interference with production systems. When they’re completed, they can be migrated inside with a single command. Thus, many of the issues with test deployments just don’t apply.

So this customer was able to move their project, which was dead in the water internally, into PoC. 

Next Installment

Next installment we’ll wrap up with a discussion of SaaS Sales and Marketing and Business Model thoughts from 3Tera.  Be sure to click the “subscribe” link at the top left of the blog page so  you don’t miss out on future posts!

Posted in data center, grid, platforms, saas, Web 2.0 | 1 Comment »

SaaS Limits Over-Engineering Too By Forcing Agile Development

Posted by Bob Warfield on October 23, 2007

When I read Phil Wainewright’s smoldering excoriation of SAP relative to SaaS competitors, I had to write another post today.  Wainewright compares the two after SAP announces they’ve had to can their SRM update and persuade people who aren’t already far along the adoption process to turn back to the 2005 version.  Whoa!  Back to 2 year old software.  Whatever could have happened? 

This is where the over-engineering monster rears its ugly head, and it certainly isn't the first time for SAP (or other big Enterprise vendors).  Add to that a healthy dose of being afraid not to cover every possible check-off item lest another Big-Enterprise-like vendor sense weakness.  Meanwhile, SaaS competitors in this space just keep churning out new releases.

In the same way that the Unreasonable Men have argued that SaaS prevents too much customization (to the good), I wonder if it doesn't also prevent too much over-engineering.  Big Enterprise is used to taking so long to build their next release that it is almost completely incompatible with the last one and requires an expensive update process to move to.  This is not exclusively the domain of Big Enterprise either: we see Microsoft's Vista laboring after 5 years to show that it really is better than Windows XP.  What's the problem here?

Borrow a page from Agile software development.  The Agile approach is all about iterative development with short iteration cycles and lots of face to face contact with the “customers” who will use the software to verify that each iteration is never too far off the track.  Isn’t that exactly what happens in SaaS when it’s practiced properly?  Aren’t SaaS companies iterating on much shorter cycles and getting those cycles in front of all the users of their multitenant systems very quickly?  Don’t SaaS companies avoid these giant boil-the-ocean projects because it would be impossible to do gigantic updates to their whole customer base?

Phil Wainewright likens this whole process to the Innovator’s Dilemma, which I think is right, but he also points to the complicity of the customers in an interesting way.  Phil worries that big enterprise customers ladle on so many requirements that it is impossible for a new category ever to mature.  New requirements are being added faster than the software can be written.

Here again, I think SaaS brings an enormous advantage.  Seeing something working today, solving real business problems, and delivering real ROI with minimal IT commitment helps business users push through initiatives without waiting for so much functionality to be amassed.  Customers know the product can come along quickly in the SaaS environment, but just as importantly, they know they can have value today.  Not in 2 years when the package may be cancelled anyway.  Not in a year after an expensive customization process.  Right now.

Perhaps SaaS is the moral equivalent of agile programming for Enterprise software.  It could certainly use it!

Related Article

Wainewright points out it isn’t just SAP:  Microsoft is doing the same thing with its Dynamics CRM offering, delaying it over a year.  Doh!  The new plan of record calls for releases every 2 years.  The delay is supposedly to retrofit .NET into the software?!?  How could a Microsoft product not have .NET to start with?  Decidedly not Agile.

Posted in business, saas, strategy, Web 2.0 | Leave a Comment »

More on Microsoft’s Rift With the Web

Posted by Bob Warfield on October 23, 2007

My post on Microsoft’s Rift With the Web generated some lively commentary, including a blog post from Tim Anderson and a similar-sounding comment from a confessed Microsoft employee.  The problem I have with both is that they’ve painted my original post with the same brush that has gotten Microsoft into trouble.  There seems to be a tendency to want to make it a totally black-or-white conclusion.  Here are some choice quotes:

“Microsoft had little-to-nothing to do with Java’s demise; and Java is not “the web”.  Java was rejected by the web because it was antithetical to the web.”

Wow!  Who knew Java had met its “demise” and had been “rejected by the web”?  Why wasn’t this on TechMeme?

“since enterprises did not want to bet on a single language/runtime that was controlled by a single vendor. Again, MSFT gave people a choice; you can’t blame us for them choosing open standards and interop.”

There is giving a choice, and there is forcing people to make a choice.  Microsoft favors the latter strategy because if you invest totally in their choice they lock you up.  They’ll play for the long term assuming they can add to that locked in base over time.  But having a choice is much different than forcing a choice.  Forcing a choice is black or white.  Having a choice is, “I want both, I want gray.”  Lots of folks out there are mixing lots of the open web tools together including Java.  Most of these tools are built for it.  I know very very few that mix .NET with that world.  Shops tend to go all .NET.  The exception would be for example SQL Server, but that’s a pretty weak counter example.

“MSFT was a good-faith promoter and participant in the relevant W3C standards since day 1.”

My recollection is different, but forget recollections.  Let’s look at the present and Microsoft’s standards-based activities around OOXML.  Apparently, Microsoft tried to stack the committee to get its way, which brought progress to a grinding halt and had Tim O’Reilly writing: “This is a great victory for advocates of openness.”  Now is this any way to behave as a good-faith promoter?

“Warfield does not quite say, but strongly implies, that .NET is failing in the market.”

Failure is a strong word, and not one I would choose.  What I did say is that Microsoft is making themselves an island, and on that, the writer evidently agrees:

“Now, I do partially agree with Warfield. Microsoft is an island…”

.NET hasn’t failed, but Microsoft’s insistence on .NET to the exclusion of the other platforms is what has placed it on an island.  The web at large is actually a pretty forgiving place, notwithstanding the frequent flame wars one sees; those are just good-natured barroom fisticuffs.  But the web is intolerant of too much dogma, and in particular, it is intolerant of walled gardens, which is exactly what Microsoft wants .NET to be.  You can’t blame Microsoft for trying to build a walled garden, only for misunderstanding the ramifications of doing so in a world that is well aware of what walled gardens are and what they mean.  By insisting on exclusivity, Microsoft has been banished to that island.

Tim Anderson (and others) dislike my use of Google results to bring perspective.  He presented some interesting data that shows IIS gaining on Apache.  The trouble is, if we read the fine print on that page, Netcraft concludes that the apparent improvement in IIS relative to Apache is due to the fastest-growing sites being MySpace, Microsoft Live.com, and Google’s Blogger.  MySpace and Microsoft Live both run IIS.  But is the number of servers in use a meaningful metric for this discussion?  No, not really.  The fact that the gap closed because of just 3 sites, one of them Microsoft’s own, only emphasizes my point more strongly.  The island has a couple of great big peaks, and one is home to Microsoft itself, which is surely another source of skew, and a worse one from my perspective if you’re talking about community and developer mindshare.

Tim makes a couple of other comments on my Google statistics:

“so by the same logic PHP is vastly more important than Java.”

I can live with that, and in fact, I’ve written about it.  I’m not sure why it would be controversial.  It doesn’t mean there aren’t a whole lot of folks who love Java, nor that Java isn’t a far better tool for many uses.  But PHP is a great tool too, and the world has proven that a number of times now.  The LAMP stack has become extremely successful among web startups.

Here is another:

“we must conclude that MySQL is far more important in the Enterprise than Oracle”

Tim has gold-plated his case by inserting the word “Enterprise” there.  Not a word I used in my analysis.  In fact, it seems to me we were talking about the web, not the Enterprise.  Turnabout would be for me to say, “Tim Anderson says Oracle is much more important to the web than MySQL.”  In fact, if we add the word “enterprise” to the Google search, we get the following:

Oracle+enterprise:  10,500,000

mySQL+enterprise:  1,940,000

Oracle beats MySQL roughly 5:1 in the Enterprise by this metric.  Tim, are you sure you don’t like this Google-diving approach?  It seems to agree with your points of view.
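For what it’s worth, the 5:1 figure is just the ratio of the two raw result counts quoted above, rounded; a quick back-of-envelope check:

```python
# Back-of-envelope check of the ratio quoted above (raw Google result counts).
oracle_enterprise = 10_500_000
mysql_enterprise = 1_940_000

ratio = oracle_enterprise / mysql_enterprise
print(f"Oracle : MySQL (enterprise) = {ratio:.1f} : 1")  # about 5.4 : 1, i.e. roughly 5:1
```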

Last point of clarification involves this statement:

“Should Microsoft drop .NET and embrace Java or PHP, as Warfield kind-of implies?”

Here again the writer is attempting to force a black-and-white choice instead of offering choices.  My exact recommendation to Microsoft was this:

“Microsoft needs to repair their rift and start embracing some of the other technologies out there. “

Frankly, things were pretty good before Microsoft went to war over Java.  I’m not interested in eliminating Microsoft or .NET!  I want to seal the rift because a bigger, more unified ecosystem will be good for all concerned.  I read that Sun is now increasingly using Python in lieu of Java in apps they’re shipping.  Sun has “cultivated and vigorously supported” Ruby.  When will we read something similar from Microsoft, instead of reading that they’re going to quit shipping the JVM at the end of the year?

It’s really not that hard.  It won’t hurt .NET, honest, and it will improve Microsoft’s ability to engage the ecosystem that is the web.  It means giving up on having a monopoly on web development, but come on, that just isn’t going to happen.  Move on.

Or, to paraphrase a famous Cold War era quote:

Mr. Ballmer, tear down this wall! 

(And stop the monopoly building foolishness: it won’t work, it’s hurting you, and it’s making your enemies grow stronger.)

Posted in business, strategy | 12 Comments »

 