SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for September, 2007

This Facebook Deal is Too Big! This Parakey Deal is Too Small! Maybe They’re Both Just Right…

Posted by Bob Warfield on September 25, 2007

Everyone needs to be careful what they wish for, lest they get it.  There’s a buzz in the blogosphere over two deals, one that many think is way too big and one that raises ire for being way too small.  Ironically, both involve Facebook.  You know things are a bit frothy when the same property can simultaneously generate many TechMeme hits, and when the hits are not that interesting, although what passes for popular gossip is often uninteresting anyway.

Let’s start with the deal that was too small.  Michael Arrington tells us that Parakey’s investors got a bad deal because they sold the company to Facebook for chump change of “less than $4M”, and they didn’t even get shares, they got cash.  Meanwhile the founders of Parakey got “handsome stock options”.  Apparently the investors got a 2x return after 6 months.  I’ve no doubt the investors were disappointed at the time, but their disappointment at present is completely irrelevant.

Why?  Play it back without the 20/20 hindsight.  The investors accepted a 2x return after 6 months.  That is well below what they need to make their numbers.  Check out Will Price’s excellent post to understand why VC’s want 10x multiples.  Worse, the 10x they want is measured over a couple of years and across several rounds, not after a few months on very little capital invested.  Now granted there were angels involved, but mighty Sequoia was also involved.  Let me tell you, those guys are smart.  They did not get taken advantage of here, at least not in their minds at the time the deal was struck.

That’s not to say Sequoia was wholly enamored of the deal.  Only the insiders know for sure whether:

- Sequoia had lost confidence/interest in Parakey.

- Parakey’s founders told their investors they were moving on to Facebook in any event.  The acquisition purely salvaged some value for the investors.

- Investors and Founders became convinced Facebook would crush Parakey if they didn’t give in to being acquired.  Convincing the acquiree of such an eventuality is very typical in tech M&A situations.

As for the founders getting “handsome stock options”, hello?  They got a job.  Jobs in Tech carry stock options.  Yes, those options appear to be worth an incredible fortune now that Microsoft is tickling the value of Facebook, but who knew back then?  The options still have to finish vesting, and Facebook has to generate a liquidity event before they’re worth anything.

I’ve been involved in a lot of M&A activity in the software world, and deals like this are pretty common.  At Borland, we used to look for companies that had great technology but failed marketing.  The usual formula was to pay the investors similarly to Parakey and give the employees jobs.  Most of the time, everyone was happy about it because great technologies found a real marketing channel where they could finally succeed.

In short, it’s silly to kvetch about the Parakey deal.  This is why Wall Street says we ought not to look back on stock trades.  We all have a list of those things we should have done.  So what?

The more interesting chess game is Microsoft’s rumored offer to invest in Facebook at an astounding valuation.  Apparently they’d like to acquire a 5% stake and are willing to give up circa $500M for it.  This churns up a lot of questions:  Why would Microsoft want this?  Why would Facebook want it?  Is Facebook worth $10B, let alone the $15B they’re rumored to be asking for as a counter?

Let’s start with Microsoft.  First, they need to find ways to stop the Google freight train from taking over the Internet completely while they get their act together.  Remember, Microsoft historically needs 3 releases for that to happen.  I’ve lost track of which release of their Internet strategy they’re on, but the Internet is not a product, so it may require even more releases.  I would count Microsoft’s first Internet release as the browser wars, wherein they thought of the Internet as a product and fought Netscape hard to win.  Now they have the leading browser, but it’s entirely unclear how that monetizes.  But we digress.

Facebook is the hottest available property right now, so making a 5% investment would give them some ability to interfere with Google taking it over while Microsoft figures out what to do.  Remember, they still have $21B of cash in the bank and generate a staggering $17B of cash flow each year.  If you had to deploy the astounding cash flows at Microsoft (and they don’t), $500M represents about a week and a half out of the year: $17B over 52 weeks is roughly $330M a week.  How many properties are as interesting to spend a week and a half’s cash flow on as Facebook?

GOOG does okay too, but Microsoft will win a cash bidding war.  Yahoo, meanwhile, is completely out of the game on these kinds of numbers.  It’s hard to imagine this had anything to do with slowing Yahoo down.

What’s the Microsoft downside? 

  • They can potentially make a return on a $500M investment or not.  They don’t care.  Look at their cash situation again.  Microsoft practically has to bury cash to keep investors from shaking them down to distribute it as bigger dividends.  They are almost too profitable.
  • They can end up buying Facebook.  If it looks right, why not?  I bet they wait until some catalyzing event takes place though.  Meanwhile, 5% may give them visibility inside Facebook they wouldn’t otherwise have had.
  • They can keep others from buying Facebook.  That’s the nightmare scenario, that a competitor + Facebook can lock Microsoft out of further Internet growth.  $500M is cheap insurance to avoid that.

Okay, we can sort of see how it makes sense to Microsoft.  It boils down to the rich having so much money that price matters far less than aspirations and strategies.

On to Facebook.  Their aspiration is likely to go public.  They want to be the next Google.  This deal is great for that.  They get a bunch of fresh capital to fund further expansion–you can buy a lot of Parakeys to launch into your channel for that!  They get a head start setting a huge valuation on their company for the public markets.  Will Microsoft interfere with their going public?  I would think not.  Seems like it’s easier for Microsoft to acquire them after they are public, circumstances permitting of course.  It certainly would have been harder for Larry Ellison to buy PeopleSoft had they been private.

This brings us to our next question:  is Facebook really worth $10B?  Very unclear from this transaction.  Microsoft has a vested interest in driving the price up.  Indeed, at one point Oracle told SAP and others they could and would pay crazy prices for acquisitions.   The answer to the real value of Facebook will have little to do with mundane issues, however.  It has to do with predicting the future.  If Facebook continues for long enough on their current meteoric trajectory, they will indeed be worth $10B or even more.  Waiting a little longer to see is another good reason for a company like Microsoft to invest in 5% and not the whole enchilada.  Growth curves can often flatten out when you least expect it.

Fred Wilson says this isn’t about setting a real price at all.  It’s about Facebook selecting a strategic partner and charging them a premium for it.  Fred sees it as similar to moves AOL made.  He’s right.  It is a strategic partnering opportunity, and from that perspective, Microsoft looks like a better/safer partner than Google, who wants to build a competing network.  The strategic partner angle is also why Apple makes a bad partner, despite what Scoble thinks.  Yes, they are the l33t style leaders, and they have a vocal community, but it is largely an island.  Such a partnership offers more to Apple than it does to Facebook.

Om Malik gets it with his “put option” concept, but I think he’s wrong to say Microsoft will never sell advertising anywhere else.  Where was the advertising going anyway?  Smaller players will always want to play with grown-up Microsoft.  Most of the bigger players are choosing up sides against each other.  Here Microsoft is at least gaining a dance partner.

Getting back to what you wish for, I’d say all parties are getting what they wished for at the time of the transaction on both deals.  Be careful what you wish for…

Posted in business, strategy, Web 2.0 | 3 Comments »

Halo 3’s Sophisticated Game Movie Capabilities are Key to Social Networking the Game

Posted by Bob Warfield on September 25, 2007

Josh Catone has an interesting post on Read/Write Web about the sophisticated game movie creation features of Halo 3.  Basically, you can capture video during game play and view it later.  But it’s more interesting than that.  Since Halo is a simulated 3D world, you can change your camera position after the fact, so it’s really recording game state.  The other angle Josh focuses on is a file sharing network Halo’s creators provide for sharing this sort of thing.

Commenters quickly point out to Josh that this is not anything new–you could create videos for Quake 10 years ago (albeit not with the changing camera angle) and in addition, there are many communities sharing such videos for various games. 

The interesting thing here is to step above the noise of who invented what when and consider what this means and why it makes sense to do it.  Yes, one could argue that the Halo guys just built in what other games had already done, but the fact that other games have done it and that it’s been a popular feature is exactly what wins the day.  Why is it popular?  Because it enables social networking for the gamers.

Think about it.  People who play these games love the rich 3D first person shooter (or driver or whatever) experience.  Virtual reality.  If you hand them some lame textual forum, it only takes them so far.  They want richer media.  They want to extend the game experience into the social network.

The moral?  If you’re trying to add Social Networking to another online experience, make sure you capture the personality style and media choices of the participants in the other experience.  For a systematic way to think about this sort of thing, check out my Web 2.0 Personality Style posts.

Posted in user interface, Web 2.0 | 3 Comments »

The IT Appliance World Has Multicore Problems Too

Posted by Bob Warfield on September 25, 2007

Rational Security has a great post on the challenges of multicore for vendors of security boxes.  You’ll recall that the Multicore Crisis comes about because the impact of Moore’s Law on microprocessor performance has changed.  We no longer get a doubling of clock speed every 18 months; instead, we double the number of cores.  The problem with this, and the reason it’s a crisis, is that conventional software isn’t designed to make use of more cores.

There are a heck of a lot of different IT Appliances out there performing every imaginable task.  Cisco, along with its competitors, has made a tremendous business selling a variety of them.  Inside the black box one often finds a computer running software.  It should come as no surprise that:

  1. Performance and throughput matters to these black boxes.
  2. Since they’re just computers, they can run afoul of the multicore crisis too.

Apparently the majority of these boxes are built on software that’s at least 5 years old.

It seems the multicore crisis will find its way to every corner of the computing ecosystem before too long.

Posted in saas | Leave a Comment »

IDC Says Software as a Service Threatens Partner Revenue and Profit Streams

Posted by Bob Warfield on September 25, 2007

Welcome to the party, IDC.  I’ve been saying that SaaS is toxic to old school ISV partners for some time now.

The problem is threefold.  First, reselling SaaS is problematic: the partner receives only a small portion of the monthly revenue stream, which is much less than what they’re used to when they resell perpetual licenses.  To make matters worse, SaaS already has a difficult time making a profit in many cases, so there isn’t a lot of extra margin to go around for partner/resellers.  Second, the Services piece of the pie is increasingly bundled into SaaS.  In fact, SaaS without Service is just hosted software, an entirely different and much less successful beast.  Third, SaaS vendors have largely made it difficult for partners to create value-added IP around their offerings.  Most SaaS offerings have no way to tie other software into them, and the SaaS vendor has worked hard to eliminate as much customization as possible.  What’s a poor partner to do?

For starters, partners should be working with SaaS ISV’s in their space to hammer out areas they can add value.  And the ISV’s should be listening.  These same partners are frequently going to be in customer accounts attempting to sway the customer one way or the other.  If you’ve totally alienated the key partners in your vertical, they’re not going to be pushing hard for your solution.

What sort of discussion can be had between would-be partners and SaaS ISV’s at one of these sit-downs?  The key is to identify what the partner’s distinctive IP is going to be and how they can participate in and gain access to the SaaS vendor’s customer base.  Distinctive IP can take a variety of forms:

-  Best practices within the space.  These partners often know as much or more than the ISV itself about the best practices.  The SaaS vendor can facilitate the partner’s ability to peddle those wares by providing access to the vendor’s community.

-  Adjacent modules.  Inevitably, some of these the SaaS vendor will have earmarked for themselves.  But others are things the SaaS vendor will never get to.  Having a discussion about how to deliver such functionality can lead to API’s on the SaaS product that benefit everyone.  These API’s need not be complex.  SOA is the watchword for SaaS.

-  Integration with other vendor’s software.  Inevitably, there are interesting integrations to be performed.  Once again, the SaaS vendor can play a role in opening up their system to make it easier for their partners to perform such integrations.

In addition to this kind of enablement, partners need the usual care and feeding they’ve always been after.  They want to feel like they occupy a special position with the ISV.  They want briefings and training and sales leads.  Put yourself in their position; this is all pretty basic stuff to you, but it’s life or death in many cases for a services partner.

In exchange for supporting their partner’s activities in building new businesses, the SaaS vendor can be rewarded with a vibrant ecosystem around their product.  They pick up more advocates out in the world and hopefully the partners will be helpful in delivering more sales leads as well as helping to close the leads that are already out there.

A SaaS vendor that doesn’t provide a viable ecosystem for their space’s partners runs the risk that someone else will.  Remember too that most people operate tit-for-tat in partnerships.  Start out treating them well, and they treat you well.  Do them wrong and they stop treating you well immediately.

Thanks to the excellent SaaSWeek blog for bringing this IDC story to my attention.

Posted in saas | 2 Comments »

Ning has 100,000 Social Networks: What Does It Mean???

Posted by Bob Warfield on September 24, 2007

Ning now has 100,000 Social Networks.  If you haven’t heard of it, Ning is a service that lets you rapidly create an entire Social Network.  All you need to add is content and friends.  So that means that people have created over 100,000 individual Social Networks on Ning since the company was founded in 2004.  It’s taken them a little while to find a product/market fit, but they really seem to have hit their stride.  This is a relatively recent development, as 70K of the 100K networks have been created in the last 7 months and the trend is your typical exponential growth curve.  One of the developments that contributed to this late-stage surge was a new release that eliminated the last vestiges of needing to do any programming to create a new Social Network.

Andreessen says they think of networks as falling into 3 categories:  big, long tail (small), and throwaway.  Moreover, he says it would be a mistake to assume all the action is in the top 200 or so networks.  The top 200 get less than half the traffic.  This is one of those times where understanding averages doesn’t tell us much, even with Marc giving slightly more data.  Do the top 200 get less than half the traffic because nearly all of the networks are relatively small, for example?  Marc Canter (a competitor) wonders about this sort of thing.  Yet it can’t be that all the networks are small because pmarca says there are some with tens of thousands of users.  There’s a little more color on this in the Ning blog.

My first reaction on reading this was to wonder what the heck people could be doing with 100,000 Social Networks.  I read elsewhere that folks who are otherwise huge cheerleaders for Social Networking are now complaining of Social Network Fatigue: the effort required to participate in a new social network is just too great.  Marc Andreessen’s post about passing 100K networks makes some interesting points:

  • Sometimes it’s good to have a disposable Social Network.  It’s used for one purpose, such as fund raising for a particular cause or experimentation, and then it’s discarded.  That makes a whole lot of sense to me.
  • He says Ning has a double viral loop that consists of both inviting folks into Ning and then into a particular Social Network on Ning.
  • He says the explosive growth of Ning has also coincided with Facebook’s growth.  He attributes the correspondence to heightened overall interest in Social Networking.

To the list of observations I gleaned from Marc, I would add that a traditional “Big” Social Network, like a Facebook, lets you direct content to your Social Network and receive content from your friends’ Social Networks.  You are the hub there.  Creating a special-purpose Social Network eliminates that hub function.  It enables group-to-group Social Networking when one person doesn’t make a good hub.  A great example is the Navy Wives network on Ning.  When I saw this network it immediately made sense.  What a great idea for Navy Wives to be able to get together in a social network.  They have a lot of things in common, they’re scattered all over the world, and there is no logical hub for them to congregate around.  According to the Ning site, they have 242 members and growing.  Given the size of our Navy, there’s lots of upward growth possible.

Mind you, not everyone wants group-to-group networking, but you can see where it makes a lot of sense if you have a group, club, church, organization, product, or other entity that wants its own branded Social Network as a presence on the web.  Here are some other interesting Ning Networks I came across:

  • Diddlyi Dance:  A Network for Irish folk dancers.
  • Ning Network Creators:  A Network for people who make Networks in Ning.  Kind of interesting to compare and contrast what goes on here with what’s on the Ning Blog.  Gives you an idea when to use a Network versus a blog.
  • Wakefirst:  A cool network for wakeboard enthusiasts.  These folks have pulled out a lot of stops to make a cool network that shows what you can do with Ning.
  • Craftbeer:  For home and microbrewers.  aka Cure for what ales you.
  • DIY Drones:  Build a remotely piloted vehicle.

Predictably, Ning has also rolled out the capability for Ning users to create Facebook apps to go along with their Ning Networks.  That’s a smart move because it lets Ning suction off traffic from Facebook over to the world of Ning.  In fact, it’s one of the 7 Guerrilla Platform Tactics I recently wrote about. 

Can Ning make money?  Time will tell.  I must say that to the extent people form Networks for a particular purpose, it does provide a targeted audience for advertisers that might prove valuable.  Failing that, Ning offers Premium Services at an extra charge, so they’re a SaaS Social Networking Platform.

Overall, I think Ning is pretty cool.  If I can think of a worthwhile Network to create, I imagine I’ll try it out.


Posted in platforms, saas, Web 2.0 | 3 Comments »

Congratulations Mozy guys, you’re rich! Now where the *&%@? is my data?

Posted by Bob Warfield on September 23, 2007

The news is out that Mozy has been acquired by EMC for $76M.  That’s a staggering multiple on the mere $1.9M in capital they raised.

I’m a Mozy user, but not a happy one.  I had been using Mozy on the recommendation of a friend.  Many others I know use it too.  The reason I’m not happy is that there is using it and then there is USING it.  Let me explain.  I suffered a loss of data, had been backing up with Mozy, and wanted to restore the lost data.  My attempts were wholly unsuccessful for the most annoying of reasons.  I would log into Mozy to recover said data, select “Restore Files” from the menu, and then I’d never hear back from Mozy again.  It would say “Loading Backup Information” until the cows come home.  I left it running for a couple of days to no avail.  I’m not the only one with this problem.  I gave up on them about a month ago and set about recreating what I’d lost.  In the end, the most annoying loss was my large Outlook.pst file.  The rest was pretty easily recovered, not least because I’ve started using a lot of SaaS-style web applications.

Just for kicks, I tried again as I was writing this article.  I gave it half an hour of looking at the “Loading” prompt before shutting it down.  You have to wonder whether the EMC guys did any diligence before buying Mozy.  Perhaps they just don’t care.  Presumably EMC has the wherewithal to make whatever is wrong right.  But there has to be a certain amount of negativity out there.  OTOH, it’s a classic case of an app that appeared to work for a long time when in fact it was badly broken.  So long as you didn’t actually need your files, the backup worked fine.  Write-only memory is not the answer for backing up data, no matter how smoothly it seems to work!  I’m still looking for an offsite backup alternative I like.

The morals of this story:

-  Test restoration of a backup.  Yes, I know, the IT guys are snickering.  They knew that.  Apparently many of them still don’t “do as they say!”

-  RAID arrays built with a motherboard’s onboard controller often can’t be read by a different motherboard.  How stupid is that?  You have to go out on eBay to find the same board to read your RAID array.  Doh!  Someone ought to fix that silly gaffe.

-  $1.9M is not enough capital to start a company like this.  Had EMC not come along, these guys were in trouble.

Congratulations Mozy guys, you’re rich!  Now where the *&%@? is my data?

Posted in Marketing, saas | 5 Comments »

Memcached: When You Absolutely Positively Have to Get It To Scale the Next Day

Posted by Bob Warfield on September 23, 2007

Why should “Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0” care about a piece of technology like memcached?  Because it just might save your bacon, that’s why.

Suppose your team has just finished working feverishly to implement “Virus-a-Go-Go”, your new Facebook widget that is guaranteed to soar to the top of the charts.  You launch it, and sure enough, you were right.  Zillions of hits are suddenly raining down on your new widget.

But you were also wrong.  When you designed the architecture, you woefully underestimated what it would take to handle the traffic that is now beating your pitiful little servers into oblivion.  Angry email is flowing so fast it threatens to overwhelm your mail servers.  Worse, someone has published your phone number in a blog post, and now it rings continuously.  Meanwhile, your software developers are telling you that your problem is in your use of the database (seems to be the problem so much of the time, doesn’t it?), and the architecture is inherently not scalable.  Translation:  they want a lot of time to make things faster, time you don’t have.  What to do, what to do?

With apologies to Federal Express, the point of my title is that memcached may be one of the fastest things you can retrofit onto your software to make it scale.  Memcached, when properly used, has the potential to increase performance by hundreds or sometimes even thousands of times.  I don’t know if you can quite manage it overnight, but desperate times call for desperate measures.  Hopefully next time you are headed for trouble, you’ll start out with a memcached architecture in advance and buy yourself more time before you hit a scaling crunch.  Meanwhile, let me tell you more about this wonder drug, memcached.

What is memcached? 

Simply put, memcached sits between your database and whatever is generating too many queries on it, and it avoids repetitious queries by caching the answers in memory.  If you ask it for something it already knows, it retrieves it very quickly from memory.  If you ask it for something it doesn’t know, your code fetches the answer from the database, copies it into the cache for future reference, and hands it over.  Someone once said that it doesn’t matter how smart you are: if the other guy already knows the answer to a hard question, he can blurt it out before you can figure it out.  And this is exactly what memcached does for you.
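To make that flow concrete, here is a minimal cache-aside sketch in Python.  It assumes the python-memcached client and a memcached server on localhost:11211, and get_user_from_db is a hypothetical stand-in for whatever query your code already runs; the first caller pays for the database trip, and everyone after that gets the in-memory answer.

```python
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def get_user_from_db(user_id):
    # Hypothetical stand-in for the real database query.
    return {'id': user_id, 'name': 'example'}

def get_user(user_id):
    key = 'user:%s' % user_id              # the key identifies the result we want
    user = mc.get(key)                     # fast path: the answer is already cached
    if user is None:
        user = get_user_from_db(user_id)   # slow path: go to the database
        mc.set(key, user, time=300)        # remember it for next time (5 minutes)
    return user
```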

The beautiful thing about memcached is that it can usually be added to your software without huge structural changes being necessary.  It sits as a relatively transparent layer that does the same thing your software has always done, but just a whole lot faster.  Most of the big sites use memcached to good effect.  Facebook, for example, uses 200 quad core machines that each have 16GB of RAM to create a 3 Terabyte memcached that apparently has a 99% hit rate.

Here’s another beautiful thought: memcached gives you a way to leverage lots of machines easily instead of rewriting your software to eliminate the scalability bottleneck.  You can run it on every spare machine you can lay hands on, and it soaks up available memory on those machines to get smarter and smarter and faster and faster.  Cool!  Think of it as a short term bandaid to help you overcome your own personal Multicore Crisis.

What’s Needed

Getting the memcached software is easy.  The next step I would take is to go read some case studies from folks using tools similar to yours and see how they’ve integrated memcached into their architectural fabric.  Next, you need to look for the right place to apply leverage with memcached within your application.  Memcached takes a string to use as a key, and returns a result associated with the key.  Some possibilities for keys include:

-  A sql query string

-  A name that makes sense:  <SocialNetID>.<List of Friends>

The point is that you are giving memcached a way to identify the result you are looking for.  Some keys are better than others–think about your choice of key carefully!
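Here is a quick sketch of the two key styles above (the variable names are mine, not anything memcached prescribes).  One practical wrinkle: memcached keys can’t contain spaces and are limited to 250 bytes, so a raw SQL string is usually hashed first.

```python
import hashlib

# Key style 1: derived from the SQL query string.  Hash it, since memcached
# keys may not contain spaces and are capped at 250 bytes.
sql = "SELECT friend_id FROM friends WHERE user_id = 42"
key_from_sql = 'q:' + hashlib.md5(sql.encode('utf-8')).hexdigest()

# Key style 2: a name that makes sense, e.g. <SocialNetID>.<List of Friends>.
# Readable, stable keys like this are much easier to invalidate later on.
user_id = 42
key_from_name = 'friends:%d' % user_id
```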

Now you can insert calls to memcached into your code in strategic places, and it will begin serving answers from the cache.  You’ll also need to handle the case where the cache has no entry by detecting the miss and telling memcached to add the missing entry.  Be careful not to create a race condition!  This happens when multiple hits on the cache (you’re using a cache because you expect multiple hits, right?) cause several processes to compete over who gets to put the answer into memcached.  There are straightforward solutions available, so don’t panic.
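One common way to sidestep the race, sketched below under the same assumptions as before (the python-memcached client, plus a hypothetical load_friends_from_db query): memcached’s add operation only succeeds if the key is absent, so it can serve as a cheap, short-lived lock that lets a single process rebuild the entry while the others wait a moment and retry.

```python
import time
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def load_friends_from_db(user_id):
    # Hypothetical stand-in for the real database query.
    return ['alice', 'bob']

def get_friend_list(user_id):
    key = 'friends:%d' % user_id
    friends = mc.get(key)
    if friends is not None:
        return friends
    # Cache miss: add() only succeeds if the lock key does not already exist,
    # so exactly one process wins the right to recompute.  The lock expires
    # after 30 seconds in case that process dies.
    if mc.add(key + ':lock', 1, time=30):
        friends = load_friends_from_db(user_id)
        mc.set(key, friends, time=300)
        mc.delete(key + ':lock')
        return friends
    # Someone else is rebuilding the entry; wait briefly and check again.
    time.sleep(0.1)
    return mc.get(key) or load_friends_from_db(user_id)
```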

Last step?  You need to know when the answer is no longer valid and be prepared to tell memcached about it so it doesn’t give you back an answer that’s out of date.  This can sometimes be the hardest part, and giving careful thought to what sort of key you’re using is important to making this step easier.  Think carefully about how often it’s worth updating the cache too.  Sometimes availability is more important than consistency.  The more you update, the fewer hits on the cache will be made between updates, which will slow down your system.  Sometimes things don’t have to be right all the time.  For example, do you really need to be able to look up which of your friends are online at the moment and be right every millisecond?  Perhaps it is good enough to be right every two minutes.  It’s certainly much easier to cache something changing on a two-minute interval.
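For the “who’s online” example, a short expiry does most of the invalidation work for you, and an explicit delete covers the case where a write makes a cached answer wrong.  A sketch under the same assumptions as above, with the query and write helpers as hypothetical stand-ins:

```python
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def query_online_friends(user_id):
    return ['alice']      # hypothetical stand-in for the real query

def write_friend_to_db(user_id, friend_id):
    pass                  # hypothetical stand-in for the real write

def get_online_friends(user_id):
    key = 'online:%d' % user_id
    online = mc.get(key)
    if online is None:
        online = query_online_friends(user_id)
        mc.set(key, online, time=120)   # at most two minutes stale, then it ages out on its own
    return online

def add_friend(user_id, friend_id):
    write_friend_to_db(user_id, friend_id)
    mc.delete('friends:%d' % user_id)   # the cached friend list is now wrong, so drop it right away
```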

Memcached is Not Just for DB Scaling

The DB is often the heart of your scalability problem, but memcached can be used to cache all sorts of things.  An alternate example would be to cache some computation that is expensive, must be performed fairly often, and whose answer doesn’t change nearly as often as it is asked for.  Sessions can also be an excellent candidate for storage in memcached, although strictly speaking that is typically just more DB caching.
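The same pattern works for an expensive computation: key on the inputs, since the answer only depends on them.  A sketch with build_report as a hypothetical placeholder for the expensive work:

```python
import hashlib
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def build_report(start_date, end_date):
    # Hypothetical placeholder for an expensive computation.
    return {'range': (start_date, end_date), 'total': 42}

def cached_report(start_date, end_date):
    raw = '%s|%s' % (start_date, end_date)
    key = 'report:' + hashlib.md5(raw.encode('utf-8')).hexdigest()
    report = mc.get(key)
    if report is None:
        report = build_report(start_date, end_date)
        mc.set(key, report, time=3600)   # asked for far more often than it changes
    return report
```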

Downsides to memcached

  • Memcached is cache-hit-frequency dependent.  So long as your application’s usage patterns are such that a given entry in the cache gets hit multiple times, you’re good.  But a cache won’t help if every access is completely different.  In fact, it will hurt, because you pay the cost to look in the cache and fail before you can go get the value from the DB.  Because of this, you will need to verify that what you’re caching actually has this behaviour.  If it doesn’t, you’ll need to think of another solution.
  • Memcached is not secure by itself, so it must either be deployed inside your firewall or you’ll need to spend time building layers around it to make it secure.
  • Memcached needs memory.  The more the merrier.  Remember that Facebook is using 16GB machines.  This is not such a happy story for something like Amazon EC2 at the moment, where individual nodes get very little memory.  I have heard Amazon will be making 3 announcements by end of year to help DB users of EC2.  Perhaps one of these will involve more memory for more $$$ on an EC2 instance.  That would help both the DB and your chances of running memcached on EC2.  There are other problems with memcached on EC2 as well, such as a need to use it with consistent hashing to deal with machines coming and going, and the question of latency if all the servers are not in the EC2 cloud. 
  • Memcached is slower than local caches such as APC, since it is accessed over the network.  However, it has the potential to store a lot more objects since it can harness many machines.  Consider whether your application benefits from a really big cache, or whether some of the objects aren’t better off with smaller, higher-performance, local caches.

Alternatives to memcached

  • Other caching mechanisms are available.  Be sure you understand the tradeoffs, and don’t take someone else’s benchmarks for granted.  You need to run your own! 

  • Static Web Pages.  Sometimes you can pre-render dynamic behaviour for a web page and cache the static web pages instead.  If it works out, it should be faster than memcached, which only caches a portion of the work required to render the page, usually just the DB portion.  However, rendering the whole page is a lot of work, so you probably only want to consider it if the page will be hit a lot and changed very little.  I’ve seen tag landing pages (e.g. show me all the links associated with a particular tag) done this way and only updated once a day or a couple of times a day.  You may also not have a convenient way to do static pages depending on how your application works.  The good news is static pages work great with Amazon S3, so you have a wonderful infrastructure on which to build a static page cache.

Conclusion

Memcached would seem to be an absolutely essential tool for scaling web software, but I noticed something funny while researching this article.  Scaling gets 765,013 hits on Google Blog Search.  Multicore (related to scaling) gets 88,960 hits.  Memcached only gets 15,062.  I can only hope that means most people already know about it and find the concepts easy enough to grasp that they don’t consider it worth writing about.  If only 1 in 50 of those writing about scaling knows about memcached, it’s time more people learned.

Go forward and scale out using memcached, but remember that it’s a tactical weapon.  You should also be thinking about how to take the next step, which is breaking down your DB architecture so it scales more easily using techniques like sharding, partitioning, and federated architectures.  This will let you apply multiple cores to the DB problem.  I’ll write more about that in a later post.


Posted in multicore, saas, software development, strategy, Web 2.0 | 11 Comments »

Social Graph or Network? Or Social Time Waster?

Posted by Bob Warfield on September 22, 2007

There’s a massive amount of traffic in the blogosphere about whether we should be referring to Social Graphs or Social Networks.  Over the weekend, just among the blogs I follow, the ones that commented on this included:

Dave Winer:  Who started this particular go round and insists you sound like a monkey if you call it a Social Graph.

Josh Catone over at R/W Web:  Agrees he found it all very confusing when the term Social Graph came up.

Nick Carr:  Was similarly confused until someone straightened it out for him.

Scoble:  Disagrees with Winer et al, the Social Graph and Social Network are two different things to him, thank you very much.  I don’t think he liked being told he sounds like a monkey, but then, who would?

Alan Patrick:  Places a vote against Social Graph because “it irritated him” when he first came across it.

Dave McClure:  He of the 500 hats says he agrees with Winer, but puts a thumb in his eye saying RSS = XML. 

Talk about a tempest in a teacup!  Is it just a slow news day, or what?  People, get a life!

Posted in saas | Leave a Comment »

Adobe Flex is an Awesome Prototyping Tool

Posted by Bob Warfield on September 22, 2007

I talked to someone the other day who built a complete mock up of a complex enterprise app’s proposed UI facelift in 2 days.  They were just as surprised as I was, and have since become addicted to using Adobe’s Flex for UI prototyping.  It’s really easy to use Flex in this fashion, much easier than trying to do something similar with AJAX. 

UI prototyping is essential for good UI development in my book.  You need to get the proposed UI in front of real customers ASAP, well before you can hope to have real code behind it in most cases.  Storyboards and the like are handy, but there’s nothing quite like an interactive UI prototype that at least navigates to the correct screens if nothing else.  I also like the idea of being able to build an app from “both sides”.  The UI gang can get a prototype going and start fleshing it out towards the engine.  The engine gang can start building towards the UI, and they’re able to see what UI functions are contemplated up front.  The two meet in the middle and you have alpha-quality code for a product.  If I’m planning to support both an RIA and a “dumb” thin client, I’d rather dumb down the RIA than try to “smarten up” the dumb thin client, which argues for starting from the RIA in any case.

Developers often like to start out with a real web page too, rather than a Photoshop illustration or Visio diagram or some such.  It only makes sense.  The task of trying to convert a Photoshop or other drawing into code is drudgery anyway, so why not just start with something that’s nearly ready for incorporation into the app?  A UI prototype also gives QA something more substantive to start thinking about test plans around.

Go ahead, give Flex a taste for this purpose.  It’s easy to learn at this level.  Here are some resources to help you get started:

Adobe Flex Home Page:  There’s a 30-day trial offer here; that’s your ticket to play with Flex.

Flex Component Explorer:  An easy way to see all the gadgets available in vanilla Flex 2.

Flex 2 Style Explorer:  And you can see how to tweak their styles.

Posted in saas | Leave a Comment »

Viral Blogging 2: More Ideas

Posted by Bob Warfield on September 22, 2007

I couldn’t agree more with Chris Brown when she says that the puzzle facing companies who want to use Web 2.0 for marketing is solved when they start thinking of how to collaborate with the people they’re trying to reach.  This is true for outwardly facing marketing applications for Web 2.0, and it’s also true for inwardly facing Web 2.0 solutions that try to improve productivity or morale within the organization.

In part 1 of this Viral Blogging Series, we talked about some simple features that could be added to blogging software to promote more linking between blogs.  It basically boiled down to some dedicated blog search features to make it quick and easy to find articles related to a post you are drafting and link to them.  Let’s continue to brainstorm some ideas together (and I do want to collaborate on this, so please use the comments to add your own ideas to this list!) for collaboration using Web 2.0.  Collaboration is, after all, the “viral” aspect here.  If folks don’t link to your blog articles or comment on them (or you to theirs), we don’t have that collaborative frenzy going.

Given that we have the “Find Related” button to help us identify related blogs, the next thing I’d propose is the ability to automatically track and interact with related writing that comes out about your article.  Wouldn’t it be great to have a little feed called “Related Articles” that you could add to the bottom of a post?  You may have seen me do this manually from time to time, but it’s way too time-consuming to keep up with it by hand.  And yet, the life of a blog post is just beginning when we first put it out there.  Yes, there are trackbacks and comments, but often there are also related articles that don’t have any linkages.  I want to make it easy to find those and designate them for the “Related Articles” list that appears below your post.  Perhaps you have to do a bit of editorial work to select which articles should appear, but that’s okay; you don’t want to just spam a bunch of search results up there.

Now what about making it easy for other non-blog platforms to tie into your article?  There’s a whole raft of these, and you can get various plug-ins to help out, but it makes more sense to have one single widget provided by the blogging platform vendor that covers all the bases.  The blog owner should be able to customize which of these platforms appear.  For example, I often show del.icio.us, Stumble Upon, and Digg.  Someone else might like Reddit.  These are all good sources of traffic and community and it should be made easy to tie into these.  They also provide more ways for people who like to participate (collaborators!) to use the tools they’re used to.

This brings us to a huge point that’s being made by Fred Wilson:  your social graph is not necessarily inside a Social Network.  Your email has a social graph and so does your blog.  Blogging platform guys, you need to enable this!  Xobni shows the way for email.  What does it mean for blogging?  Well, we can certainly extrapolate Xobni very literally and see some interesting possibilities.  If nothing else, I want to see my Social Graph in a meaningful way:

  • Who regularly links to my blog?  Who used to link but hasn’t in a while?  Can I add these guys with one click to Google Reader?
  • Same with comments.  Who comments on my blog, do they still comment, do they have a blog, can I start reading it?
  • What Love have I shown these folks?  Have I reciprocated by linking back to their blogs?  Have I commented on their blogs? 

Now what about tying into larger communities?  Having identified the Social Graph associated with my blog, I want to:

  • Invite the folks in the Graph to join other communities I’m a member of:  LinkedIn, Facebook, and MySpace.  After all, we’re interested in the same stuff, we’re conversing on our blogs, let’s get involved elsewhere too!
  • I want to pump my blog’s content into these other communities as desired.  Perhaps just News Feeds announcing when I make a new post.  But it has to be super automatic and easy.

What else can we do?  Well, the world of widgets comes to our aid here.  In short, we can try to engage the audience more fully.  I see two easy avenues worth trying.  First, we may try to appeal to more of the Web 2.0 Personality Styles.  We have to get away from just medium-sized textual posts (and I’m a self-confessed offender on this!).  Try these on for size:

  • Kyte adds a Twitter-like conversation to video.  Can I add a Twitter conversation to a blog post?  The idea is to encourage real-time back and forth with readers.  Perhaps it needs to be scheduled to make sure the blogger will be online at the appointed hour.  This could all be done as a widget that’s inserted into the post.
  • Let’s publish some video sometimes, and try to work photos in often. 

Second, there are lots of Widget possibilities that encourage collaboration:

  • Polls encourage people to express themselves.
  • How about being able to switch from a chronological view of the blog to a “votes” view?  People can vote on every entry the way they do on something like Dell’s IdeaStorm.  Is it still a blog if you do that?  Who cares!  Haven’t you ever wanted to go to a blog you just discovered and see all the older articles that were really good?  Isn’t this a way of giving them new life?
  • How about IdeaStorming new blog article ideas with your audience?  Heck, run a contest for the idea you like the best.  Wouldn’t you love to be able to convince Scoble or TechCrunch to jump on your idea, research, and write about it?

While we’re on the subject of contests, I think they’re a wonderful way to generate content for corporate blogs.  Go to your employees and ask them to submit articles to the corporate blog.  Let them guest author on the CEO’s blog and give prizes for the articles selected.  You may want to guide them with a list of appropriate themes, but remember, blogging is not so much about just staying on message, it’s about giving valuable content and encouraging collaboration.  Wouldn’t you like an occasional guest blogger on Jonathan Schwartz’s blog?  There are a ton of really bright people inside Sun, let them share the soapbox once a month.

What are your ideas for injecting more collaboration into blogging? 

Related Articles:

Noah Brier sees improved community features as the next logical step for blogging.

Posted in saas | 5 Comments »

 