New Paradigms for Developer Thinking: Spatial Synthesis
Posted by Bob Warfield on October 11, 2007
I came across two posts recently that are completely unrelated but that mysteriously bonded inseparably in my head. Sounds painful, I know, but it’s how my mind works, and it’s why I read a ton of material and relish finding different frameworks for thinking about problems. When you can plug two together and they work, it feels like a new discovery. The more formal term for this is “isomorphism”, although an isomorphism implies total overlap at some level, and I’m very interested even in partial overlap that can lead to an intuitive leap. More on isomorphisms in Process Perfection.
Getting back to the point, the first set of ideas that I came across involved thinking of programming as synthesis. The idea came up in an interview with Dan McWeeney. I like the post by Doug McCune best, but be sure to watch the video that was done by the Red Monk guys.
What is programming as synthesis? The reason I like McCune’s write-up is that he ignores all the other stuff in the other blogs and jumps straight to the heart of the issue:
> There’s a whole new group of people that’s being created right now. Which are people that are really synthesizing things. And they’re programmers at heart but they’ve realized that there’s way more smart people in the open source community that they can tap and now build these things together. So it’s like mashups for programmers, where you’re taking all these little bits and you’re synthesizing all this crap together to create this whole new thing.
That’s very cool. Now, what’s the killer example that slams it home?
> They wanted true 3D physics, and there isn’t anything like that available for Actionscript yet. So what to do? They got a non-Actionscript open source (I think) physics engine written in C (or C++, I think?). Then they run the physics simulation and pipe coordinates that represent all the objects and movement in 3D over a local socket connection that gets read by their Flex app. Fucking a. This is the kind of thing that’s awesome. Someone says “But we can’t do 3D physics in Actionscript” and they just say “Well fuck it then, we’ll do it anyway.”
So they’re building a 3D game in Flex, and they hook up a 3D physics component by shooting it coordinates via a socket connection. Why do I like this so much? Because it resonates really well with some ideas I’ve been preaching and some others I’ve been thinking long and hard about:
- They’re using multiple languages, and they’re using each one for what it’s good at instead of one-language-fits-all. Flex is the bulk of the app and C++ is the physics engine. Perfect!
- The connection between these components is super simple. It had to be. What these guys will tell you is that they aren’t smart enough to do the hard stuff. They need easy connections so that others can do the hard stuff for them. And it worked without any of the crazy middleware ESB mumbo jumbo. It worked well enough for an interactive Wii game, for Heaven’s sake. It isn’t RESTful, but that’s okay; it’s working for these guys. The point is that it was easy.
- Their concept of synthesis is component software. That’s a Holy Grail many have been chasing for a long time. Their version sure sounds simpler and happier than massive OOP libraries and frameworks or complex middleware. For once, here are some guys saying they did something that radically simplified their project.
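To make the socket hookup concrete, here’s a minimal sketch of the idea in Python. The wire format is my own assumption for illustration (the post never specifies one): one object per line as “id x y z”, with a blank line ending each frame. The point is how little machinery the connection needs.

```python
import socket

# Hypothetical wire format (not from the post): one object per line,
# "id x y z", with a blank line terminating the frame.
def encode_frame(objects):
    """Serialize {id: (x, y, z)} into newline-delimited text."""
    lines = [f"{oid} {x:.3f} {y:.3f} {z:.3f}"
             for oid, (x, y, z) in objects.items()]
    return ("\n".join(lines) + "\n\n").encode()

def decode_frame(data):
    """Parse a frame back into {id: (x, y, z)}."""
    objects = {}
    for line in data.decode().strip().split("\n"):
        oid, x, y, z = line.split()
        objects[oid] = (float(x), float(y), float(z))
    return objects

# Pipe one physics frame over a local socket pair, standing in for the
# C++ engine on one end and the Flex app on the other. (A real reader
# would buffer until it sees the blank line; one recv suffices here.)
engine_side, flex_side = socket.socketpair()
frame = {"ball": (1.0, 2.5, -0.75), "wall": (0.0, 0.0, 0.0)}
engine_side.sendall(encode_frame(frame))
received = decode_frame(flex_side.recv(4096))
```

That’s the whole “middleware”: a dozen lines of text over a socket, which is exactly why anyone on either side of the connection can do the hard stuff independently.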
Note that this isn’t just one game demo; major pieces of software are showing signs of this programming-by-synthesis approach. Take a look at Workday’s and SAP ByDesign’s model-based approach to creating applications. It’s a very similar concept.
OK, on to the second article that got wedged in my head and attached itself to this one. Allan McInnes over at Lambda the Ultimate wrote a doozy of a post in terms of getting my gears turning. He riffs on an IEEE article that says it’s time to stop calling circuits “hardware”. The IEEE article argues that code requires sequential thinking, while circuit design is spatial. That gets translated into the idea that traditional programming languages (your Javas, C++s, PHPs, and such) are focused on sequential (he calls it “temporal”) computing, but that the future of programming languages may require some form of spatial orientation.
Now, the article is largely focused on programming hardware, so I jumped the gun quite a bit by leaping to the idea that parallel programming is inherently spatial, not sequential. Was that a mistake? I don’t think so. One of the big keys for many parallel algorithms is figuring out how to partition your problem into pieces that can be processed in parallel. The choice of how to do that is often the most important part of the algorithm. But isn’t that kind of partitioning fundamentally a spatial function in our minds? It sure feels that way to me. We speak of “mapping” frequently in these kinds of algorithms. Could it be more spatial?
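Here’s a minimal sketch of that partition-then-map shape in Python. All the interesting thinking lives in the partition step, which is the spatial part; the rest is just map and combine. (Threads are used purely to keep the illustration self-contained; a real parallel sum would use processes or another runtime.)

```python
from concurrent.futures import ThreadPoolExecutor

# The spatial step: carve the data into independent chunks.
def partition(data, n_chunks):
    size = (len(data) + n_chunks - 1) // n_chunks  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def chunk_sum(chunk):
    # Each worker handles one piece of the "map" independently.
    return sum(chunk)

def parallel_sum(data, n_workers=4):
    chunks = partition(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(chunk_sum, chunks))
```

Once the partitioning is right, the sequential part of the algorithm is almost an afterthought.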
Now, what do these two have to do with one another? It seems to me that the component-software synthesis vision is also an inherently spatial “wiring up” exercise. A sufficiently simple paradigm for wiring up the components lets us quit worrying about uber-complex framework mechanisms and focus on laying out our wiring as it needs to be. Again, it is an inherently spatial process. Don’t you prefer block diagrams for this kind of thing? I’m not talking about flowcharts, which are pictures of sequential thinking. I’m talking more about dataflow and data-structure diagrams. That’s spatial thinking.
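As a toy illustration of wiring-as-program (every name in the graph is made up for the sketch), here components are plain functions and the “program” is just a spatial description of who feeds whom, much like a block diagram:

```python
# A toy dataflow wiring diagram: {node: (component, [upstream nodes])}.
# The graph is the spatial layout; evaluation order falls out of the wiring.
def run_graph(graph, inputs):
    values = dict(inputs)
    def evaluate(node):
        if node not in values:
            component, upstream = graph[node]
            values[node] = component(*[evaluate(u) for u in upstream])
        return values[node]
    return {node: evaluate(node) for node in graph}

graph = {
    "scaled":  (lambda x: x * 2,    ["source"]),
    "shifted": (lambda x: x + 1,    ["scaled"]),
    "summed":  (lambda a, b: a + b, ["scaled", "shifted"]),
}
result = run_graph(graph, {"source": 10})
```

Notice there’s no sequential script anywhere; you read the graph the way you’d read a block diagram.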
I wrote recently about the pleasures of RESTful SOA architectures versus gnarly ESBs. I wonder whether a lot of the difference might not come down to sequential ESB thinking versus spatial RESTful thinking. This third thread hasn’t quite bonded to the other two, but it feels like it wants to. Statefulness, it seems, may be a manifestation of sequential thinking; it collapses dimensions to a point. If we have state, it ought to be implicit, as in the location on the spatial map where we are at the moment. That map tells us, from that location, where we may go next. That’s RESTful, and it seems to me highly spatial. Don’t we think of the web itself in abstractly spatial terms? We “go” to URLs. We “follow” links. It is a journey through space.
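As a toy illustration of state-as-location (the URLs are made up for the sketch), here’s a hypermedia map where each resource carries the links you may follow next, and the only “state” the client holds is where it is on the map:

```python
# Made-up hypermedia map: each resource lists its data and the
# links you may follow from it. State is just your current location.
resources = {
    "/orders/42": {
        "status": "pending",
        "links": {"cancel": "/orders/42/cancel",
                  "items":  "/orders/42/items"},
    },
    "/orders/42/items":  {"items": ["widget"],
                          "links": {"order": "/orders/42"}},
    "/orders/42/cancel": {"status": "cancelled", "links": {}},
}

def follow(location, rel):
    """Move through the map by following a named link from here."""
    return resources[location]["links"][rel]

here = "/orders/42"
here = follow(here, "items")   # "go" to the items
here = follow(here, "order")   # "follow" a link back to the order
```

The client never carries a session; the map itself says where you may go next, which is the spatial reading of REST.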
If all that really hangs together, why doesn’t a visual representation (as opposed to a textual representation) make more sense for synthesis, parallelism, scaling, and SOA?