Where has the “middle” of middleware gone?
This week I attended a couple of the workshops at the Middleware conference, the main venue for the research community working on systems integration middleware. In particular I was an invited speaker at the Future of Middleware (FoME) workshop, which attracted a great bunch of people with views on where the field is going.
Listening to all the talks, one of the things that jumped out was the diversity of concerns people were expressing. Those working at enterprise level were concerned with scaling out systems to millions of users using cloud computing; at the other extreme were sensor networks people like me, worried about programming low-powered sensor motes with at least a semblance of programming language support and software engineering process. That a group of people with this spread of interests can find useful things to talk about together says a lot about how broad a term middleware actually is.
But compared to years past, it was also clear that these concerns affected the different groups asymmetrically: very few issues really commanded broad interest. This led me to wonder: where has the “middle” gone from “middleware”?
In the 1990s, middleware people (including me) were working with CORBA and the like. These systems were intended for broad application in integrating client and server components into single systems. In CORBA’s case this involved designing object-oriented domain models that could then be implemented using any chosen programming language and support interactions seamlessly, regardless of implementation or distribution. CORBA provided (and indeed provides) a lot of supporting infrastructure, including dedicated wire protocols, a shared type system and object model, higher-level services, binding, call models and other groovy toys. It achieved enormous penetration into markets that value long-term interoperability and evolution, such as financial services. It also influenced a whole range of subsequent developments, including web services and service-oriented architectures that are, to some extent, defined by their similarity to, or differences with, CORBA. (For a discussion of these differences see Baker and Dobson. Comparing service-oriented and distributed object architectures. LNCS 3760. 2005.)
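CORBA’s programming model can be caricatured quite compactly: an interface is defined once (in IDL), and generated stubs and skeletons marshal calls into a shared wire format, so the caller sees what looks like a local object regardless of where, or in what language, the implementation lives. The following is a loose illustrative sketch of that stub/skeleton idea in Python — it is not CORBA’s actual API, and the class names and the JSON “wire format” are stand-ins of my own invention.

```python
import json

# In real CORBA the stub and skeleton would be generated from an IDL
# interface definition; here they are written by hand for illustration.

class AccountSkeleton:
    """Server-side 'skeleton': unmarshals requests and invokes the servant."""
    def __init__(self, servant):
        self.servant = servant

    def dispatch(self, request_bytes):
        req = json.loads(request_bytes)            # decode shared wire format
        method = getattr(self.servant, req["op"])
        result = method(*req["args"])
        return json.dumps({"result": result}).encode()

class AccountStub:
    """Client-side 'stub': marshals calls so the caller sees a local object."""
    def __init__(self, transport):
        self.transport = transport                 # here: a skeleton, directly

    def balance(self):
        return self._invoke("balance")

    def deposit(self, amount):
        return self._invoke("deposit", amount)

    def _invoke(self, op, *args):
        request = json.dumps({"op": op, "args": list(args)}).encode()
        reply = self.transport.dispatch(request)   # would cross the network
        return json.loads(reply)["result"]

# The servant could equally be written in any language with a binding;
# only the wire format and interface are shared.
class Account:
    def __init__(self):
        self._balance = 0
    def balance(self):
        return self._balance
    def deposit(self, amount):
        self._balance += amount
        return self._balance

account = AccountStub(AccountSkeleton(Account()))
account.deposit(10)
print(account.balance())   # prints 10: the call looks local but was marshalled
```

The point of the sketch is where the compromises live: the shared wire format costs performance, and the shared type system (here, whatever survives a JSON round-trip) constrains every language binding.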
As is pretty much apparent from this, CORBA sits resolutely in the middle of middleware. It is intended to integrate disparate systems and allow them to work together, and evolve somewhat separately, while managing the complexity of the overall architecture. It tries to avoid the problem, identified by Bertrand Meyer, that large systems exhibit non-linear behaviour in which the cost of making any change to them is proportional, not to the size of the change being made, but to the size of the system being changed. It doesn’t completely succeed in this, of course — no system does — but it provides a common centre around which to organise largely independent components.
Another way of looking at CORBA is less flattering: that it was inherently compromised by conflicting, and in large part irreconcilable, goals. It was reasonably performant, but by no stretch of the imagination high-performance given the overheads of a complex and general-purpose wire format. It was reasonably portable, as long as one accepted the limits imposed by the single type system: no first-class functions or mobile code, for example. It was reasonably easy to port and to support new languages, but every new language binding did require considerable ingenuity both in terms of technical design and standardisation of interfaces.
Sitting in the middle, in other words, was tricky and uncomfortable.
The causes of, and justifications for, these compromises aren’t hard to find: what else can one do if one is trying to support the whole range of applications? Each piece of middleware sits at a particular point in the design space, trading off performance against generality, for example, or advanced typing against making bindings awkward or impossible for some languages.
And it’s this generality that seemed to be missing from discussions of the future of middleware: no-one intends to support this range any more. Instead we’re seeing a division of the design space in which different application communities focus on one or two design dimensions that are undoubtedly the most important — and largely forget the rest. For Twitter, for example, the main design goal is lightweight interaction at clients so that Twitter clients are easy to write. They have no concern whatever with typing or reliability: if tweets are lost, who cares? For the web services community — perhaps closest to the “spirit of the middle” — the issues are extensibility and use of standards, with no concern for protocols, performance or end-point complexity. It’s fairly easy to see that these design issues are simply too diverse to be spanned by a single middleware platform.
I don’t think this necessarily spells the death of integrating middleware — and that’s just as well, given that we still have to integrate these systems despite their increasing heterogeneity. What it does do, though, is change the locus of innovation away from ever-larger, more complex and more general platforms towards more specialised platforms that can integrate as far as needed — and no further, so as not to over-burden applications or developers. Several speakers talked about using component-based approaches to build platforms as well as applications. In our talk we discussed similar ideas, removing the a priori assumptions underlying middleware platforms and focusing instead on how to optimise what hits the metal. (Dearle and Dobson. Mission-oriented middleware for sensor-driven scientific systems. Journal of Internet Services and Applications. 2011.) This will give rise to a whole range of further challenges — how do we guarantee the right features are available? How do we add (and remove) features on the fly? How do we find the features we need? — that are radically different from those encountered for CORBA and similar systems. But the result will (hopefully) be to improve our ability to create, manage and evolve ever more sophisticated combinations of applications and services, and make it easier to roll out and scale out the next generation of applications and scientific experiments.
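To make the component-based idea a little more concrete, here is a deliberately tiny, hypothetical sketch in Python of a platform whose features are components that can be installed and withdrawn at run-time, rather than baked into a monolithic stack. None of the names correspond to any real middleware; it only illustrates the shape of the “add and remove features on the fly” question.

```python
# Hypothetical component-based platform: features (compression, routing,
# logging, ...) are pluggable stages in a message pipeline, so the platform
# carries only what its applications actually need.

class Platform:
    def __init__(self):
        self._stages = []                          # ordered (name, fn) pairs

    def add_feature(self, name, fn):
        """Install a feature on the fly."""
        self._stages.append((name, fn))

    def remove_feature(self, name):
        """Withdraw a feature that is no longer needed."""
        self._stages = [(n, f) for n, f in self._stages if n != name]

    def features(self):
        return [n for n, _ in self._stages]

    def send(self, message):
        for _, fn in self._stages:                 # only installed features run
            message = fn(message)
        return message

platform = Platform()
platform.add_feature("tag", lambda m: m + " [tagged]")
platform.add_feature("upper", str.upper)
print(platform.send("hello"))        # prints: HELLO [TAGGED]
platform.remove_feature("upper")     # shrink the platform to what's needed
print(platform.send("hello"))        # prints: hello [tagged]
```

The hard research questions are precisely what this toy version glosses over: discovering which features exist, guaranteeing that the right ones are present before a message needs them, and removing them safely while the system is live.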