
Layered abstractions and Russian dolls

The layering of abstractions has served us well, but it's now generating the sorts of complexity it was designed to solve. Time for a re-think?

Anyone who's built a large piece of software knows that much of the effort goes into managing the complexity of the project: which other software a piece of code relies on, how to keep the various aspects separate, how to manage changes and upgrades, and so on. This isn't something that's got easier over time. For a given code size and style it has, as we've come to understand build processes and dependency management better; but code sizes have relentlessly increased to absorb that improved understanding, and modern practices don't make life any easier. Downloaded code, dynamic modules and classes, client-server architectures and the like all generate their own intrinsic complexity.

One of the biggest sources of complexity is the use of multiple applications, especially in enterprise systems. A typical e-commerce system, for example, will make use of a web server to present pages (which themselves might contain embedded JavaScript for client-side processing), a database to track orders and inventory, a procurement system to fulfil orders, and possibly a supply-chain management system to order new inventory. That's just the application. Then there'll be the operating system, a logging facility, a facilities management system, and a load of administrative tools and scripts. And the operating system may itself be virtualised, running as a guest within another, host operating system and hypervisor, each of which needs its own toolset.

The interactions between these tools can be mind-boggling. Someone once asked: who knows how to work the Apache web server? It sounds like a simple question -- any competent web manager? the main developers? -- but the sting in the tail is that Apache is very configurable: so configurable, in fact, that it's pretty much impossible to work out what a given combination of options will do (or, conversely, what combination of options to use to achieve a given effect). The interactions are just too complicated, and the web abounds with examples where interactions between (for example) the thread pool size, the operating system block size and the Java virtual machine parameters conspire to crash a system that looks as though it should be working fine. If you can't work one server properly -- one component of the system -- what hope is there of getting a complete system humming along?

Al Dearle and I have been talking about this for a while. The basic issue seems to be an interaction between decomposition and dependency: the complexity comes at the "seams" between the various sub-systems, and is magnified the more configurable the components on either side of a seam are. This matters because systems are becoming ever more finely decomposed: the moves to component software, software-as-a-service and the like all increase the number of seams. Al's image is that modern systems are like Russian dolls, where each supposedly independent component contains further components that influence the size and complexity of the component containing them. You can only simplify any individual piece so far, because it depends on so many other pieces.

Actually, a lot of the seams are now unnecessary anyway. Going back to the e-commerce example, the operating system goes to great pains to provide a process abstraction that keeps the components separate -- to stop faults in the database affecting the web server, for example.
Historically this made perfect sense: it prevented a single faulty process in a time-sharing system from affecting the processes of other users. Nowadays, however, it makes considerably less sense, for a number of reasons. Firstly, all the components are owned by a single nominal user (although there are still good reasons for separating the root user from the application user), so the security concerns are less pronounced. Secondly, all the components depend on each other, so a crash in the database will effectively terminate the web server anyway. (We're simplifying, but you get the idea.) Finally, there's a good chance that the web server, database and so on are each running in their own virtual machine, so there's only one "real" process per machine (plus all the supporting processes). The operating system is offering protection that isn't needed, because it's being provided (again) by the hypervisor running the virtual machines, and perhaps (again) by the host operating system(s) involved.

We also tend to build very flexible components (like Apache) that can deal with multiple simultaneous connections, keep users separate, and allow modules to be loaded and unloaded dynamically -- that behave like small operating systems, in other words, replicating the OS functionality again at application level. This is despite the fact that, in an enterprise configuration, you'll probably know in advance the modules to be loaded and have a single user (or a small user population) and a fixed set of interactions: the flexibility makes the component more complex for no net gain during operation. Although it might simplify configuration and evolution slightly, there are often other mechanisms for this: in a cloud environment one can spin up a replacement system in an evolved state and then swap the set of VMs over cleanly.

It's easy to think that none of this makes any difference on modern machines, but that's probably not the case. All these layers still need to be resourced; more importantly, they still need to be managed, maintained and secured, all of which takes time to do well -- with the result that it typically gets done badly (if at all).

Can we do anything about it? One thought is that the decomposition that makes thinking about systems and programming easier makes executing those systems more complex and fragile. In many cases, once a system is configured appropriately, flexibility becomes an enemy: it'll often be too complicated to re-configure or optimise in a live environment anyway. There may be a reason to have Russian dolls when designing a system, but once it's designed it's better to make each doll solid, to remove the possibility of them opening up and falling apart. So it's not decomposition that's the issue: it's decomposition manifested at run-time.

When we add new abstractions to systems, we typically add them in the form of components or libraries that can be called from other components. These components are often general, with lots of parameters, and work with multiple clients -- sound familiar? This is all good for the component-writer, as it lets the same code be re-used; but it bloats each system that uses the component, adding complexity and interactions. So one thought for tackling complexity is to change where decomposition manifests itself.
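
As a concrete (if toy) illustration of decomposition manifested at run-time, here's a small Haskell sketch -- every name in it is invented for the purpose, not taken from any real server -- of a "general" component that carries its flexibility around as data and re-checks it on every request, even though in a given deployment the answers never change.

```haskell
-- A sketch of run-time flexibility: the component is parameterised by a
-- configuration record and tests it on every call, although a given
-- installation fixes the answers once and for all.
module RuntimeFlexibility where

data ServerConfig = ServerConfig
  { useCompression :: Bool
  , useAuth        :: Bool
  , logRequests    :: Bool
  }

-- the general component: every request pays for every option, whether
-- or not this installation ever uses it
handle :: ServerConfig -> String -> String
handle cfg = logStep . authStep . compressStep
  where
    compressStep r = if useCompression cfg then "compressed(" ++ r ++ ")" else r
    authStep     r = if useAuth cfg        then "authed("     ++ r ++ ")" else r
    logStep      r = if logRequests cfg    then "logged("     ++ r ++ ")" else r

-- a typical enterprise deployment: the configuration is known in advance
-- and never changes, yet the tests above still run on every request
deployedHandle :: String -> String
deployedHandle = handle (ServerConfig True True False)
```

In a fixed deployment all of that flexibility is pure run-time overhead -- which is exactly the kind of overhead the next step tries to remove.
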
If, instead of placing new functions in the run-time system, we placed them into the compiler used to build the run-time, we could use compilation techniques to optimise-out the unnecessary functionality, so that what results is optimised for the configuration it's actually being placed in, rather than being general enough to represent any configuration. There's substantial work on these ideas in the fields of staged compilation and partial evaluation (for example MetaOCaml, Template Haskell, Flask and the like): the flexibility is manifested at compile-time, as compile-time abstractions which, in the course of compilation, are removed and replaced with inflexible -- but more efficient and potentially more dependable -- specialised code (the sketch at the end of this section shows the idea in miniature). Think of taking the source code for Linux, Apache and MySQL, accelerating them together at high speed, and getting out a single program that'd run on a bare machine, had nothing it didn't actually need, and had all the options for the various (conceptual) sub-systems set correctly to work together.

Don't believe it's possible? Neither do I. There's too much code, and especially too much legacy code, for this to work at enterprise (or even desktop) level. However, for embedded systems and sensor networks it's a different story. For these systems, every extra abstraction that makes the programmer's life easier is a menace if it increases the code size hitting the metal: there just isn't the memory. But there also isn't the legacy code base, and there is a crying need for better abstractions. So an approach to the Russian dolls that moves the abstractions out of the run-time and into the languages and compilers might work, and might considerably improve the robustness and ease of use of many of the systems we need to develop. It also works well with modern language technology, and with other trends like ever-more-specialised middleware, which removes bloat and overhead at the cost of generality. Keeping the former without sacrificing the latter seems like a worthwhile goal.
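
Returning to the toy handler above: this is roughly what a staged version looks like in Template Haskell. It's only a sketch of the staging idea under the same invented names, not of any particular system. The configuration is consumed while the code is being generated, so the deployed function contains no residual tests at all.

```haskell
{-# LANGUAGE TemplateHaskell #-}
-- The flexibility moved into compile time: a code generator takes the
-- configuration and emits a handler specialised to exactly that
-- configuration, with the unused options optimised away entirely.
module StagedFlexibility where

import Language.Haskell.TH

data ServerConfig = ServerConfig
  { useCompression :: Bool
  , useAuth        :: Bool
  , logRequests    :: Bool
  }

handleQ :: ServerConfig -> Q Exp
handleQ cfg = [| \req -> $(logStep (authStep (compressStep [| req |]))) |]
  where
    compressStep r = if useCompression cfg then [| "compressed(" ++ $r ++ ")" |] else r
    authStep     r = if useAuth cfg        then [| "authed("     ++ $r ++ ")" |] else r
    logStep      r = if logRequests cfg    then [| "logged("     ++ $r ++ ")" |] else r

-- In another module (Template Haskell's stage restriction) one writes
--
--   deployedHandle :: String -> String
--   deployedHandle = $(handleQ (ServerConfig True True False))
--
-- which expands at compile time to straight-line code along the lines of
--
--   \req -> "authed(" ++ ("compressed(" ++ req ++ ")") ++ ")"
--
-- with no configuration record, no tests and no logging code left behind.
```

The configuration record is the same in both sketches; the only difference is when it's consumed -- which is the essence of moving the decomposition out of the run-time and into the compiler.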

When not to return a library book

Beware of books bearing illnesses.

I recently read Hindenburg: the wooden titan, John Wheeler-Bennett's biography of the presidency of the man who ushered Hitler into power. It's an interesting book, written just after Hindenburg's death and so before the second world war and the realisation of exactly what had been brought forth. You can also find contemporary book reviews online, which are likewise fascinating in what they don't know. It really brings home AJP Taylor's comment (in The struggle for mastery in Europe) that the hardest thing about the study of history is to remember that events now long in the past once lay in the future.

What was unexpectedly fascinating was the bookplate in the front of the volume I borrowed from the university library, a copy published in 1937. (I think it's a first edition.) The top half is fairly standard, but the bottom part describes something you wouldn't expect to find in a library book: "A person shall not return any public library book which he knows to have been exposed to infection from a notifiable disease ... [including] smallpox, cholera, diphtheria, ..." And so on.

The list of diseases includes things against which we are now routinely vaccinated (diphtheria, tuberculosis), those we haven't seen in the UK in my lifetime (typhus), and those most people would now brush off with a course of antibiotics (flu, pneumonia). Except, of course, that at the time there was far less vaccination and no antibiotics at all, since mass production didn't start until the 1940s. So as well as being an observation of a period in history from a point in time at which its significance was only poorly understood, this book is a time capsule from a period when diseases were a far bigger and broader threat than they are now.

The post-modernisation of programming languages

Do we now have some post-modern programming languages?

Programming languages change all the time. It's never been the case that one could learn just one and then use it: programming is too rich and complicated for that to be a solution for the majority of programmers. However, we seem to be seeing a change in what we think of as programming languages and how they're used to build larger software systems.

At the risk of gross over-simplification (and ridicule), let's divide languages into various "eras". In the beginning, the early languages were built around some explicit notion of the machine on which they were running. Assemblers were devoted to a single machine code. Higher-level languages like Forth abstracted away from the machine code but still exposed the guts of the machine, and in particular its data bus width and memory access mechanisms. The next step in abstraction -- to pre-modernism -- was to provide a machine that was still recognisably a von Neumann system but that hid the details of the underlying machine code and architecture. C and Pascal fall into this category: their operations and abstractions are still largely those of the physical machine, but at a sufficient remove to achieve portability and a usable programming model. But we still have a view of the machine's underlying limitations. Modern languages like Perl, Java and Haskell offer a model of computation, and especially a model of memory, that's considerably removed from the underlying machine and enormously simplifies the programmer's task, at least for "suitable" programs. (Interestingly enough, languages don't fall into eras in the sequence of their design or popularity: Lisp considerably pre-dates C but is definitely modern using the terminology above.)

What's perhaps not obvious is that all these languages share a common design philosophy. They are all based on a small set of primitive notions combined with some ways of combining those notions, to get compound abstractions that resemble their primitive underpinnings. In C we get a small number of base types, some compound structs and unions, and pointers, and most interesting compound structures one builds will contain pointers all over the place. In Java we can build classes that create and combine objects; in Haskell we get lists and higher-order functions as the basic building blocks. The point is that the basic concepts are general, as are the composition operators, and the larger capabilities are in some sense a consequence of these initial choices and characterise the programs it's easy (or hard) to write.

This philosophy makes perfect sense, of course, which explains why it's so widespread. Programmers have to get to grips with a small set of basic concepts and composition methods. Everything else flows from this, and the extensions to the language -- libraries and the like -- will be the same as code one could write oneself: they're time-savers, but not fundamentally different from your own code. As a consequence the basic concepts have to be well-chosen to support the range of programs people will want to write. Some will be easier to write than others -- that's what makes the choice of language more than simply a matter of fashion -- but there's an acceptance that pretty much anything is writeable.

However, we're now seeing languages emerge that aren't like this. Instead they're very much targeted at specific domains and problem sets.
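
Before looking at those languages, it's worth making the compositional philosophy concrete. A minimal Haskell sketch (the function names are mine, not from any standard library): list numbering -- an example that will come up again with XSLT below -- is just ordinary code built from the language's primitives, so a programmer who wants a new numbering scheme writes a few lines rather than waiting for the language designer.

```haskell
-- In a "modernist" language, numbering schemes are compounds of the
-- usual primitives (lists, zips, functions as values), so new schemes
-- are user code rather than language features.
module Numbering where

-- number each item using a caller-supplied labelling scheme
numberWith :: (Int -> String) -> [String] -> [String]
numberWith label = zipWith (\n item -> label n ++ " " ++ item) [1 ..]

-- a couple of schemes, both written as ordinary user code
arabic, bracketedLetters :: Int -> String
arabic n           = show n ++ "."
bracketedLetters n = "(" ++ [toEnum (fromEnum 'a' + n - 1)] ++ ")"

-- > numberWith bracketedLetters ["apples", "pears"]
-- ["(a) apples", "(b) pears"]
```

A new scheme -- double brackets, Roman numerals, whatever a document needs -- is a few more lines of the same kind of code; nothing in the language or its processor has to change.
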
We've always had domain-specific languages (DSLs), of course, but the current crop seem to me to be rather different. They've abandoned the design philosophy of a small, basic set of abstractions and powerful composition, in favour of large basic abstractions and limited composition and extension. This significantly changes development, and the balance of power between the language designer and the programmer.

XSLT is the language that's brought this most to mind, and especially the transition from XSLT 1.0 to XSLT 2.0. First we have to defend the notion that XSLT is a programming language at all. I first thought of it as just a transformation system for documents, and so it is: but it's targeted at such a general set of applications that it takes on the flavour of a DSL. It's essentially a functional language with pattern-matching and recursion, whose main data structure is the XML document. It lets one write programs that are driven by these documents: another way to look at it is that it gives executable semantics to a set of XML tags, allowing them to perform computation.

XSLT's design doesn't start with a small set of primitives. Instead it provides a collection of tags that can be used to perform the common document operations. If you want to number a list, for example, XSLT contains tags that support the common numbering formats: numbers, letters with brackets, and so forth. What, you wanted something else? Well, you can probably do it, but it'll be indescribably hard, because you'll be writing a program in a language for which full computation is an afterthought. Instead, XSLT 2.0 adds new numbering formats that you can select using a helpful set of tags -- and this is done within the XSLT processor, not using the XSLT processor. Meaningful extension requires the intervention of the language designer.

This is a major philosophical change: a post-modern approach to language design. Don't focus on getting the core abstractions right: instead, build a system -- any system -- that demonstrates initial functionality, and then extend it with new features to meet new demands regardless of how they fit. There's no overarching conceptual scheme to the system. Instead it's judged purely on the functions it provides, regardless of how they fit together. (Larry Wall described Perl as post-modern, but I don't think he meant quite what I'm getting at here -- although there are similarities.)

Post-modernism works for three reasons. Firstly, no-one is expecting to write very large amounts of code in these systems. They're intended for sub-systems, not for systems themselves, and so the need for generality is limited. Secondly, there's a premium on speed of development rather than on maintenance, which in turn puts a premium on getting some result out quickly rather than on ensuring that, in the future, we can get any result out we need. The sub-systems are intended to change on a short timescale, so maintenance and extensibility are perceived to be less of an issue than agility. Thirdly, a lot of these languages target people who aren't programmers -- or at least don't think of themselves as programmers -- who are focused on something other than the code.

Post-modernism isn't wrong, and it appears in middleware too. It's also important to realise the benefits: it makes it easier to put together larger systems, and it increases the ambition of the projects one can undertake compared to having to build so much from scratch. But it may be misguided.
The people who spent the 1960s writing all the COBOL code that's still running today never thought it would still be being maintained -- and the consequences of that assumption have left code that's now almost impossible to change. Simple solutions have a tendency to grow into more complex ones -- "just one more feature and we're done" -- which can push the costs of inappropriate choices out along the project lifecycle.

More importantly, for researchers post-modernism is a seductive siren call that lets you step away from tackling some difficult choices. Instead of picking a small set of concepts, choose any set and then extend them, in any way, to build up your system. I think this obscures some of the insights one might otherwise gain from simplifying the set of concepts. I think it's also worth remembering that increasing the size and number of abstractions isn't without cost: not for the computer or the compiler, but for the programmer. The more programmers have to remember, and in particular the less well the things to be remembered fit together as a conceptual whole, the harder it is to use the system to its fullest advantage. People instead stay with small sub-sets of the language or -- worse -- settle for programs producing results that aren't quite those they want, just to stay within their language comfort zone. This sort of simplification -- by the programmer rather than the designer -- is a step backwards, and we need to be careful of its consequences.