If you try to do everything, you always end up doing nothing. That’s why Gray’s laws suggest searching for the twenty “big questions” in a field and then focusing in on the five that’ll generate the biggest return on the effort invested. So what are the five biggest open issues in programming for sensorised systems? Of course we should start with a big fat disclaimer: these are my five biggest open issues, which probably don’t relate well to anyone else’s — but that’s what blogs are for, right? :-) So here goes: five questions, each with an associated suggestion for a research programme.

1. Programming with uncertainty. This is definitely the one I feel is most important. I’ve mentioned before that there’s a mismatch between traditional computer science and what we have to deal with in sensor systems: the input is uncertain and often of very poor quality, but the output behaviour has to be a “best attempt” based on what’s available, and has to be robust against small perturbations due to noise and the like. Uncertainty is something that computers (and computer scientists) are quite bad at handling, so a major change has to happen. To deal with it we need to re-think the programming models we use, and the ways in which we express behaviour. For example, we could look at how programs respond to perturbation, or design languages in which perturbations have a small impact by design. A calculus of stable functions might be a good starting point, in which perturbations tend to die out over time and space while long-term changes are propagated. We might also look at how to program more effectively with Bayesian statistics, or with machine learning: turning things that are currently either libraries or applications into core constructs from which to build programs.
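To make that a little more concrete, here’s a sketch of what uncertainty as a core construct might start to look like: values that carry their variance around with them, with precision-weighted fusion standing in for the Bayesian machinery. (It’s in Python, and the Uncertain class is something I’ve invented purely for illustration, not an existing library.)

```python
from dataclasses import dataclass

@dataclass
class Uncertain:
    """A value that carries its uncertainty (as a variance) with it."""
    mean: float
    var: float

    def __add__(self, other: "Uncertain") -> "Uncertain":
        # Sum of independent uncertain values: means add, variances add.
        return Uncertain(self.mean + other.mean, self.var + other.var)

    def fuse(self, other: "Uncertain") -> "Uncertain":
        # Precision-weighted (Bayesian) fusion of two noisy estimates of
        # the same quantity: the result is more certain than either input,
        # and a small perturbation of either input moves the fused mean
        # by strictly less than the perturbation itself.
        w1, w2 = 1.0 / self.var, 1.0 / other.var
        return Uncertain((w1 * self.mean + w2 * other.mean) / (w1 + w2),
                         1.0 / (w1 + w2))

# Two noisy readings of the same temperature from different sensors:
a = Uncertain(21.3, 0.5)
b = Uncertain(22.1, 0.8)
print(a.fuse(b))    # Uncertain(mean=21.6..., var=0.3...)
```

The point isn’t the arithmetic, which is trivial; it’s that robustness to small perturbations becomes a property of the constructs themselves rather than something each program has to re-establish.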
2. Modelling adaptive systems as a whole. We’ve had a huge problem getting systems to behave according to specification; now we propose that they adapt in response to changing circumstances. The space of possible stimuli and responses is clearly too large for exhaustive testing, or for formal model-checking, so correctness becomes a major issue. What we’re really interested in, of course, isn’t so much specifying what happens as how what happens changes over time and with context. Holistic models are common in physics but uncommon in computer science, where more discrete approaches (like model checking) have been more popular. It’s easy to see why this is the case, but a small-scale, pointwise formal method doesn’t feel appropriate to the scale of the problem. Reasoning about a system as a whole means re-thinking how we express both specifications and programs. The difference in target is important too: we don’t need to capture all the detail of a program’s behaviour, just those aspects that relate to properties like stability, response time and accuracy — a macro method for reasoning about macro properties, not something that gets lost in the details. Dynamical systems might be a good model, at least at a conceptual level, with adaptation seen as a “trajectory” through the “space” of acceptable parameter values. At the very least this makes adaptation an object of study in its own right, rather than something that happens within another, less well-targeted model.

3. Modelling complex space- and time-dependent behaviours. Reasoning systems and classifiers generally deal only with instants: things that are decided by the state of the system now, or by what immediately follows from now. In many cases what happens is far richer than this, and one can make predictions (or at least calculate probabilities) about the future based on classifying a person or entity as being engaged in a particular process. In pervasive computing this manifests itself in the ways people move around a space, the services they access preferentially in some locations rather than others, and so forth. These behaviours are closely tied up with the way people move and the way their days progress: complex spatio-temporal processes giving rise to complex behaviours. The complexity comes from how we divide up people’s actions, and from how the possibilities branch into a huge combinatorial range of alternatives — not all of which are equally likely, which is something we can leverage. A first step towards addressing this would be to look at how we represent real-world spatio-temporal processes with computers. Of course we represent such processes all the time as programs, but (linking back to point 1 above) the uncertainties involved are such that we need to think about these things in new ways. Given a probabilistic definition of the potential future evolutions, we need to be able to express behaviours that synthesise the “best guesses” we can make, while simultaneously using the data we actually observe to validate or refute our predictions and refine our models. The link between programming and the modelling that underlies it looks surprisingly intimate.
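As a toy example of the kind of thing I mean (and it really is a toy: the class is invented for this post, and a first-order Markov chain is far too weak for real behaviours), here’s a Python sketch that learns where someone tends to go next, offers a probability distribution over the alternatives, and refines itself against what is actually observed:

```python
from collections import defaultdict

class MovementModel:
    """A first-order Markov model of movement between named locations."""

    def __init__(self):
        # counts[a][b] = number of times we've seen someone go from a to b
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, here: str, then: str) -> None:
        # Refine the model with what actually happened.
        self.counts[here][then] += 1

    def predict(self, here: str) -> dict:
        # Probability distribution over where we expect someone to go next.
        outgoing = self.counts[here]
        total = sum(outgoing.values())
        return {loc: n / total for loc, n in outgoing.items()} if total else {}

model = MovementModel()
trace = ["office", "lab", "office", "coffee", "office", "lab"]
for here, then in zip(trace, trace[1:]):
    model.observe(here, then)

print(model.predict("office"))   # {'lab': 0.666..., 'coffee': 0.333...}
```

The interesting research question is what replaces the dictionary of counts: something that captures the branching, context-dependent structure of real days while remaining something we can program against.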
4. Rich representations of linked data. Sensors generate a lot of data, and much of it has long-term value, if only for verification and later re-study. Keeping track of it all is going to become a major challenge. It’s not something that the scientists for whom it’s collected are generally very good at — and why should they be, given that their interests are in the science and not in data management? But the data has to be kept, has to be retrievable, and has to be associated with enough metadata to make it properly and validly interpretable in the future. Sensor mark-up languages like SensorML are a first step, but only a first step. There’s also the issue of the methodology by which the data was collected, and especially (returning to point 2) whether the behaviour of the sensors was consistent with gaining a valid view of the phenomena of interest. That means linking data to process descriptions, or to code, so that we can track back through the provenance to ensure integrity. Then we can start applying reasoners to classify and derive information automatically from the data, secure in the knowledge that we have an audit trail for the science.

5. Making it easier to build domain-specific languages for real. A lot has been said about DSLs, much of it negative: if someone has learned C (or Java, or Fortran, or Matlab, or Perl, or…) they won’t want to learn something else just to work in a particular domain. This argument holds that it’s therefore more appropriate to provide advanced functions as libraries accessed from a common host language (or a range of languages). The counter-argument is that libraries only work around the edges of a language, and can’t provide the strength of optimisation, type-checking and new constructs needed. I suspect there’s truth on both sides, and I also suspect that many power users would gladly learn a new language if it really matched their domain and really gave them leverage. Building DSLs is too complicated at the moment, though, especially for real-world systems that need to run with reasonable performance on low-grade hardware. A good starting point might be a system that allows libraries to be wrapped up with language-like features — the sort of role Tcl was intended to play, but with more generality in terms of language constructs and types. A simpler language-design framework would facilitate work on new languages (as per point 1 above), and would let us search for modes of expression closer to the semantic constructs we think are needed (per points 2 and 3): starting from the semantics and deriving a language, rather than vice versa.
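To finish with a sketch of what I mean by wrapping a library up with language-like features (Python again, and again invented purely for illustration): a couple of ordinary functions become the commands of a tiny, Tcl-ish language, one command word plus arguments per line.

```python
def make_language(commands):
    """Wrap a dictionary of ordinary functions up as a tiny command language."""
    def run(program: str) -> None:
        for line in program.strip().splitlines():
            word, *args = line.split()     # Tcl-ish: command word, then arguments
            commands[word](*args)          # dispatch to the underlying library call
    return run

# The "library": ordinary functions we'd like domain experts to drive directly.
def sample(sensor, period):
    print(f"sampling {sensor} every {period}s")

def log(message):
    print(f"log: {message}")

run = make_language({"sample": sample, "log": log})

run("""
    log starting
    sample temperature 30
    sample humidity 60
""")
```

Everything hard (types, checking, optimisation, decent performance on low-grade hardware) is missing from this sketch, of course, and that is exactly where the research lies.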