
Posts about sensor networks (old posts, page 1)

Sensor and sense-ability

I'm talking at the BCS Edinburgh branch tonight about sensing and sensor-driven systems.

It has long been a maxim in computing that incorrect inputs can acceptably give rise to unacceptable outputs: "garbage in, garbage out".

This is ceasing to be true, and many classes of systems must behave predictably even in the face of inputs containing substantial garbage -- although researchers delicately use terms like "imprecise" or "unstructured" instead of "garbage".

In this talk we discuss some approaches to managing the problem of imprecise, inaccurate, untimely and partial inputs in the context of pervasive and sensor-driven systems, and suggest that we need to re-think radically the way we build software and represent decision-making in these environments.

Details of the venue are available here.

Call for papers: Software Engineering for Adaptive and Self-Managing Systems

Papers are invited in all aspects of software engineering for adaptive systems, for the SEAMS symposium in Hawaii in May 2011. The deadline is now quite close.

CALL FOR PAPERS

6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2011) (Sponsored by ACM SIGSOFT and IEEE TCSE)

Waikiki, Honolulu, USA 23-24 May 2011

http://2011.seams-symposia.org/

THEME

An increasingly important requirement for a software-based system is the ability to self-manage by adapting itself at run time to handle changing user needs, system intrusions or faults, a changing operational environment, and resource variability. Such a system must configure and reconfigure itself, augment its functionality, continually optimize itself, protect itself, and recover itself, while keeping its complexity hidden from the user.

The topic of self-adaptive and self-managing systems has been studied in a large number of specific areas, including software architectures, fault-tolerant computing, robotics, control systems, programming languages, and biologically-inspired computing.

The objective of this symposium is to bring together researchers and practitioners from many of these diverse areas to engage in stimulating dialogue regarding the fundamental principles, state of the art, and critical challenges of self-adaptive and self-managing systems. Specifically, we intend to focus on the software engineering aspects, including the methods, architectures, algorithms, techniques, and tools that can be used to support dynamic adaptive behavior, including self-adaptive, self-managing, self-healing, self-optimizing, self-configuring, and autonomic software.

TOPICS OF INTEREST

We are interested in submissions from both industry and academia on all topics related to this important area. These include, but are not limited to:

  • formal notations for modeling and analyzing software self-adaptation
  • programming language support for self-adaptation
  • reuse support for self-adaptive systems (e.g., patterns, designs, code, etc.)
  • design and architectural support for the self-adaptation of software
  • algorithms for software self-adaptation
  • integration mechanisms for self-adaptive systems
  • evaluation and assurance for self-* systems (e.g., run-time verification)
  • modeling and analysis of adaptive systems (e.g., run-time models, cost-benefit analysis, architectural styles and patterns, requirements)
  • decision-making strategies for self-adaptive and self-organizing systems
  • support for run-time monitoring (for requirements, design, performance, etc.)
  • model problems and exemplars
The following application areas are of particular interest:
  • mobile computing
  • dependable computing
  • autonomous robotics
  • adaptable user interfaces
  • service-oriented systems
  • autonomic computing

PAPER SUBMISSION DETAILS

We are soliciting three types of papers: research papers and experience reports (up to 10 pages, ACM SIG Proceedings Format) and position papers for new ideas (up to 6 pages, ACM SIG Proceedings Format). Research papers should clearly describe the technical contribution and how the work has been validated. Experience reports should describe how an existing technique has been applied to real-world examples, including lessons learned from the experience. New idea papers provide an opportunity to describe novel and promising ideas and/or techniques that might not have been fully validated. All submitted papers will be reviewed by at least three program committee members. Papers must not have been previously published or concurrently submitted elsewhere. The accepted papers will appear in the symposium proceedings that will be published as ACM conference proceedings.

IMPORTANT DATES

Submission deadline: 12th December 2010

Author notification: 15th February 2011

Camera ready copy: 1st March 2011

SYMPOSIUM ORGANIZATION

General Chair: Holger Giese, HPI/Univ. of Potsdam, Germany

Program Chair: Betty H.C. Cheng, Michigan State University, USA

Publicity Chairs: Basil Becker, HPI/Univ. of Potsdam, Germany; Thomas Vogel, HPI/Univ. of Potsdam, Germany

Program Committee:

  • Colin Atkinson University of Mannheim, Germany
  • Robert Baillargeon Panasonic Automotive, USA
  • Luciano Baresi Politecnico di Milano, Italy
  • Nelly Bencomo University of Lancaster, UK
  • Yuriy Brun University of Washington, USA
  • Vinny Cahill Trinity College Dublin, Ireland
  • Shang-Wen Cheng Jet Propulsion Laboratory, USA
  • Simon Dobson University of St. Andrews, UK
  • Gregor Engels University of Paderborn, Germany
  • Cristina Gacek City University, UK
  • David Garlan Carnegie Mellon University, USA
  • Kurt Geihs University of Kassel, Germany
  • Carlo Ghezzi Politecnico di Milano, Italy
  • Svein Hallsteinsen SINTEF, Norway
  • Paola Inverardi University of L'Aquila, Italy
  • Jean-Marc Jezequel IRISA-INRIA, France
  • Gabor Karsai Vanderbilt University, USA
  • Jeff Magee Imperial College London, UK
  • Nenad Medvidovic University of Southern California, USA
  • John Mylopoulos University of Trento, Italy
  • Hausi Müller University of Victoria, BC, Canada
  • Sooyong Park Sogang University, S. Korea
  • Anna Perini FBK-IRST, Center for Information Technology, Italy
  • Masoud Sadjadi Florida International University, USA
  • Onn Shehory IBM-Haifa Research, Israel
  • Roy Sterritt University of Ulster, UK
  • Danny Weyns Katholieke Universiteit Leuven, Belgium
  • Andrea Zisman City University, UK

Steering Committee:

  • Betty H.C. Cheng Michigan State University, USA
  • Rogério de Lemos University of Kent, UK
  • David Garlan Carnegie Mellon University, USA
  • Holger Giese HPI/Univ. of Potsdam, Germany
  • Marin Litoiu York University, Canada
  • Jeff Magee Imperial College London, UK
  • Hausi Müller University of Victoria, Canada
  • Mauro Pezzè University of Lugano, Switzerland, and University of Milan Bicocca, Italy
  • Richard Taylor University of California, Irvine, USA

FURTHER INFORMATION

Symposia-related email should be addressed to seams2011@seams-symposia.org

Forth for sensors?

Most sensor systems are programmed using C: compact and well-known, but low-level and tricky to get right when things get compact and complex. There have been several proposals for alternative languages from across the programming language research spectrum. I haven't heard anyone mention Forth, though, and it's worth considering -- even if only as a target for other languages.

Many people will never have encountered Forth, a language that's enjoyed up-and-down popularity for over four decades without ever hitting the mainstream. Sometimes touted as an alternative to Basic, it was even the introductory language on an early-1980s home computer.

Forth has a number of characteristics that are completely different to the vast majority of modern languages:

  • It uses an explicit data stack and Reverse-Polish notation uniformly throughout the language (a minimal example follows this list)
  • There's no type system. Everything is represented pretty much using addresses and integers. The programmer is on her own when building complex structures
  • It is a threaded interpreter where every construct in the language is a "word". Composing words together generates new words, but (unlike in a conventional interpreter) the definitions are compiled efficiently, so there's an immediacy to things without crippling performance overheads
  • A standard system usually mixes its "shell" and "compiler" together, so one can define new words interactively which get compiled immediately
  • There's a small kernel of machine-code (or C) routines, but...
  • The compiler itself -- and indeed the vast majority of the system -- can be written in Forth, so you can extend the compiler (and hence the language) with new constructs that have first-class status alongside the built-in words. There's typically almost no overhead of programmer- versus system-defined words, since they're all written in the same (fast) language
  • If you're careful, you can build a cross-compiler that will generate code for a different target system: just port the kernel and the compiler should be re-usable to generate code that'll run on it. (It's not that simple, of course, as I'm finding myself...)
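To make that concrete, here's the flavour of an interactive Forth session. This is a minimal sketch in standard Forth; the words SQUARED and CUBED are illustrative names of my own, not part of any particular system:

    \ Reverse-Polish arithmetic on the explicit data stack
    2 3 + .                          \ pushes 2 and 3, adds them, prints 5

    \ Define a new word interactively; it is compiled immediately
    : SQUARED  ( n -- n*n )  DUP * ;
    7 SQUARED .                      \ prints 49

    \ New words compose exactly like built-in ones
    : CUBED  ( n -- n*n*n )  DUP SQUARED * ;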
So Forth programs don't look like programs in other languages. There's no real phase distinction between compilation and run-time, since everything's mixed in together, but that has the advantage that you can write new "compiler" words to make it easier to write your "application" words, all within the same framework and set of capabilities. You don't write applications so much as extend the language itself towards your problem. That in turn means you can view Forth either as low-level -- a glorified assembler -- or very high-level in terms of its ability to define new syntax and semantics.

That probably sounds like a nightmare, but suspend judgment for a while.....

One of the problems that concerns a lot of sensor networks people is the programming level at which we have to deal with systems. Typically we're forced to write C on a per-node basis: program the individual nodes, and try to set it up so that the network behaves, as a whole, in an intended way. This is clearly possible in many cases, and clearly gets way more difficult as things get bigger and more complex, and especially when we want the network to adapt to the phenomena it's sensing, which often requires decisions to be made on a non-local basis.

Writing a new language is a big undertaking, but a substantially smaller one with Forth. It's possible to conceive of new language structures (for example a construct that generates moving averages) and implement them quickly and simply. This might just be syntactic sugar, or might be something rather more ambitious in terms of control flow. Essentially you can extend the syntax and the semantics of Forth so that it "understands", at the same level as the rest of the compiler, the new construct you want to use.
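As a hedged sketch of the simplest version of that idea, here is a plain (non-compiler) word that folds a new reading into a w-sample running average using integer arithmetic; the name MOVING-AVG and the factoring are mine, not taken from any existing system:

    \ avg' = (x + avg*(w-1)) / w  -- fold a new reading into a running average
    : MOVING-AVG  ( x avg w -- avg' )
      DUP >R      \ stash the window size w on the return stack
      1- *        \ avg * (w-1)
      +           \ x + avg*(w-1)
      R> / ;      \ divide by w to get the new average

    \ Example: window of 4, current average 20, new reading 24
    24 20 4 MOVING-AVG .   \ prints 21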

Which is interesting enough, but what makes it more interesting for sensors is the structure of these compiler words. Typically they're what are known as IMMEDIATE words, which means they execute when encountered while compiling a word. That may sound odd, but what it means is that the compiler word executes and generates code, rather than being compiled itself. And that means that, when used with a cross-compiler, the new language constructs don't add to the target system's footprint, because their action all happens at compile-time to generate code that's expressed in terms of lower-level words. In core Forth, constructs like IF and LOOP (conditionals and counted loops respectively) do exactly this: they compile low-level jumps (the words (BRANCH) and (?BRANCH), which perform unconditional and conditional branches respectively) implementing the higher-level structured-programming abstraction.
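To give a flavour of what such a compiling word looks like, here is roughly how IF and THEN can be defined in many textbook Forths. This is a hedged sketch rather than any particular system's actual definition: real systems differ in the name of the branch primitive ((?BRANCH) here, following the usage above) and in whether branch targets are absolute addresses (as assumed here) or relative offsets:

    \ IF compiles a conditional branch with a placeholder target;
    \ THEN later patches the placeholder to point just past the IF body.
    : IF    ( -- addr )      \ compile-time: leave the address of the target cell
      POSTPONE (?BRANCH)     \ compile the conditional-branch primitive
      HERE 0 , ; IMMEDIATE   \ remember where the target will go, compile a placeholder

    : THEN  ( addr -- )      \ compile-time: resolve the forward branch
      HERE SWAP ! ; IMMEDIATE

Because both words are IMMEDIATE, all of this runs while the user's word is being compiled; only the branch primitive and its inline target end up in the compiled code.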

A lot of modern languages use virtual machines as targets for their compilers, and a lot of those VMs are stack machines -- Forth, in other words. If we actually use Forth as the VM for a compiler, we have an extensible VM in which we can define new constructs, so we can evolve the VM better to fit the language that targets it. (There are also some very interesting, close parallels between Forth code and the abstract syntax trees used to represent code within compilers, but that's something I need to think about a bit more before I write about it.)

All this is rather speculative, and doesn't really address the core problem of programming a network rather than a device, but it does provide a device-level platform that might be more amenable to language research. I've been experimenting with Forth for this purpose, and have a prototype system -- Attila, an abstract, re-targetable threaded interpreter that's fully cross-compilable -- in the works, but not quite ready to see the light of day. It's taught me a lot about the practicalities of threaded interpreters and cross-compilers. This is a strand of language design that's been almost forgotten, and I think it deserves more of a look.

Things that won't change

Technology always advances, and in most areas the rate of change is also increasing all the time. But there are some areas where technological changes either happen only slowly, or even go into reverse. Not something we're used to in computer science, but it's a feature of sensor network programming: what are the challenges that technology won't solve for us?

Moore's law has been a defining feature of computers for the past decades. By and large computing power has doubled every 18 months at constant price; alternatively, the cost of a unit of computing power has halved in the same period. The effects of this on user experience have been plain to see.

Within computer science, Moore's law has had an effect on research directions too. In starting on a PhD, a student can work on a problem that's at the edge of the performance envelope of whatever class of machine she is targeting -- cellphone, laptop, desktop or server -- secure in the knowledge that, by the time she's coming to an end, the power available to that machine class will have quadrupled. This doesn't open up every problem, of course -- a four-times speed-up on an NP-hard search problem might still leave it largely intractable -- but in fields such as middleware, software tools, language design and the like, it's enough to overcome many issues.

It's therefore something of a shock to come to sensor networks and similar systems, because I suspect these systems aren't subject to Moore's law in the usual way.

In some ways, the situation on such small systems is actually better than in desktops and enterprise computing. At the higher end, we're already hitting at least the suspicion that the performance increases in individual cores will soon start to flatten out. Multicore processors allow us to keep increasing performance, but at the cost of vastly complicating the sorts of programming needed in order to keep all those cores occupied. Sensor motes are still single-core and typically aren't using state-of-the-art processes at that, so there's still plenty of room for manoeuvre.

But it's easy to forget that while the cash cost of a unit of processing power has decreased, the power cost of that unit hasn't decreased by nearly as much (and may actually have increased). Those twice-as-powerful chips eighteen months on typically burn significantly more power than their predecessors. You only have to look at the size of heatsinks on chips to realise that there's a lot of heat being dissipated.

So for a sensor network, which is using a battery or scavenging for power, increasing the processor power will almost certainly decrease lifetime, and that's not a trade-off designers will accept. Battery, scavenging and renewable power sources like solar cells aren't subject to Moore's law: their growth curves are those of physics and traditional engineering, not those of IT systems. Ten years ago my cellphone went for three days without a charge; my new HTC Hero lasts no more than two days, even if I turn off the data services and wifi. The extra compute capability comes at a severe power cost.

In many sensor applications, the trade-off will actually be in reverse. Given the choice, a designer might opt for two older, less capable but less power-hungry processors over one more powerful but more hungry. Two motes can provide more coverage, or more robustness, or both.

But this exposes a real programming challenge, since it implies that we're going to have to get used to building modern, open, adaptive software on machines whose capabilities are similar to those of a mid-1980's vintage home computer -- and which might in fact even decrease over time, since the driving forces are pushing for coverage, lifetime and redundant replication. The performance of a network in aggregate might still increase, of course, but that still means that we have to extract extra performance from co-ordinating distributed processors rather than from improving individual nodes. The history of distributed parallel processing should warn us not to be sanguine about that prospect.

Actually, though, the challenge will do us good. Modern machines encourage sloppy over-engineering and over-generalisation -- building frameworks for situations that we anticipate but which might never occur. Targeting small machines will change this, and instead encourage us to build software that's fit for immediate purpose, and that's built to be evolved and extended over time alongside changing requirements and constraints. Building evolution into the core of the system will make for better engineering in the long run.

Programming with limited certainty

Sensor networks are all about uncertainty: if the sensor says it's 20°C out there, then it might be 20°C plus-or-minus half a degree or so (limited precision); or it might be some different temperature, and the sensor's just reported a duff value for some reason (limited accuracy). By contrast, computers most definitely aren't about uncertainty, a fact enshrined in the famous maxim "garbage in, garbage out". What does this mean for our ability to build really large, robust and flexible sensor networks?

All the really foundational models of computing -- λ calculus, Turing machines and the like -- pretty much reflect this notion that input is correct in some sense, and if it's wrong then that's an error to be corrected outside the computational system. That seems to mean that the computational system can't itself either tolerate or express the notions of limited certainty -- precision and accuracy -- that lie at the core of sensor networks (and a lot of other modern systems, of course). That suggests to me that there might be a problem at the heart of computer science as we currently formulate it: it isn't a realistic model of the world we're trying to compute over.

In some ways this is neither surprising nor threatening. Any mathematical or computational model is only a simplified abstraction of the real world, for which we often have to make staggeringly bold simplifications if we're to get anywhere. We should however always be prepared to challenge the validity and necessity of these simplifications, and that's what I'd like to do here.

As far as validity is concerned, the simplification is quite patently invalid when it comes to any system that operates with real-world data: some of it is bound to be "wrong" in some sense. This isn't the same as being tolerant of mistakes, such as when someone presses the delete key by mistake: that's an action that certainly happened and to which the system responded correctly, albeit "wrongly" from the user's perspective. Interesting problem, but different: we're talking here about responding to inherently erroneous input -- the delete key seeming to press itself, if you like.

Necessity, then: is it necessary to handle computation in this way? Clearly not: we can easily conjecture a computational model that's more tolerant of input with limited certainty.

Consider precision first. If the input is only known to a limited precision, then we don't want that error margin to cause enormous errors. If we have a function $latex f$, then we want $latex f$ to exhibit a tolerance of imprecision such that $latex \left| \delta x \right| < tol_x \Rightarrow \left| f(x + \delta x) - f(x) \right| < s \left| \delta x \right|$ for some scaling factor $latex s < 1$: $latex f$ doesn't cause errors to blow up in unpredictable ways. A lot of functions behave in exactly this way: for example, in a sliding-window average function $latex f_w(\overline{x}, x) = \frac{x + \overline{x}(w - 1)}{w}$ for an average $latex \overline{x}$ computed from $latex w$ recent observations, we have that $latex s = \frac{1}{w}$. Small errors therefore perturb the result significantly less than the input is perturbed. If the errors are uniformly distributed, the function should converge on the "true" value.
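To spell out where that scaling factor comes from, a one-line check using the definition above:

    $latex \left| f_w(\overline{x}, x + \delta x) - f_w(\overline{x}, x) \right| = \left| \frac{(x + \delta x) + \overline{x}(w - 1)}{w} - \frac{x + \overline{x}(w - 1)}{w} \right| = \frac{\left| \delta x \right|}{w}$

so a perturbation $latex \delta x$ in the newest reading is scaled down by exactly $latex s = \frac{1}{w}$.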

Conversely, a large, accurate new observation will perturb the average only slowly, so large step-changes will be detected only slowly. It's hard to distinguish such a change when it first happens from an inaccurate reading. There are various ways of dealing with this, such as using a weighted sliding window with non-linear weighting.

This is a rather topological idea. Rather than simply mapping points in an input space (such as temperature) to an output space (average temperature over the past few minutes), we're also requiring that the mapping take elements close in the input space to elements close in the result space: we require that it be a contraction mapping. Building systems from contraction mappings, perhaps combined with contraction-preserving operators, yields systems that are robust to small errors in precision from their sensors.

Of course not all systems are actually like this, and in many cases we want rapid changes to be detected quickly and/or propagated. The point, perhaps, is that this is a choice we should make rather than a consequence of choosing a particular model of computation. There might actually be a model of computation lurking about here, in which we define functions coupled with a model of how their input and output errors should behave. At the very least, this yields systems in which we can predict the consequences of errors and imprecisions, which is a major challenge to deal with at present.