Sensor networks are all about uncertainty: if the sensor says it’s 20°C out there, then it might be 20°C plus-or-minus half a degree or so (limited precision); or it might be some different temperature, and the sensor’s just reported a duff value for some reason (limited accuracy). By contrast, computers most definitely *aren’t* about uncertainty, a fact enshrined in the famous maxim “garbage in, garbage out”. What does this mean for our ability to build really large, robust and flexible sensor networks?

All the really foundational models of computing — λ calculus, Turing machines and the like — pretty much reflect this notion that input is correct in some sense, and if it’s wrong then that’s an error to be corrected outside the computational system. That seems to mean that the computational system can’t itself either tolerate or express the notions of limited certainty — precision and accuracy — that lie at the core of sensor networks (and a lot of other modern systems, of course). That suggests to me that there might be a problem at the heart of computer science as we currently formulate it: it isn’t a realistic model of the world we’re trying to compute over.

In some ways this is neither surprising nor threatening. Any mathematical or computational model is only a simplified abstraction of the real world, for which we have to make often staggeringly bold simplifications if we’re to get anywhere. We should however always be prepared to challenge the *validity* and *necessity* of these simplifications, and that’s what I’d like to do here.

As far as validity is concerned, the simplification is quite patently *invalid* when it comes to any system that operates with real-world data: some of it is *bound* to be “wrong” in some sense. This isn’t the same as being tolerant of mistakes, such as when someone presses the delete key by mistake: that’s an action that certainly happened and to which the system responded correctly, albeit “wrongly” from the user’s perspective. Interesting problem, but a different one: we’re talking here about responding to inherently erroneous input — the delete key seeming to press itself, if you like.

Necessity, then: is it necessary to handle computation in this way? Clearly not: we can easily conjecture a computational model that’s more tolerant of input with limited certainty.

Consider precision first. If the input is only known to a limited precision, then we don’t want that error margin to cause enormous errors in the output. If we have a function \(f\), then we want \(f\) to exhibit a tolerance of imprecision such that \(\left| \delta x \right| < tol_x \Rightarrow \left| f(x + \delta x) - f(x) \right| < s \left| \delta x \right|\) for some scaling factor \(s < 1\), so that \(f\) doesn’t cause errors to blow up in unpredictable ways. A lot of functions behave in exactly this way: for example, in a sliding-window average function \(f_w(\overline{x}, x) = \frac{x + \overline{x}(w - 1)}{w}\) for an average \(\overline{x}\) computed from the \(w\) most recent observations, we have that \(s = \frac{1}{w}\). Small errors therefore perturb the result significantly less than they perturb the input. If the errors are uniformly distributed, the function should converge on the “true” value.
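To make the scaling concrete, here’s a minimal Python sketch of that sliding-window update, showing that a half-degree error in one reading perturbs the average by only \(\delta x / w\) (the function and variable names are mine, purely illustrative):

```python
def sliding_average(avg, x, w):
    """Fold a new observation x into a running average computed
    over (roughly) the w most recent observations."""
    return (x + avg * (w - 1)) / w

w = 10
avg = 20.0       # current average, degrees C
delta = 0.5      # an imprecise reading, off by half a degree

exact = sliding_average(avg, 20.0, w)
noisy = sliding_average(avg, 20.0 + delta, w)

# The output error is the input error scaled by s = 1/w:
print(abs(noisy - exact))   # approximately delta / w = 0.05
```

The contraction factor falls straight out of the algebra: the new reading enters the average with coefficient \(1/w\), so any error riding on it is scaled by the same amount.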

Conversely, a large but accurate new observation will also perturb the average only slowly, so genuine step changes are detected late. When such a change first happens, it’s hard to distinguish from an inaccurate reading. There are various ways of dealing with this, such as using a weighted sliding window with non-linear weighting.
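One simple non-linear weighting, sketched below under my own assumptions (the heuristic, the threshold and the weights are all illustrative choices, not a standard algorithm): an isolated outlier gets the usual small weight, but two successive readings that agree with each other while disagreeing with the average are treated as a genuine step change and adopted quickly.

```python
def robust_average(avg, x, prev, w=10, threshold=1.0):
    """Non-linearly weighted update: damp an isolated outlier by the
    usual factor 1/w, but weight the new reading heavily when it is
    corroborated by the previous raw reading 'prev'."""
    if abs(x - avg) > threshold and abs(x - prev) <= threshold:
        weight = 0.5        # persistent change: adopt it quickly
    else:
        weight = 1.0 / w    # normal case: contraction with s = 1/w
    return avg + weight * (x - avg)

# A step change from 20 to 25: the first reading moves the average
# only slightly; the second, which corroborates it, moves it a lot.
a1 = robust_average(20.0, 25.0, prev=20.0)   # -> 20.5
a2 = robust_average(a1, 25.0, prev=25.0)     # -> 22.75
```

The price, of course, is that two consecutive *erroneous* readings that happen to agree will also be believed, which is exactly the precision/accuracy trade-off being discussed.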

This is a rather topological idea. Rather than simply mapping points in an input space (such as temperature) to an output space (average temperature over the past few minutes), we’re also requiring that the mapping take elements close in the input space to elements close in the result space: we require that it be a *contraction mapping*. Building systems from contraction mappings, perhaps combined with contraction-preserving operators, yields systems that are robust to small errors in precision from their sensors.
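The “contraction-preserving operators” point can be illustrated with ordinary function composition: if \(f\) and \(g\) contract with factors \(s_f\) and \(s_g\), their composite contracts with factor \(s_f \, s_g\), so building pipelines out of contractions keeps the whole system robust. A small sketch (the two example functions and their factors are assumptions, chosen only to make the arithmetic visible):

```python
def compose(f, g):
    """Compose two functions; if f and g are contractions with factors
    s_f and s_g, then their composite contracts with factor s_f * s_g."""
    return lambda x: f(g(x))

# Two illustrative contractions on the reals, factors 1/2 and 1/3:
half  = lambda x: x / 2 + 3
third = lambda x: x / 3 - 1

h = compose(half, third)

# Perturb the input by 0.6 and observe the output perturbation:
d = abs(h(10.0 + 0.6) - h(10.0))   # approximately 0.6 * (1/2) * (1/3) = 0.1
```

Since \(s_f s_g \le \min(s_f, s_g)\) when both factors are below one, composition never makes error behaviour worse, which is what makes this a plausible building block for larger systems.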

Of course not all systems *are* actually like this, and in many cases we *want* rapid changes to be detected quickly and/or propagated. The point, perhaps, is that this is a *choice we should make* rather than a *consequence* of choosing a particular model of computation. There might actually be a model of computation lurking about here, in which we define functions coupled with a model of how their input and output errors should behave. At the very least, this yields systems in which we can predict the consequences of errors and imprecision, something that’s a major challenge at present.