Technology always advances, and in most areas the rate of change is itself increasing all the time. But there are some areas where technological change either happens only slowly or even goes into reverse. That's not something we're used to in computer science, but it's a feature of sensor network programming: what are the challenges that technology won't solve for us?

Moore's law has been a defining feature of computing for the past few decades. By and large, computing power has doubled every 18 months at constant price; put another way, the cost of a unit of computing power has halved in the same period. The effects of this on user experience have been plain to see. Within computer science, Moore's law has shaped research directions too. In starting a PhD, a student can work on a problem that's at the edge of the performance envelope of whatever class of machine she is targeting — cellphone, laptop, desktop or server — secure in the knowledge that, by the time she's coming to the end, the power available to that machine class will have quadrupled. This doesn't open up every problem, of course — a four-times speed-up on an NP-hard search problem might still leave it largely intractable — but in fields such as middleware, software tools, language design and the like, it's enough to overcome many issues.

It's therefore something of a shock to come to sensor networks and similar systems, because I suspect these systems aren't subject to Moore's law in the usual way. In some ways the situation on such small systems is actually better than in desktop and enterprise computing. At the higher end, there's already at least a suspicion that the performance increases in individual cores will soon start to flatten out. Multicore processors let us keep increasing performance, but at the cost of vastly complicating the sort of programming needed to keep all those cores occupied. Sensor motes are still single-core, and typically aren't built on state-of-the-art fabrication processes at that, so there's still plenty of room for manoeuvre.

But it's easy to forget that, while the cash cost of a unit of processing power has fallen, the power cost of that unit hasn't fallen by nearly as much (and may actually have risen). Those twice-as-powerful chips eighteen months on typically burn significantly more power than their predecessors: you only have to look at the size of the heatsinks on chips to realise that there's a lot of heat being dissipated. For a sensor network, which is running on a battery or scavenging for power, increasing the processor power will almost certainly decrease lifetime, and that's not a trade-off designers will accept. Batteries, scavenging and renewable power sources like solar cells aren't subject to Moore's law: their growth curves are those of physics and traditional engineering, not those of IT systems.

Ten years ago my cellphone went for three days without a charge; my new HTC Hero lasts no more than two days, even if I turn off the data services and wifi. The extra computing power comes at a severe power cost. In many sensor applications the trade-off will actually run in reverse: given the choice, a designer might opt for two older, less capable but less power-hungry processors over one more powerful but hungrier one. Two motes can provide more coverage, or more robustness, or both.
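To put rough numbers on that trade-off, here's a back-of-the-envelope sketch in Python. Every figure in it (the battery capacity, the current draws, the duty cycle) is an assumption chosen for illustration rather than a measurement of any real mote; it's the shape of the result that matters.

    # Back-of-the-envelope lifetime comparison. All figures below are
    # assumptions for illustration, not measurements of any particular mote.

    BATTERY_MAH = 2500          # capacity of one battery pack per node

    fast_active_ma = 30         # option A: one newer, faster processor
    slow_active_ma = 8          # option B: two older, slower processors

    duty_cycle = 0.01           # motes sleep almost all of the time

    def lifetime_days(capacity_mah, active_ma, duty_cycle, sleep_ma=0.01):
        """Rough lifetime estimate, ignoring radio and regulator losses."""
        average_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
        return capacity_mah / average_ma / 24

    print("one fast node :", round(lifetime_days(BATTERY_MAH, fast_active_ma, duty_cycle)), "days")
    print("two slow nodes:", round(lifetime_days(BATTERY_MAH, slow_active_ma, duty_cycle)), "days each")

On those invented figures each of the slower nodes runs for years rather than months, and there are two of them in the field: more coverage and more lifetime for the same class of energy budget.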
But this exposes a real programming challenge, since it implies that we're going to have to get used to building modern, open, adaptive software on machines whose capabilities are similar to those of a mid-1980s home computer — and which might in fact even decrease over time, since the driving forces push towards coverage, lifetime and redundant replication. The performance of a network in aggregate might still increase, of course, but that means we have to extract the extra performance by co-ordinating distributed processors rather than by improving individual nodes. The history of distributed parallel processing should warn us not to be sanguine about that prospect.

Actually, though, the challenge will do us good. Modern machines encourage sloppy over-engineering and over-generalisation — building frameworks for situations that we anticipate but which might never occur. Targeting small machines will change this, and instead encourage us to build software that's fit for immediate purpose, and that's built to be evolved and extended over time alongside changing requirements and constraints. Building evolution into the core of the system will make for better engineering in the long run.
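To see why we shouldn't be sanguine, here's a toy model of aggregate performance. The 5% per-neighbour co-ordination cost and the linear overhead are pure assumptions, not results about any real protocol, but they capture the familiar problem: each extra node adds processing power while also adding co-ordination work that every node has to pay for.

    # Toy model: every mote contributes one unit of processing power, but
    # each extra mote costs every node a slice of its time in radio traffic
    # and protocol processing. The 5% per-neighbour figure is an assumption.

    def aggregate_throughput(n_nodes, coord_cost=0.05):
        """Useful work delivered by n_nodes motes, in 'single mote' units."""
        overhead = coord_cost * (n_nodes - 1)       # time spent co-ordinating
        useful_fraction = max(0.0, 1.0 - overhead)  # time left for real work
        return n_nodes * useful_fraction

    for n in (1, 2, 4, 8, 16):
        print(n, "motes ->", round(aggregate_throughput(n), 2), "motes' worth of useful work")

On this toy model the useful aggregate work rises at first and then falls away as co-ordination eats the budget, which is the old lesson from distributed parallel processing in miniature.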