Issuing AT commands

Controlling the XBee requires issuing AT commands. The XBee library has the low-level machinery to do this.

AT commands are the basis for controlling almost all modems, and the XBee is no different. In API mode, AT commands are issued in a similar manner to sending data. The Arduino XBee library has the low-level code needed, which can be wrapped into a slightly easier-to-use form.

The basic approach is to send an AT command request packet and then read a returned packet acknowledging the command. For the moment we’ll stick to “setting” commands, where the AT command takes an integer parameter: the others are needed less frequently. We construct the request packet, send it, read the response, and check that all went well. This isolates the rest of the program from the message exchange, but also hides the exact nature of any error.

#include <XBee.h>

XBee radio;

int atCommand( char *command, uint8_t param ) {
  // send local AT command
  AtCommandRequest req = AtCommandRequest((uint8_t *) command, (uint8_t *) &param, sizeof(uint8_t));
  radio.send(req);

  // receive response frame
  AtCommandResponse res = AtCommandResponse();
  if(radio.readPacket(500)) {                               // read packet from radio
     if(radio.getResponse().getApiId() == AT_COMMAND_RESPONSE) {    // right type?
       radio.getResponse().getAtCommandResponse(res);
       if(res.isOk()) {                                     // not an error?
         return 0;
       }
     }
  }

  // if we get here, return a failure
  return 1;
}

This function can be used to issue the different control codes for the radio. Some parameters can be set using X-CTU when the radio firmware is installed, but commands are sometimes needed at run-time too.
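
For example, a call to set the radio’s sleep mode (SM is the standard XBee sleep-mode command; the exact value depends on how the radio is wired and configured, so treat this as illustrative):

if(atCommand("SM", 1) != 0) {
   // the command wasn't acknowledged: signal the error, retry, or give up
}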

Basic power measurements

Some initial measurements of power consumption.

How much power does Arduino sleep mode save? The simplest way to work this out is to power an Arduino from a battery pack and measure the current being drawn in the different modes. A simple program cycles through the different modes:

  • Normal delay() loop
  • Deep sleep for a period (deep sleep)
  • Flash the LED (awake)
  • Flash the LED differently while sending out radio messages (awake and transmitting)
We perform these tasks repeatedly, keeping them going for 10s each to let the power draw stabilise.
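
A minimal sketch along these lines is shown below. It uses the third-party LowPower library for the power-down phase rather than SleepySketch, and the pin number and timings are only illustrative; the radio traffic for the final phase is left as a comment.

#include <LowPower.h>               // rocketscream low-power library

const int LED = 13;

void setup() {
   pinMode(LED, OUTPUT);
}

void loop() {
   // 1. doing nothing in an ordinary delay() loop for about 10s
   delay(10000);

   // 2. doing nothing in deep (power-down) sleep, woken by the watchdog
   LowPower.powerDown(SLEEP_8S, ADC_OFF, BOD_OFF);
   LowPower.powerDown(SLEEP_2S, ADC_OFF, BOD_OFF);

   // 3. awake, flashing the LED for about 10s
   for(int i = 0; i < 10; i++) {
      digitalWrite(LED, HIGH);   delay(500);
      digitalWrite(LED, LOW);    delay(500);
   }

   // 4. the same flashing loop, interleaved with radio sends, gives the
   //    "awake and transmitting" case; it's omitted here for brevity
}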

The results are as follows:

Activity              Power mode      Current
Nothing               delay() loop    43mA
Nothing               Deep sleep      33mA
Steady LED            Deep sleep      34mA
Flashing LED          Awake           45mA
XBee (quiet)          Deep sleep      72mA
XBee (quiet)          Awake           85mA
XBee (transmitting)   Awake           87mA

The good news is that SleepySketch makes it very easy to access the deep sleep mode, and to stay in it by default. This is good, as the normal approach of using delay() is quite power-hungry. The bad news is that the “at rest” power consumption of an Arduino even in deep sleep (the quiescent current drawn by the voltage regulator and other components on the board, regardless of what the microcontroller is doing) is about 35mA, with an XBee drawing an additional 40mA. There is very little difference in power whether the radio is transmitting or not (although the current being drawn looked more variable when transmitting, suggesting that there’s some variation happening faster than the ammeter’s sample time).

The radio isn’t put to sleep when the Arduino is asleep, which is clearly something that needs to happen: it draws power even when the Arduino is incapable of using it. Something to explore. Potentially more serious is the power being drawn when the Arduino is asleep. A battery pack with 4 x 1500mAh batteries will be drained in about 7 days (6000mAh / 35mA) even with the system asleep all the time.

[UPDATE 1Aug2013: made the table layout a bit clearer.]

Big, or just rich?

The current focus on “big data” may be obscuring something more interesting: it’s often not the pure size of a dataset that’s important.

The idea of extracting insight from large bodies of data promises significant advances in science and commerce. Given a large dataset, “big data” techniques cover a number of possible approaches:

  • Look through the data for recurring patterns (data mining)
  • Present a summary of the data to highlight features (analytics)
  • (Less commonly) Identify automatically from the dataset what’s happening in the real world (situation recognition)
There’s a wealth of UK government data available, for example. Making it machine-readable means it can be presented in different ways, for example geographically. The real opportunities seem to come from cross-overs between datasets, though, where they can be mined and manipulated to find relationships that might otherwise remain hidden, for example the effects of crime on house prices. Although the size and availability of datasets clearly makes a difference here (big open data), we might be confusing two issues. In some circumstances we might be better looking for smaller but richer datasets, and for richer connections between them.

Big data is a strange name to start with: when is data “big”? The only meaningful definition I can think of is “a dataset that’s large relative to the current computing and storage capacity being deployed against it”, which of course means that big data has always been with us, and indeed always will be. It also suggests that data might become less “big” if we become sufficiently interested in it to deploy more computing power to processing it. The alternative term popular in some places, data science, is equally tautologous, as I can’t readily name a science that isn’t based on data. (This isn’t just academic pedantry, by the way: terms matter, if only to distinguish what topics are, and aren’t, covered by big data/data science research.)

It’s worth reviewing what big data lets us do. Having more data is useful when looking for patterns, since it makes the pattern stand out from the background noise. Those patterns in turn can reveal important processes at work in the world underlying the data, processes whose reach, significance, or even existence may be unsuspected. There may be patterns in the patterns, suggesting correlation or causality in the underlying processes, and these can then be used for prediction: if pattern A almost always precedes pattern B in the dataset, then when I see a pattern A in the future I may infer that there’s an instance of B coming. The statistical machine learning techniques that let one do this kind of analysis are powerful, but dumb: it still requires human identification and interpretation of the underlying processes to conclude that A causes B, as opposed to A and B simply occurring together through some acausal correlation, or being related by some third, undetected process. A data-driven analysis won’t reliably help you to distinguish between these options without further, non-data-driven insight.

Are there cases in which less data is better? Our experience with situation recognition certainly suggests that this is the case. When you’re trying to relate data to the real world, it’s essential to have ground truth, a record of what actually happened. You can then make a prediction about what the data indicates about the real world, and verify whether or not that prediction is true against known circumstances. Doing this well over a dataset provides some confidence that the technique will work well against other data, where your prediction is all you have. In this case, what matters is not simply the size of the dataset, but its relationship to another dataset recording the actual state of the world: it’s the richness that matters, not strictly the size (although having more data to train against is always welcome). Moreover, rich connections may help with the more problematic part of data science, the identification of the processes underlying the dataset.

While there may be no way to distinguish causality from correlation within a single dataset (because there they look indistinguishably alike), the patterns of data points in one dataset may often be related to patterns and data points in another dataset in which they don’t look alike. So the richness provides a translation from one system to another, where the second provides discrimination not available in the first.

I’ve been struggling to think of an example of this idea, and this is the best I’ve come up with (and it’s not all that good). Suppose we have tracking data for people around an area, and we see that person A repeatedly seems to follow person B around. Is A following B? Stalking them? Or do they live together, or work together (or even just close together)? We can distinguish between these alternatives by having a link from people to their jobs, homes, relationships and the like.

There’s a converse concern, which is that poor discrimination can lead to the wrong conclusions being drawn: classifying person A as a potential stalker when he’s actually an innocent who happens to follow a similar schedule. An automated analysis of a single dataset risks finding spurious connections, and it’s increasingly the case that these false positives (or negatives, for that matter) could have real-world consequences.

Focusing on connections between data has its own dangers, of course, since we already know that we can make very precise classifications of people’s actions from relatively small, but richly connected, datasets. Maybe the point here is that focusing exclusively on the size of a dataset masks both the advantages to be had from richer connections with other datasets, and the benefits and risks associated with smaller but better-connected datasets. Looking deeply can be as effective as looking broadly, or more so.

Some improvements to SleepySketch

It’s funny how even early experiences change the way you think about a design. Two minor changes to SleepySketch have been suggested by early testing.

The first issue is obvious: milliseconds are a really inconvenient way to think about timing, especially when you’re planning on staying asleep for long periods. A single method in SleepySketch to convert from more programmer-friendly days/hours/minutes/seconds times makes a lot of difference.
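
Something along these lines would do, sketched here as a free-standing helper with default arguments so that two-argument (days and hours) calls like the ones below still work; the method that actually went into SleepySketch may differ in name and signature:

// convert a days/hours/minutes/seconds specification into milliseconds
long expandTime( int days, int hours, int minutes = 0, int seconds = 0 ) {
   return (((days * 24L + hours) * 60L + minutes) * 60L + seconds) * 1000L;
}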

The second issue concerns scheduling — or rather regular scheduling. Most sampling and communication tasks occur on predictable schedules, say every five hours. In an actor framework, that means the actor instance (or another one) has to be re-scheduled after the first has run. We can do this within the definition of the actor, for example using the post() action:

class PeriodicActor : public Actor {
public:
   void post();
   void behaviour();
};

...

void PeriodicActor::post() {
   Sleepy.scheduleIn(this, Sleepy.expandTime(0, 5));
}

(This also demonstrates the expandTime() function to re-schedule after 0 days and 5 hours, incidentally.) Simple, but bad design: we can’t re-use PeriodicActor on a different schedule. If we add a variable to keep track of the repeating period, we’d be mixing up “real” behaviour with scheduling; more importantly, we’d have to do that for every actor that wants to run repeatedly.

A better way is to use an actor combinator that takes an actor and a period, and creates an actor that first re-schedules itself to run again after the given period and then runs the underlying actor. (We do it this way so that the period isn’t affected by the time the actor actually takes to run.)
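
A sketch of what such a combinator might look like, assuming Actor exposes a virtual behaviour() method as above (the details of the real class may differ):

class RepeatingActor : public Actor {
   Actor *underlying;   // the actor being wrapped
   long period;         // re-scheduling period in milliseconds

public:
   RepeatingActor( Actor *actor, long millis ) : underlying(actor), period(millis) {}

   void behaviour() {
      Sleepy.scheduleIn(this, period);   // re-schedule first, so the period isn't
                                         // skewed by how long the actor takes to run
      underlying->behaviour();           // then run the wrapped behaviour
   }
};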

Actor *a = new RepeatingActor(new SomeActor(), Sleepy.expandTime(0, 5));
Sleepy.scheduleIn(a, Sleepy.expandTime(0, 5));

The RepeatingActor runs the behaviour of SomeActor every 5 hours, and we initially schedule it to run in 5 hours. We can actually encapsulate all of this by adding a method to SleepySketch itself:

Sleepy.scheduleEvery(new SomeActor(), Sleepy.expandTime(0, 5));

to perform the wrapping and initial scheduling automatically.
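
Internally that need be little more than a one-line wrapper around the combinator; a sketch, again with the caveat that the real signature may differ:

void SleepySketch::scheduleEvery( Actor *a, long millis ) {
   scheduleIn(new RepeatingActor(a, millis), millis);   // wrap the actor and do the initial scheduling
}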

Simple sleepy sketches can now be created at set-up, by scheduling repeating actors, and we can define the various actors and re-use them in different scheduling situations without complicating their own code.

Radio survey

A simple radio survey establishes the ranges that the radios can manage.

The 2mW XBee radios we’ve got have a nominal range of 100m — but that’s in free air, with no obstructions like bushes, ditches, and houses, and not when enclosed in a plastic box to protect them from the elements. There’s a reasonable chance that these obstacles will reduce the real range significantly.

Arduino, radio, batteries, and their enclosure in the field (literally)

A radio survey is fairly simple to accomplish. We load software that talks to a server on the base station — something as simple as possible, like sending a single packet with a count every ten seconds — and keep careful track of the return values coming back from the radio library. We then use the only output device we have — an LED — to indicate the success or failure of each operation, preferably with an indication of why it failed if it did. (Three flashes for unsuccessful transmission, five for no response received, and so forth.) We then walk away from the base station, watching the behaviour of the radio. When it starts to get errors, we’ve reached the edge of the effective range.
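
A sketch of the mote side of such a survey is shown below, using the same XBee library as before. The pin, the flash codes, and the ten-second interval are just the ones described above, and the status handling is simplified to keep it short:

#include <XBee.h>

XBee radio;
const int LED = 13;
uint8_t count = 0;

// flash the LED n times as a crude status display
void flash( int n ) {
   for(int i = 0; i < n; i++) {
      digitalWrite(LED, HIGH);   delay(200);
      digitalWrite(LED, LOW);    delay(200);
   }
}

void setup() {
   pinMode(LED, OUTPUT);
   Serial.begin(9600);
   radio.setSerial(Serial);
}

void loop() {
   // send a one-byte counter to the co-ordinator (64-bit address 0x0)
   uint8_t payload[] = { count++ };
   XBeeAddress64 coordinator = XBeeAddress64(0x0, 0x0);
   ZBTxRequest tx = ZBTxRequest(coordinator, payload, sizeof(payload));
   radio.send(tx);

   // signal the outcome on the LED
   if(radio.readPacket(500)) {
      ZBTxStatusResponse status = ZBTxStatusResponse();
      if(radio.getResponse().getApiId() == ZB_TX_STATUS_RESPONSE) {
         radio.getResponse().getZBTxStatusResponse(status);
         flash((status.getDeliveryStatus() == SUCCESS) ? 1 : 3);   // one flash for delivered, three for not
      }
   } else {
      flash(5);                                                    // five flashes for no response at all
   }

   delay(10000);                                                   // one packet every ten seconds
}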

With two sensor motes, we can also check wireless mesh networking. If we place the first mote in range of the base station, we should then be able to walk further and have the second mote connect via the first, automatically. That’s the theory, anyway…

(One extra thing to improve robustness: if the radios lose connection or get power-cycled, they can end up on a different radio channel to the co-ordinator. To prevent this, the radio needs to have an ATJV1 command issued to it. The easiest way to do this is at set-up, through the advanced settings in X-CTU.)
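
Alternatively, since JV takes a single numeric parameter, it could in principle be set at run-time with the atCommand() helper from earlier:

atCommand("JV", 1);   // re-verify the co-ordinator's channel when joining or after a power cycle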

The results are fairly unsurprising. In an enclosure, in the field, with a base station inside a house (and so behind double glazing and suchlike) the effective range of the XBees is about 30—40m — somewhat less than half the nominal range, and not really sufficient to reach the chosen science site: another 10—20m would be fine. On the other hand, the XBees mesh together seamlessly: taking a node out of range and placing another between it and the base station connects the network with no effort.

This is somewhat disappointing, but that’s what this project is all about: the practicalities of sensor networking with cheap hardware.

There are several options to improve matters. A higher-powered radio would help: the 50mW XBee has a nominal range of 1km and so would be easily sufficient (and could probably be run at reduced transmission power). A router node halfway between base station and sensors could extend the network, at the cost of an additional non-sensing component. Better antennas on the 2mW radios might help too, especially if they could be placed outside the enclosure.

It’s also worth noting that the radio segment is horrendously hard to debug with only a single LED for signalling. Adding more LEDs might help, but it’s still a very poor debugging interface, even compared to printing status messages to the USB port.