
The edge of computer science

Where does mathematics end and computer science begin?

I don't seem to have posted to this blog recently, so let's re-start with a big question: where is the edge of computer science? That is to say, what separates it from maths? How do mathematicians and computer scientists see problems differently, and indeed select differently what constitutes an interesting problem?

This has been on my mind recently because of some work I've been doing with my student Saray on adaptive complex networks. A complex network is one that has statistical regularity in the distribution of the wires, or links, between its nodes. The internet is a complex network where the links obey a power-law distribution: a small number of sites (Yahoo, Google, IBM, ...) have a huge number of links to them, while the majority (this site) have almost none. Complex networks are created naturally by lots of processes, and are useful for describing a whole range of phenomena. (An accessible introduction is Albert-László Barabási. Linked: how everything is connected to everything else, and what it means for business and daily life. 2003.)

An adaptive complex network is one where the way the network is wired changes with time. A good example is a meeting-with-friends network where there is a link between you and those people you meet in a particular timeframe. You might change the people you meet if you discover that one of them is ill, for example, so the friend-meeting network would be re-wired to remove links to your sick friends. If we were to model the spread of an illness through this network, there would be two processes at work: a spreading process that made people who met sick people ill themselves (with some probability); and a re-wiring process that tried to remove links to those friends known to be sick (again perhaps with some probability). Our paper (Saray Shai and Simon Dobson. Complex adaptive coupled networks. Physical Review E 87(4). 2013.) shows how there are unsuspected subtleties in the way spreading processes work on such networks, where common simplifications can actually hide crucial features of the network's behaviour.

The literature on network science is full of papers analysing such processes. Typically the analysis is both analytic and numerical. That is to say, a mathematical model is developed that describes the state of the network after lots of time has passed (its equilibrium behaviour); and numerical simulation is then performed by creating a large number of networks, running the spreading processes on them, and seeing whether the results obtained match the analytical model. (It was an unexpected mis-match between analytical and numerical results that led us to the main result reported in our paper.) Typically the community finds analytical results more interesting than numerical results, and with good reason: an analytic result provides both a final, closed-form solution that can be used to describe any network with particular statistical properties, without simulation; and it often also provides insight into why a given equilibrium behaviour occurs. These are the sorts of general statements that can lead to profound understanding of wide ranges of phenomena.

There's a sting in the tail of analysis, however, which is this. In order to be able to form an analytic model, the process being run over the network has to be simple enough that the mathematics converges properly. A typical analysis might use a probabilistic re-wiring function, for example, where nodes are re-wired with a fixed probability, or one that varies only slowly.
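Concretely, the kind of process such an analysis assumes might look something like the following single-time-step sketch. (This is purely illustrative and not the model from our paper: the data structures and the decision to treat links as directed are simplifications of my own.)

#include <vector>
#include <random>

// Illustrative only: a network as adjacency lists plus a per-node infection flag.
struct Network {
    std::vector<std::vector<int>> neighbours;
    std::vector<bool> infected;
};

// One time-step of a fixed-probability spread-and-rewire process: susceptible
// nodes catch the illness from infected neighbours with probability pInfect,
// or break the link and re-wire it at random with probability pRewire.
void step(Network& net, double pInfect, double pRewire, std::mt19937& rng) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_int_distribution<int> anyNode(0, (int) net.infected.size() - 1);
    std::vector<bool> next = net.infected;

    for (std::size_t i = 0; i < net.neighbours.size(); i++)
        for (std::size_t j = 0; j < net.neighbours[i].size(); j++) {
            int n = net.neighbours[i][j];
            if (!net.infected[i] && net.infected[n]) {
                if (coin(rng) < pInfect)
                    next[i] = true;                        // catch the illness
                else if (coin(rng) < pRewire)
                    net.neighbours[i][j] = anyNode(rng);   // drop the sick friend
            }
        }
    net.infected = next;
}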
Anything more complex than this defeats analysis, and as a result one never encounters anything other than simple spreading processes in the literature. As a computer scientist rather than a mathematician I find that unsatisfying, and I think my dissatisfaction may actually define the boundary between computing and mathematics. The boundary is the halting problem -- or, more precisely, sustaining your interest in a problem once you've hit it. The halting problem is one of the definitive results in computer science, and essentially says that there are some problems for which it's impossible to predict ahead of time whether they'll complete with a solution or simply go on forever. Put another way, there are some problems where the only way to determine what the solution is is to run a piece of code that computes it, and that may or may not deliver a solution. Put yet another way, there are problems for which the code that computes the solution is the most concise description available. What this has to do with complex systems is the following. When a computer scientist sees a problem, they typically try to abstract it as far as possible. So on encountering a complex network, our first instinct is to build the network and then build the processes running on it as separate descriptions that can be defined independently. That is, we don't limit what kind of functions can hang off each node to define the spreading process: we just allow any function -- any piece of code -- and then run the dynamics of the network with that code defining what happens at each node at each timestep. The immediate consequence of this approach is that we can't say anything a priori about the macroscopic properties of the spreading process, because to do so would run into the fact that there isn't anything general one can say about an arbitrary piece of code. The generality we typically seek precludes our making global statements about behaviour. Mathematicians don't see networks this way, because they want to make precisely the global statements that the general approach precludes -- and so don't allow arbitrary functions into the mix. Instead they use functions that aggregate cleanly, like fixed-probability infection rates, about which one can make global statements. One way to look at this is that well-behaved functions allow one to make global statements about their aggregate behaviour without having to perform any computation per se: they remain within an envelope whose global properties are known. A mathematician who used an ill-behaved function would be unable to perform analysis, and that's precisely what they're interested in doing, even though by doing so they exclude a range of possible network behaviours.In fact, it's worse than that: the range of behaviours excluded is infinite, and contains a lot of networks that seem potentially very interesting, for example those whose behaviours depend on some transmitted value, or one computed from values spread by the process. So a mathematician's (at least as represented in most of the complex systems literature) interest in a problem is lost at precisely the point that a computer scientist's interest kicks in: where the question is about behaviour of arbitrary computations. The question this leads to is, what model do real-world networks follow more closely? Are they composed of simple, well-behaved spreading processes? Or do they more resemble arbitrary functions hanging off a network of relationships, whose properties can only be discovered numerically? 
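By "arbitrary functions hanging off a network" I mean something like the following minimal sketch (again purely illustrative; the node structure and the choice of a numeric state are my own inventions): each node carries whatever update code we like, and the dynamics simply run it.

#include <vector>
#include <functional>

// Each node holds some state and an arbitrary update function: any piece of
// code that maps the neighbours' states to the node's next state.
struct Node {
    double state;
    std::function<double(const std::vector<double>&)> update;
};

// One synchronous time-step: gather each node's neighbours' states and run
// whatever code is attached to the node. Nothing constrains what that code does.
void step(std::vector<Node>& nodes,
          const std::vector<std::vector<int>>& neighbours) {
    std::vector<double> next(nodes.size());
    for (std::size_t i = 0; i < nodes.size(); i++) {
        std::vector<double> inputs;
        for (int n : neighbours[i])
            inputs.push_back(nodes[n].state);
        next[i] = nodes[i].update(inputs);
    }
    for (std::size_t i = 0; i < nodes.size(); i++)
        nodes[i].state = next[i];
}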
And what effect does the avoidance of arbitrary computation have on the choice of problems to which scientists apply themselves? Perhaps the way forward here is to try to find the boundary of the complexity of functions that remain analytic when used as part of a network dynamics, to get the best of both worlds: global statements about large categories of networks, without needing numerical simulation of individual cases. Such a classification would have useful consequences for general computer science as well. A lot of the problems in systems design come from the arbitrariness of code and its interactions, and from the fact that we're uncomfortable restricting that generality a priori because we don't know what the consequences will be for the re-usability and extensibility of the systems being designed. A more nuanced understanding of behavioural envelopes might help.

Mussolini

Mussolini

Richard J.B. Bosworth

2002


I've wanted to read a biography of Mussolini for a while, and this one is very good. A reviewer quote on the cover describes it as "lucid, elegant, and a pleasure to read," and I'd have to agree. It's somewhat more "literary" than some biographies, and as a result doesn't always cover the historical context as well as one might like: the author's description of the March on Rome, for example, is extremely brief despite its significance for Mussolini's rise. In this way it's not like many Hitler biographies (for example Hitler or Hitler and Stalin: Parallel Lives) which are as much about events as personality. One also has to get past the author's repetition of words like "euphonious" and "lucubrations", which get tiresome after a while. Having said all that, this is an excellent biography, full of insight and pointers to other sources (with over 80 pages of footnotes), and a good overview of the career of someone often written off too quickly.

4/5. Finished 10 July 2013.

(Originally published on Goodreads.)

Representing samples

Any sensor network has to represent sampled data somehow. What would be the most friendly format for so doing?

Re-usable software has to take an extensible view of how to represent data, since the exact data that will be represented may change over time. There are several approaches that are often taken, ranging from abstract classes and interfaces (for code-based solutions) to formats such as XML for data-based approaches. Neither of these is ideal for a sensor network, for a number of reasons.

A typical sensor network architecture will use different languages on the sensors and the base station, with the former prioritising efficiency and compactness and the latter emphasising connectivity to the internet and interfacing with standard tools. Typically we find C or C++ on the sensors and Java, JavaScript, Processing, or some other language on the base station. (Sometimes C or C++ too, although that's increasingly rare for new applications.) It's therefore tricky to use a language-based approach to defining data, as two different versions of the same structure would have to be defined and -- more importantly -- kept synchronised across changes.

That suggests a data-based approach, but these tend to fall foul of the need for a compact and efficient encoding sensor-side. Storing, generating, and manipulating XML or RDF, for example, would typically be too complex and too memory-intensive for a sensor. These formats also aren't really suitable for in-memory processing -- unsurprisingly, as they were designed as transfer encodings, not primary data representations. Even though they might be attractive, not least for their friendliness to web interactions and the Semantic Web, they aren't really usable directly.

There are some compromise positions, however. JSON is a data notation derived initially from JavaScript (and usable directly within it) but which is sufficiently neutral to be used as an exchange format in several web-based systems. JSON essentially lets a user form objects with named fields, whose values can be strings, numbers, arrays, or other objects. (Note that this doesn't include code-valued fields, which is how JSON stays language-neutral: it can't encode computations, closures, or other programmatic features.) JSON's simplicity and commonality have raised the possibility of using it as a universal transport encoding: simpler than XML, but capable of integration with RDF, ontologies, and the Semantic Web if desired. There are several initiatives in this direction: one I came across recently is JSON-LD (JSON for Linked Data), which seeks to integrate JSON records directly into the linked open data world.

This raises the possibility of using JSON to define the format of sensor data samples, sample collections (datasets), and the like, and linking those descriptions directly to ontological descriptions of their contents and meaning. There are some problems with this, of course. Foremost, JSON isn't very compact, and so would require more storage and wireless bandwidth than a binary format. However, one approach might be to define samples etc. in JSON format and then either use them directly (server-side) or compile them to something more static but more efficient for use sensor-side and for exchange. This would retain the openness without losing performance.
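To make that last idea a little more concrete, here's a purely hypothetical sketch: a sample described in JSON, and the kind of compact structure it might be compiled down to for sensor-side use. The field names and layout are invented for illustration and aren't part of any existing format.

// A purely hypothetical sample description in JSON (field names invented):
//
//   {
//     "sensor":    "ds18b20-ambient",
//     "quantity":  "temperature",
//     "unit":      "celsius",
//     "timestamp": 1373450000,
//     "value":     21.4
//   }
//
// Sensor-side, the same sample compiled down to a fixed, compact layout:
#include <stdint.h>

typedef struct {
    uint8_t  sensorId;     // index into a shared table of sensor descriptions
    uint32_t timestamp;    // seconds since the epoch
    float    value;        // value in the unit named in the description
} __attribute__((packed)) sample_t;    // packed layout (gcc/avr-gcc attribute)

Server-side, the JSON form could carry a JSON-LD @context linking the fields to an ontology, while something like the struct is what would actually be stored and sent over the radio.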

Temperature sensors working

Temperature sensing using digital temperature sensors is easy to get working. The temperature sensing part of the project requires three sensors for ambient, high-up and low-down measurement. The DS18B20 temperature sensor seems well-suited for the job.

![DS18B20](/images/citizen-sensing/three-ds18b20.png)

Three DS18B20 temperature sensors sharing a OneWire bus, standard (rail) power mode

Hooking up a OneWire bus for the three sensors lets them share a single microcontroller pin -- which isn't important for hardware reasons in this project, but it also saves some microcontroller RAM, which might be. The circuit is very simple, with the three sensors sharing power and ground lines and with a common data line pulled up to the power rail through a 4.7K resistor. The DQ line is attached to one of the Arduino's digital lines. The OneWire library is then used to instantiate a protocol handler for that line, which is passed to the temperature control library to manage the interaction with the devices, including their conversion from raw to "real" temperature values. The resulting code is almost comically simple:

#include <OneWire.h>
#include <DallasTemperature.h>

OneWire onewire(8);                  // OneWire bus on pin 8
DallasTemperature sensors(&onewire);

void setup(void) {
  Serial.begin(9600);
  sensors.begin();
}

void loop(void) {
  sensors.requestTemperatures();
  for(int i = 0; i < 3; i++) {
    float c = sensors.getTempCByIndex(i);   // temperature in Celsius
    Serial.print("Sensor ");   Serial.print(i);   Serial.print(" = ");
    Serial.print(c);   Serial.println("C");
  }
  delay(5000);
}
That's it! The temperature library packages everything up nicely, including the conversion and the interaction with the OneWire protocol (which is quite fiddly).

![Three DS18B20s on a prototyping shield](/images/citizen-sensing/ds18b20-prototype.jpg)

One potential problem for the future is that access to the sensors is by index, not by any particular identifier, and it's not clear whether the ordering is always the same: does the sensor closest to the microcontroller always appear as index 0, for example? If not, then we'll have to identify which sensor is which somehow to sample the temperature from the correct place, or run each one on a different OneWire bus instance.

There's also an interesting point about parasite power mode, which is where the DS18B20 draws its power from the data bus rather than from a dedicated power rail. This might make power management easier, since the sensor would be unpowered when not being used, such as when the Arduino is asleep. This suggests it's probably worth looking into parasite power a bit more.
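If the ordering does turn out to be unstable, one possible fix -- a sketch only, assuming the DallasTemperature library's address-based calls (getAddress and getTempC) behave as documented -- is to read each sensor's unique 64-bit ROM address once and then request temperatures by address rather than by index:

#include <OneWire.h>
#include <DallasTemperature.h>

OneWire onewire(8);                  // OneWire bus on pin 8
DallasTemperature sensors(&onewire);

DeviceAddress addrs[3];              // 64-bit ROM address of each sensor

void setup(void) {
  Serial.begin(9600);
  sensors.begin();
  for (int i = 0; i < 3; i++)
    sensors.getAddress(addrs[i], i); // record which address sits at which index
}

void loop(void) {
  sensors.requestTemperatures();
  for (int i = 0; i < 3; i++) {
    Serial.print("Sensor ");   Serial.print(i);   Serial.print(" = ");
    Serial.print(sensors.getTempC(addrs[i]));   Serial.println("C");
  }
  delay(5000);
}

Once the addresses are known they could be printed out and matched against the physical positions of the sensors, so that the ambient, high and low readings always come from the right device regardless of how the library happens to enumerate the bus.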