Research fellowship in sensor networks

We have a three-year postdoc available immediately to work on programming languages and platforms for sensor networks.

Research Fellow – CD1060

School of Computer Science, £32,751 - £35,788 per annum. Start: as soon as possible. Fixed term, 3 years.

We seek a Research Fellow to design and implement an integrated software platform based on mission specifications and evolution operators. The work will be evaluated through case-study deployments in the context of real-world large-scale WSANs. You will focus specifically on generative programming techniques to integrate the overall design, and will work with Professors Dearle and Dobson.

The project involves re-architecting WSAN systems so that system-wide behaviour is defined using explicit mission specifications. These allow top-level constraints and trade-offs to be captured directly and used to inform software deployment and evolution in a well-founded manner. We compile mission-level components to collections of node-level components connected using network overlays. We maintain both mission constraints and management interfaces through to run-time, where they can be manipulated by evolution and recomposition operators.

You should have a good honours degree in Computer Science or a related discipline, and preferably have, or be about to obtain, a PhD in Computer Science. You will have strong software development/OS/programming language skills. Experience in generative programming, compilers, operating systems, component deployment and/or sensor networks would be advantageous. You should be a highly motivated individual, able to lead the day-to-day work.

This is a fixed-term post for 3 years, starting as soon as possible. More information is available on the university’s job page. You can also email Al or me for more information.

An issue for smart grids

Unbeknown to her — until she reads this, anyway — the other day my mother trashed an idea that’s been a cornerstone of a lot of research on smart grids.

In the UK the fire services often send people around to check people’s smoke alarms and the like. Not usually firemen per se, but information providers who might reasonably be described as the propaganda department of the fire service, intent on giving advice on how not to burn to death. They also change batteries. Pretty useful public service, all told. Anyway, my mother lives in Cheshire, and recently had a visit from two such anti-fire propagandists. They did the usual useful things, but also got talking about the various risk factors one can avoid beyond the usual ones of having a smoke alarm and not searching for gas leaks with a cigarette lighter. The conversation turned to the subject of appliances, and they revealed that the most dangerous appliances from a fire-causing perspective are washing machines. In fact, they said, the Cheshire fire service gets called to more washing-machine fires than any other kind of domestic fire. (I don’t know if that includes hoaxes, which are a major problem.) Since they have in common (a) lots of current and (b) water, I would guess that dishwashers are a similar problem. So their advice was never to run a washing machine or dishwasher overnight or when in bed, as the chances of a fire are relatively high. “Relatively high” probably still means low on any meaningful scale, but it makes sense to minimise even small hazards when the costs are potentially so catastrophic. Mum related this to me to encourage me also not to run appliances at night.

But of course this has research consequences as well. Smart grids are the application of information technology to the provision and management of electricity and (to a lesser extent) gas. The idea is that the application of data science can provide better models of how people use their power, and can allow the grid operators and power generators to schedule and provision their supplies more accurately. It usually involves more detailed monitoring of electricity usage, for example using an internet-connected smart meter to log and return the power usage profile instead of just aggregated power usage for billing. The idea is becoming more and more common because of the rise in renewable energy.

Most countries have feed-in tariffs for the grid that power generators have to pay. The scheme is usually some variant of the following: at every accounting period (say three hours), each generator has to present an estimate of the power it will generate in the next several accounting periods (say three). So using these numbers, every three hours an electricity generator has to say how much power it will inject into the grid over the next nine hours. There’s a complementary tariff scheme for aggregate consumers (not individuals), and taken together these allow the grid operators to balance supply and demand. The important point is that this exercise has real and quantifiable financial costs: generators are charged if they over- or under-supply by more than an agreed margin of error. Now this is fine if you run a gas-, oil- or nuclear-powered power station. However, if you run a wind farm or a tidal barrage, it’s rather more tricky, since you don’t know with any accuracy how much power you’ll generate: it depends on circumstances outwith your control. (I did some work for a company making control systems for wind farms, and one of their major issues was power prediction.)
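To make the costs concrete, here is a minimal sketch of the kind of commitment-and-penalty scheme described above: a generator forecasts its output for each accounting period and is charged when actual output deviates from the commitment by more than an agreed margin. The margin, penalty rate and figures are illustrative assumptions rather than any real market's rules, but they show why a generator that cannot forecast accurately ends up paying.

```python
# A minimal sketch of a commitment-and-penalty balancing scheme. The margin,
# penalty rate and figures below are illustrative assumptions, not any real
# market's rules.

def imbalance_charge(forecast_mwh, actual_mwh, margin=0.05, penalty_per_mwh=40.0):
    """Charge for one accounting period.

    forecast_mwh    -- energy the generator committed to deliver
    actual_mwh      -- energy actually delivered
    margin          -- tolerated fractional deviation from the commitment (assumed 5%)
    penalty_per_mwh -- charge per MWh outside the tolerance band (assumed)
    """
    tolerance = margin * forecast_mwh
    deviation = abs(actual_mwh - forecast_mwh)
    excess = max(0.0, deviation - tolerance)
    return excess * penalty_per_mwh

# Three three-hour accounting periods, as (forecast, actual) in MWh.
# A thermal plant can forecast its output closely; a wind farm cannot.
thermal_periods = [(100, 99), (100, 101), (100, 100)]
wind_periods    = [(100, 70), (100, 115), (100, 60)]

for name, periods in [("thermal", thermal_periods), ("wind", wind_periods)]:
    total = sum(imbalance_charge(f, a) for f, a in periods)
    print(f"{name}: imbalance charge over nine hours = {total:.0f}")
```

Under these assumed numbers the thermal plant pays nothing while the wind farm pays for every period, which is exactly the hedging pressure discussed next.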
The tariffs can be a show-stopper, and can cause a lot of renewable-energy generators to run significantly below capacity just to hedge their tariff risk.

The other side of smart grids is to manage demand. It’s well known that demand is spiky, for example leaping at half-time in a popular televised football match as everybody puts the kettle on. A major goal of smart grids is to smooth out demand, and one of the ways to do this is to identify power loads that can be time-shifted: they are relatively insensitive to when they occur, and so can be moved so that they occur at times when the aggregate power demand is less. In a domestic setting, some kinds of storage heating work like this and can create and store heat during off-peak hours (overnight). Lights and television can’t be time-shifted, as they’re needed at particular times. So what are the major power loads, other than storage heating, in domestic settings that can apparently be time-shifted? Washing machines and dishwashers.

Except we now know that time-shifting them to overnight running runs exactly counter to fire service advice, as it increases the dangers of domestic fires. So one of the major strategies for smart grid demand management would, if widely deployed, potentially cause significant losses of property and even lives. Reducing energy bills will (in time) increase the insurance premiums for anyone allowing time-shifting of their main appliances. In other words, while these risks exist, it’s a non-starter.

In some ways this is a good thing: good to learn about now, anyway, before too much investment. There are a lot of things that could be done to ameliorate the risks, for example designing machines explicitly for time-shifted operation. But I think a more pertinent observation is the holistic nature of this kind of pervasive computing system. You can’t treat any one element in isolation, as they all interact with each other. It’s as though pervasive computing breaks the normal way we think of computing systems as being built from independent components. In pervasive computing the composition operators are non-linear: two independently-correct components or solutions do not always compose to form one that is also correct. This has major implications for design and analysis, as well as for engineering.

Thanks, mum!

Why do people go to university?

Changes to admissions systems in the UK and Ireland simply tinker with the existing approach. They don’t address the more fundamental changes in the relationship between university education and economic and social life.

We’re in the middle of a lot of changes in academia, with the fees increase for students in England eliciting a response in Scotland, and a report in Ireland suggesting changes to the “points” system of admissions to introduce more targeting. Despite the upsets these changes cause to academics — they heighten the arbitrary and distressing discontinuity in costs between Scottish and English students in St Andrews, for example — they’re actually quite superficial, in the sense of not massively changing the degrees on offer or their structure. However, these would seem to be the areas in which change could be most fruitful, in response to changing patterns of life, patterns of access to information and the like. It’s hard to generate enthusiasm for this amid the problems of managing the financial structures. In other words, the fees débâcle may be masking changes that we would otherwise be well advised to make.

What are these changes? First of all we need to understand what we believe about the lives and futures of the people who might attend university. For me this includes the following.

People don’t know what they’re passionate about. Passionate enough to want to spend a substantial portion of their lives on it, that is. There’s a tendency to follow an “approved” path into a stable career, and this may in some cases lead people to do the “wrong” degree as they worry about their job prospects. But if you’re going into a career in something, you have to accept that you’ll spend about half your waking life on it. It makes sense to be sure you’ll enjoy it. So we need to focus on allowing people to find their passions, which argues against too-early specialisation and for a broader course of study. My first postdoc supervisor, Chris Wadsworth, told me about 20 years ago that “it takes you 10 years to decide what you’re actually interested in.” In your mid-20s you tend to disregard statements like this and assume you know what your research interests are, but on reflection he was right: it did take me about 10 years to work out what I wanted to spend my career researching, and it wasn’t really what I was doing back then: related, yes, but definitely off to one side. I’ve also become interested in a whole range of other things that were of no interest to me back then, not least because most of them didn’t exist. If that’s true of an academic, it’s just as true of an 18-year-old undergraduate. You can have an idea of what you like and what interests you, but not much more than that.

We can’t teach enough. It used to be simpler: go to university, learn all the things you’ll need, then go and practise those skills, with marginal upgrading, for the rest of your career. I can’t think of many topics like that any more. This changes the emphasis of education: it’s not the stuff we teach that’s important, it’s the ability to upskill effectively. For that you need foundations, and you need to know the important concepts, and you need to be able to figure out the details for yourself. It’s not that these details aren’t important — in computing, for example, they’re critical — but they’re also changing so fast that there’s no way we could keep up. And in fact, if we did, we’d be doing the students a disservice by suggesting that this isn’t a process of constant change.
The jobs that people will want to do in 20 years’ time don’t exist now. Fancy a career as a web designer? It didn’t exist 20 years ago; 10 years ago it was a recognised and growing profession; lately it’s become part and parcel of graphic design. The world changes very rapidly. Even if the job you love still exists, there’s a good chance you’ll want to change to another one mid-career. Again, the ability to learn new skills becomes essential. I suspect a lot of people — including politicians — haven’t appreciated just how fast the world is changing, and that the pace of change is accelerating. You don’t have to believe in the Singularity to believe that this has profound implications for how economies and careers work.

We don’t know where the future value comes from. In a time of increased financial stress, governments fall back on supporting courses that “obviously” support economic growth: the STEM subjects of science, technology, engineering and medicine. The only problem with this as an argument is that it’s wrong. Most of the value in the digital age hasn’t come from these areas. The profits at Apple and Google pale into insignificance behind the aggregate profits of the companies (often much smaller) making content to add value to the devices and services these companies provide. I’ve argued before that this value chain is best supported by humanities graduates, not scientists and engineers. (If you want a supporting example, consider this as a proposition: the whole argument about network neutrality is essentially an argument about whether ISPs should be allowed to tax the producers and consumers of the content they transmit in terms of its end-user value, rather than in terms of its transmission cost. That’s where the value is.)

Does the above have anything to suggest about changes in admissions or the structure of degrees? To me it suggests a number of things.

Have broad introductory years in which students are encouraged to explore their wider interests before specialising. (The Scottish broad curriculum attempts to do this, but in the sciences we don’t do it particularly well.)

Focus teaching on core principles and on how these affect the evolution of tools and techniques. Also focus on students learning the tools and techniques themselves, and on how they relate back to the taught core.

Generate a set of smaller degree-lets that people can take over their career with less commitment: at a distance, in the evening, in compressed blocks, over a long period — whatever, just suitable for people to do alongside working at something else.

Above all, don’t assume we (either the universities or the State) can pick winners in terms of subjects. We’re definitely in a post-industrial world; that means new intellectual territory, and that what worked before won’t always work in the future. I hope computer science will continue to change the world, but I’m also completely confident that a history graduate will come up with one of the Next Big Things.

On not funding the arts and humanities

If we don’t adequately fund the arts, where will all the digital content come from?

Recent noises from within the UK’s funding structures suggest that the future for arts and humanities education is somewhat threatened. In a time of restricted resources (the argument goes) the available funding needs to be focused on topics that make a clear, traceable contribution to the national economy. This essentially means supporting the STEM subjects — science, technology, engineering and medicine — at the expense of the arts and humanities. As a computer scientist I might be expected to be loosely in favour of such a move: after all, it protects my discipline’s funding (at least partially). But this is to misunderstand the interconnected nature of knowledge, of scholarship, and of the modern world as a whole.

We need first to think about how people use their degrees. Contrary to popular belief (even amongst students), degrees don’t generally lead to jobs — and nor should they. It’s true that we teach a lot of information and skills in a degree: how to program, how to analyse algorithms and understand different technologies, in the case of computer science. But this isn’t the reason to get a degree. What we try to teach are the critical skills needed to understand the world, contribute to it and change it.

Computer science is a great example of this. Three years ago there were no tablet computers and no cloud computing: the field changes radically even on the timescales of a typical degree programme. So there’s not really much point in focusing on particular technologies or languages. What we teach instead is how to learn new languages and technologies, and how to assess how they fit into the changing pattern of computer science. Put another way, we have to turn students into people who can learn and assimilate complex technological ideas throughout their lives, and create new ones.

Education is what survives when what has been learnt has been forgotten. B.F. Skinner
This is even more true in the humanities. Most people who study geography do not become geographers (or polar explorers, for that matter): they go into fields that require critical minds who can come to grips with complex ideas. But they bring to these jobs an appreciation of a complex and layered subject, an ability to deal with multiple simultaneous constraints and demands upon shared resources, the interaction of people with the natural world, and so forth. This is much more valuable than the specific knowledge they may have acquired: they have the ability to acquire specific knowledge whenever they need it, and to fit it into the wider scheme of their understanding.

But even if we accept in part the narrower view of education as a direct feeder for the economy — and realistically we have to accept it at least to some extent — reducing humanities graduates seems short-sighted. If we also accept that the future is one of a digital and knowledge economy, then the technologies underlying this economy are only one part of it — and probably only a small part. The rest, the higher-value services, come from content and applications, not directly from the technology. Consider how much value has been created from building computers. Now consider how much value is created from selling things that use computers. Computer scientists didn’t create much of the latter; nor did physicists, mathematicians, materials scientists or electronic engineers. Humanities people did. So even aside from the reduction in quality of life that would come from reducing the contributions of people who’ve studied history and literature, there’s a direct economic effect in play. Without such people, there’ll be no-one to create the digital content and services on which the knowledge economy depends. (It’s called the knowledge economy, remember, not the science economy.)

Increasing the proportion of knowledgeable, educated people is valuable per se for the creativity those people unleash. The fact that we perhaps can’t directly trace the route from university student places to value to society doesn’t make that contribution unreal: it just means we’re not measuring it. When we went about inventing computers and the internet we had specific goals in mind, to do with scientific analysis and communication. But it turned out that the most socially significant impacts of these technologies didn’t come from these areas at all: they came from people who thought differently about the technology and came up with applications that no computer scientist would ever have thought of. It still amazes me that no professional computer scientist — including me — ever dreamed of social network sites like Facebook, even though we happily talked about concepts like “social network” and about using computers to examine the ways people interacted at least five years before Facebook debuted. We don’t have the mindset to come up with these sorts of applications: we’re too close to the technology. Scientists can happily develop services for science: it needs people who are closer to the humanities to develop services for humanity.

Metrics and the Research Excellence Framework

Is the success of the UK and US in global university research rankings any more than a consequence of the choice of particular metrics? And does this explain the design of the funding and evaluation methodologies?

We’re currently at the start of the UK’s periodic Research Excellence Framework (REF) process, during which every School and Department in every university in the country is evaluated against every other, and against international benchmarks, to rate the quality of our research. As one might imagine, this isn’t the most popular process with the academics involved (especially those like me who are directors of research and so have to manage the return of our School’s submission). It certainly absorbs a lot of intellectual capacity. In principle it shouldn’t: the REF is intended to generate a fair reflection of a School’s behaviour, is supposed to measure those activities that academics working at the cutting edge would do anyway, and to encourage people to adopt best practices by rewarding them in the assessment methodology. In reality of course the stakes are so high that the process can distort behaviour, as people (for example) target their published work at venues that will be good in the REF (to get a better REF result) and not necessarily those that would give the work the best exposure in the appropriate community.

In principle this sort of arbitrage is impossible, as the REF should value most highly those venues that are “best” — a sort of academic version of the efficient markets hypothesis. In reality there’s often a difference, or at least a perceived difference. In computer science, for example, we often use conferences in preference to journals for really new results because they get the word out faster into the community. Other sciences don’t do this, so we worry that conferences (which in other fields are quite peripheral to the publications process) won’t be judged as seriously, even though they might be scientifically more valuable.

Managing our engagement with the REF process has made me think more widely about what the REF is trying to achieve, and how it’s achieving it. In particular, are the outcomes the REF encourages actually the ones we should be encouraging?

If we look at universities across the world, one thing that stands out is the way that, in science and technology at any rate, the UK comes a close second to the US (and way above all other countries) in terms of publications, citations, patents and other concrete outputs of research — despite the smaller population and even smaller proportional science budget. Despite the perpetual worries about a brain-drain of the best scientists to better-funded US Schools, the top UK institutions continue to do amazingly well.

The US and the UK also have strikingly similar funding models for universities. A small number of funding agencies, mostly national, disburse funding to Schools in particular fields according to priorities that are set at least notionally by the researchers (known in the UK as the Haldane Principle), but also and increasingly with regard to nationally-defined “strategic priorities” (usually in defence or security in the US). The agencies are constantly pressured about the value they return for taxpayers’ money — disproportionately pressured, given the relatively small sums involved — and so tend to adopt risk-reduction strategies such as following their money and increasing the funding of individuals and groups who have had previous funding and have not made a mess of it.
This has three consequences: it becomes harder for junior staff to make their names on their own; it concentrates funding in a few institutions that are able to exploit their “famous names” to acquire follow-on funding; and, over time, it subtly encourages ambitious and talented people to gravitate to these well-funded institutions. The medium-term result is the creation of larger research-intensive Schools that absorb a relatively large proportion of the available funding (and, to a lesser extent, talent).

One can argue about whether this is a good outcome or not. Proponents would claim that it generates a “critical mass” of researchers who can work together to do better work. Critics would argue that larger Schools can rapidly become too large for comfortable interaction, and that the focus needed to grow can come at the expense of cross-disciplinary research and can discourage people from undertaking risky projects, since these might damage (or at least not advance) the School’s collective reputation.

Do these two processes — high impact and winner-takes-most funding — run together? Is one causative of the other? It seems to me that they’re actually both a consequence of a more basic decision: the choice of metrics for evaluation. And this in turn tells us where the REF fits in.

The UK and US systems have chosen specific, measurable outcomes of research, in terms of a broadly-defined international “impact”. Given this choice, the funding model makes perfect sense: find groups that perform well under these metrics, fund them more (to get more good results) and help them to grow (to create more people who do well under the metrics). The system then feeds back on itself to improve the results the country gets under its chosen framework for evaluation.

This all seems so logical that it’s worth pointing out that there’s nothing inherently “right” about this choice of metrics, that there are alternatives, and that the current system is in conflict with other parts of universities’ missions. A noticeable feature of the US and UK systems is that they are increasingly two-tier, with research-led institutions and more general teaching institutions doing little or no research — even in fields that don’t actually benefit as much from economies of scale or critical masses of researchers (like mathematics). This means that students at the teaching institutions don’t get exposure to the research leaders, which is one of the main benefits of attending a research-led university. This is bound to have an impact on student satisfaction and achievement — and is explicitly excluded from the REF and similar metrics. It’s interesting to note that, in the university rankings that include student-centred metrics, such as those performed in the UK by the Sunday Times and the Guardian, the sets of institutions at the top are shuffled compared to the research-only rankings. (St Andrews, for example, does enormously better in surveys that take account of the student experience than in those that focus purely on research. We would argue this is a good thing.)

If we accept the choice of metrics as given, then experience seems to suggest that the UK’s evaluation and funding structures are optimised to deliver success against them. This is hardly surprising. However, one could also choose different sets of equally defensible metrics and get radically different results.
If one takes the view, for example, that the desirable metric is to expose as many students as possible to a range of world-class researchers, one could set up a system which eschews critical mass and instead distributes researchers evenly around a country’s institutions. This is in fact roughly what happens in France and Italy, and while I can’t say that this is a direct consequence of a deliberate choice of metrics, it’s certainly consistent with one.

There is nothing so useless as doing efficiently that which should not be done at all. Peter Drucker
Like many things, this is actually about economics: not in the narrow sense of money, but in the broad sense of how people respond to incentives. Rather than ask how well we do under REF metrics, we should perhaps ask what behaviours the REF metrics incentivise in academics and institutions, and whether these are the behaviours that are of best overall social benefit to the country. Certainly scientific excellence and impact are of vital importance: but one can also argue that the broadest possible exposure to excellence, and the motivational effect this can have on a wider class of students, is of equal or greater importance to the success and competitiveness of the country. The narrow, research-only metrics may simply be too narrow, and it’d be a mistake to optimise against them if in doing so — and achieving “a good result” according to them — we destroyed something of greater value that simply wasn’t being measured.