Not a work/life balance

People sometimes ask me about my work/life balance. It was only when I was asked that I realised I don’t have one.

Maybe it’s because as a society we’ve reached a certain level of affluence, but there are ideas in the wind about reducing the importance of economic measures in our perception of countries’ successes and instead emphasising “softer” factors like well-being and happiness. This includes supportive comments from leading economists, as well as a royal decree. This is a major change for a lot of reasons, and not one that’s universally recognised: my Chinese postdoc, for example, finds the whole notion of voluntarily restricting your own work and success completely incomprehensible. I think it’s a defensible position, as long as it doesn’t translate into offloading unpleasant tasks onto an underclass and/or increasing the existing wealth and status imbalances.

But it raises some questions about how we evaluate and measure people’s happiness to see whether we’re succeeding in whatever measures we decide to deploy. This is neither an uncommon nor a trivial problem. It’s common because, like many things we want to sense, we can’t measure happiness directly and so have to fall back on proxies that we can measure, such as self-reporting against a standardised scale, rates of depression and the like. We then need to show that these proxies do indeed correlate with happiness in some way, which is tricky in the absence of direct measurement.

An alternative approach is to avoid measurement and instead find a set of policies or ideas that should in and of themselves increase happiness. An example I’ve heard a lot of recently is work/life balance: the need to spend time away from work doing other things. Perhaps surprisingly, this idea is pushed by a lot of employers. My university, for example, runs a staff training course on striking an appropriate work/life balance. Maybe they feel they’ll get better results from knowledge workers if they’re appropriately rested; maybe it’s setting up a defence against possible future claims over undue workplace stress. (I actually think it’s the former, at least in St Andrews’ case, but the cynic in me can easily believe the latter of others.)

Thinking about this made me wonder whether I actually have a work/life balance. I’ve decided that I don’t, and that, for me at least, the work/life split is a false dichotomy. That’s not to say that I do nothing but work, or that I don’t have a life, but rather that I don’t divide my life into those categories. For example, I often answer email and read journals at night and at weekends. I also spend a lot of “off” time programming, writing about and thinking about computer science. It’s certainly something that drives and compels me: thinking about better ways to understand it, teach it, and develop new ways of doing it. You could say that, since I’m a computer science academic, this is all “work”. I also spend a lot of time taking photographs and thinking about photography. And a lot of that actually involves thinking about computing, since a lot of my photography is now digital. Being who I am, I also think about how to automate various photography tasks, and about different kinds of software and mathematics that could be applied to photography. Is that “work” too?

There are many things I do for the university that are a pleasure.
Some are such a pleasure that I’d do them anyway, such as research and (mainly) teaching; some I perhaps wouldn’t do unless it was part of the job, because I don’t enjoy them enough to seek them out, but do enjoy sufficiently that they’re not a trial: institutional strategy development, directing research and the like. On the other hand, there are several things I do for the university that I really wouldn’t do by choice, mainly revolving around marking and assessment: they’re vital, and it’s vital that they’re done well, but they can’t remotely be described as enjoyable.

So I suspect there’s a category error lurking here. It’s not so much that there’s “work” and “life” sitting in separate categories. Rather, there’s stuff I have to do and stuff I want to do. For many people — most, maybe — these latter categories conflate respectively with “work” and “life”, and give rise to a desire for work/life balance: avoiding doing too much stuff that you only do because you have to, to pay the bills. But if you’re fortunate enough to have a job that you love, then “work” and “life” don’t represent the way you divide up your life.

“I don’t think of work as work and play as play. It’s all living.” (Richard Branson)
And this, I suspect, is a confusion that lies at the heart of the happiness debate. “Work” and “life” aren’t actually the categories we want to measure: they’re simply proxies acting as measurable analogues for happiness, and like any proxy they’re good for some people and lousy for others. A better metric might be the “want-to/have-to” balance: the ratio of time spent on things you do because you want to do them to time spent on things you do because you have to, or feel you should. It has the advantage of being directly measurable, broadly applicable both to those who love and to those who hate their jobs, and more plausible as a proxy for happiness. I have no idea what sort of value one would target as a “good” ratio, but at least it’s a more scientific place to start from.
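For what it’s worth, here is a toy sketch, in Python, of how one might compute such a ratio from a rough time log. The activities, hours and want/have labels are all invented for illustration; the point is only that the arithmetic is trivial once you decide to record the split.

    # Toy sketch: compute a "want-to/have-to" ratio from a rough weekly time log.
    # All the activities and hours below are invented for illustration.
    week = [
        ("research",          10.0, "want"),
        ("teaching",           8.0, "want"),
        ("marking",            6.0, "have"),
        ("committee meetings", 4.0, "have"),
        ("photography",        5.0, "want"),
        ("household chores",   7.0, "have"),
    ]

    want = sum(hours for _, hours, kind in week if kind == "want")
    have = sum(hours for _, hours, kind in week if kind == "have")

    print(f"want-to hours: {want}, have-to hours: {have}")
    print(f"want-to/have-to ratio: {want / have:.2f}")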

The shifting balance of university power

The shifts in economic power are being mirrored in the university sector, both in education and research. It’s happened before.

The global financial crisis has exposed a lot of unfunded holes in different parts of the economy, and the resulting cuts and re-prioritisations are affecting the ways in which a lot of organisations operate. Universities find themselves uncharacteristically in the front line of this process.

In terms of teaching, the sudden enormous increase in fees in England is currently being resisted — futilely, I think — in Scotland. The shifting of the burden onto students will have a long-lasting impact because, as is not often realised, the increase in fees is being coupled with a projected decrease, applied differentially across subjects, in core State funding for teaching. This means that the huge influx of money from fees will be largely offset by a decrease in other funding: the universities will be no better off.

In research, there is already a shift in the amounts of money available from funding agencies, as well as in the ways that money is distributed. Crudely put, in future we’ll see a smaller number of larger grants awarded to larger institutions that already have significant research funding from these same sources: the funding bodies will follow their own money to reduce risk.

We have no idea what impact these changes will have on the quality of education, research, innovation or scholarship, or on the rankings that (very imperfectly) track these features. What we do know is that they’re all intertwined, and that major shifts in the global balance of quality in education and research are not just possible, but likely.

People looking at the university rankings tend to think that they reflect a long-standing, established assessment that changes only peripherally as “new” universities improve. This is actually very far from being the case. To see why, we need to consider the history of universities and their evolving quality relative to each other over the past six to seven hundred years. To simplify, I’ll focus on what we might regard as the modern, Western model of universities and ignore the university-like institutions in the Islamic caliphate, the House of Wisdom and the like — although I really shouldn’t, and it’d be good to see how they fit into the story.

The designation of “best university in the world,” whatever that may mean, has shifted several times. Initially it went to the University of Bologna as the first modern, Western university. But it soon shifted in the twelfth century to the University of Paris, largely through the controversial fame of Peter Abelard — an uncharacteristically scandal-prone academic. Over the course of the next few centuries the centre of the academic world moved again, to Oxford and Cambridge. So far so familiar — except that the dynamism that pushed these institutions forward didn’t sustain itself. By the late nineteenth century the centre of research and teaching in physics and mathematics had shifted to Germany — to the extent that a research career almost required a stint at a German institution. Oxford and Cambridge were largely reduced to teaching the sons of rich men. That’s not to say that the Cavendish Laboratory and the like weren’t doing excellent work: it’s simply to recognise that Germany was “where it’s at” for the ambitious and talented academic.

When people think of Einstein, they mostly know that he worked for a large part of his career at the Institute for Advanced Study in Princeton.
What isn’t often appreciated is that this wasn’t the pinnacle of his career — that came earlier, when he was awarded a chair at the University of Berlin. In the early twentieth century the US Ivy League was doing what Oxford and Cambridge had been doing fifty years earlier: acting as bastions of privilege. It took the Second World War, the Cold War and the enormous improvements in funding, governance and access to elevate the American institutions to their current levels of excellence.

All this is to simplify enormously, of course, but the location of the pre-eminent universities has shifted far more, and far faster, than is generally appreciated: Italy, France, England, Germany, the US. It isn’t in any sense fixed.

Many people would expect China to be next. It’s not so long ago that Chinese universities were starved of investment and talent, as the best minds came to the West. This is still pretty much the case, but probably won’t be for much longer. There are now some extremely impressive universities in China, both entirely indigenous and joint ventures with foreign institutions. (I’m involved in a project with Xi’an Jiaotong-Liverpool University, a joint venture between China and the UK.) It’s only a matter of time before some of these institutions are recognised as being world-class.

Whether they become paramount or not depends on a lot of factors: funding, obviously, of which there is currently a glut, for facilities, faculty and bursaries. But there’s more to it than that. They have to become places where people want to live, can feel valued, and can rise to the top on their merits. You will only attract the best people if those people know their careers are open-ended and can grow as they do. The pitfalls include appealing solely to a small and privileged demographic, one selected by its ability to pay and to act as patrons to otherwise weak and under-funded institutions, and focusing on pre-selected areas to the exclusion of others. Both of these are actually symptoms of the same problem: a desire to “pick winners,” avoid risk, and score well against metrics that can never capture the subtleties involved in building world-class institutions of learning.

Call for participation: summer school in cloud computing

St Andrews is hosting a summer school in the theory and practice of cloud computing.

St Andrews Summer School in Cloud Computing

The Large-Scale Complex IT Systems Programme (www.lscits.org) and the Scottish Informatics and Computer Science Alliance (www.sicsa.ac.uk) are organising a summer school in June 2011 on Cloud Computing. This is aimed at PhD students who are interested in using the cloud to support their work (not just cloud computing researchers). The summer school will include presentations by eminent invited speakers, practical work on cloud development and networking opportunities.

The registration cost is £500, which includes all meals and accommodation. Funding is available from LSCITS to cover this cost for EPSRC-supported PhD students, and from SICSA for PhD students working in Scotland.

For more information, see our web site: http://sites.google.com/site/cloudcomputingsummerschool2011/

Some data is just too valuable

The recent furore over tracking mobile phones is a warning of what happens when sensor data goes too public.

Over the past week first Apple and then Google have been caught capturing location traces from users’ phones without their knowledge or consent. The exposure has been replete with denials and nuanced responses — and indeed a not-so-nuanced response purportedly from Steve Jobs. This is almost certain to end up in court, especially in Europe. The details, both technical and managerial (who knew what), are still unclear. It’s just about possible that one or both cases were caused by over-enthusiastic engineers without management approval, and we should of course suspend judgment until more details emerge. We can, however, explore possible motives.

The Apple case involves tracking a phone over a protracted period of time and storing the resulting trace as GPS co-ordinates in an unencrypted file. This is invisible to “normal” users, but is visible when a phone has been “jailbroken,” and is sync’ed with the user’s desktop. As soon as the story broke there was an application available for Macs to overlay the trace onto a map. We did this with a colleague’s phone, and the trace goes back around a year. In places it’s surprisingly accurate; in others it shows clear signs of distortion and location artifacts. Apparently Google Android performs similar tracking, but over shorter periods and with the results stored less openly. It’s not quite clear whether the traces recorded on either handset are forwarded back to their respective mother-ships, but that’s certainly a risk. In the Apple case, traces are collected even if location-based services have been switched off at the application level.

It has to be said that this whole affair is something of a disaster, both for the companies concerned and for location-based and pervasive services in general. What the fall-out will be is unclear: will people move back to “dumb” phones to protect their privacy? Experience suggests not: the convenience is too great, and many people will conclude that the loss of privacy they’ve experienced, though unacceptable per se, is insignificant compared to the overall benefits of smartphones. Indeed, if one were being particularly conspiracy-minded, one might suspect that the whole affair is a set-up. Management (one might argue) must have known that they’d be found out, and so made sure that the tracking they were performing was benign (no transmission back to base) so that, when the story broke, people would be momentarily offended but would afterwards be inured to the idea of their phones tracking their locations (it happened before and had no bad consequences, so why worry about it?) even if those traces were later pooled centrally.

To a company, some personal data is so valuable that it’s worth risking everything to collect it. The reason location traces are important has nothing to do with the exposure of the occasional adulterer or criminal, and everything to do with profiling and advertising. Big data isn’t just lots of small data collected together: it’s qualitatively different, and allows very different operations to be performed. A smartphone is a perfect personal-data-collection platform, being ubiquitous, always on, and increasingly integrated into its owner’s everyday life. The Holy Grail of mobile e-commerce is to be able to offer advertisements to people just at the moment when the products being advertised are at their most valuable.
Doing this requires advertisers to profile people’s activities and interests and respond to subtle cues in real time.

Imagine what a trace of my every movement reveals. It can show, to first order, how often I travel (and therefore my annual travel spend), what I see, and how I see it. Combined with other information like my travel history, booking references and internet search history, it can show what I’m interested in and when I’m likely to be doing it, to a surprisingly high accuracy. It can identify that I commute, my home and work locations, my commute times and other routines. If I travel with other smartphone users it can identify my friends and colleagues simply from proximity — and that’s before someone mines my social network explicitly using Facebook or LinkedIn.

Now consider what this profile is worth to an advertiser. Someone with access to it, and with access to the platform on which I browse the web, can inject advertisements tailored specifically to what I’m doing at any particular moment, and to what I may be doing in the future. It can offer distractions when I’m commuting, offers for my next holiday before I leave, and group discounts if I persuade my friends to join in with some offer. Basically, a location trace is almost incalculably valuable: certainly worth an initial bout of injurious comment and lawsuits if it leads to legitimately getting hold of that kind of data. With spam email one needs response rates in the fractions of a percent to make a profit: with access to a detailed user profile one could get massively better responses and make a quite astonishing amount of money.

Location tracing, like loyalty cards, gains most of its value from information asymmetry: the data provider thinks that a few vouchers are good value for the data being given away, even though the actual value is enormously more than that. If you’re not paying for the product, then you are the product. It’s not necessarily dystopian, may lead to better services, and may in fact be the price we pay for keeping basic services free on the web, given the unexpected absence of other viable business models. But it’s not something to be given away for free, or without consideration of where and to whom the benefits accrue, and what a properly constituted fair trade-off would be.
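To make the profiling point concrete, here is a small sketch, in Python and on entirely invented data, of how even a naive analysis of a timestamped location trace can guess someone’s home and work locations from nothing more than time of day. It makes no claim about how Apple, Google or any advertiser actually does this; it just shows how little cleverness is needed.

    # Minimal sketch: guess "home" and "work" locations from a timestamped trace.
    # The trace below is fabricated; a real trace would come from the phone itself.
    from collections import Counter
    from datetime import datetime

    home, work = (56.340, -2.795), (56.341, -2.813)    # invented co-ordinates
    trace = []
    for day in range(5):                               # one working week
        for hour in range(24):
            when = datetime(2011, 4, 25 + day, hour)   # Mon 25 to Fri 29 April 2011
            lat, lon = work if 9 <= hour < 17 else home
            trace.append((when, lat, lon))

    def busiest_cell(points, cell=0.001):
        """Snap co-ordinates to a coarse grid and return the most-visited cell."""
        cells = Counter((round(lat / cell) * cell, round(lon / cell) * cell)
                        for _, lat, lon in points)
        return cells.most_common(1)[0][0]

    overnight = [p for p in trace if p[0].hour < 6 or p[0].hour >= 22]
    office_hours = [p for p in trace if 10 <= p[0].hour < 16]

    print("probable home:", busiest_cell(overnight))
    print("probable work:", busiest_cell(office_hours))

Everything else in the profile (commute times, routines, co-travellers) falls out of equally simple queries over the same data.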

Adventures at either end of the performance spectrum

Over the past week I’ve been playing with some very small machines intended as sensor network nodes. Paradoxically, this has involved deploying a ridiculous amount of computing power.

Most of my work on sensor networks is at the level of data and sensors, not hardware. I was feeling the need to get my hands dirty, so I bought an Arduino, an open-source prototyping platform that’s actually somewhat less capable than many of the nodes we work with. Arduinos are basically a hobbyist platform, and are often looked down upon by professionals as toys.

I think these criticisms are unfair. Firstly, Arduinos massively simplify software development by abstracting away a lot of the complexities that simply aren’t needed in many applications. Secondly, unlike a lot of sensor network hardware, they’re mainstream and will benefit from competition, economies of scale and the like in a way that more specialised kit probably never will. Thirdly, as the centres of an ecosystem of other boards, they can focus on doing one function — co-ordination — and let the daughter boards focus on their own functions, rather than tying everything together. In some ways this makes hardware more like software, and more amenable to software-like rapid development cycles. It means that each component can move up its own learning curve independently of the others, and not hold everything back to the speed of the slowest (and often hardest-to-improve) component. That has been the unfortunate outcome several times in the past: I’m reminded strongly of the demise of the Transputer, which lost its early lead by trying to be too integrated. (That’s an interesting story for another time.)

One good example of Arduino re-use is that it can interface to Zigbee radios, specifically to Digi’s range of XBee modules. Zigbee is the latest-and-greatest short-range wireless protocol, and Arduino kit can interface directly to it rather than relying on an integrated radio sub-system. The radios mesh together and do all sorts of other fun stuff that’s great for sensor systems, and I’m looking forward to understanding them better.

However, getting XBee radios to work often involves re-flashing their firmware to make sure they take on the appropriate role in the mesh network. The tool that Digi provides to do this (unhelpfully called X-CTU) only runs on Windows. As might not be a complete surprise, I don’t have any Windows machines. I doubt I’m unusual in this: if you’re the sort of person who’s likely to play around with hobbyist hardware, there’s a reasonable chance that you run Linux and/or Mac OS X as your main or only operating system. So building kit for the hobby hardware market that relies on Windows-only tools is short-sighted. And unnecessary: there are plenty of cross-platform user interface tools available for C, or they could just write it in Java.

By a strange quirk I also don’t have an Intel-based Linux machine at the moment, so I was left in something of a quandary as to how to run the necessary firmware tools. Solving it takes us to the other end of the performance spectrum. The solution was to run X-CTU under Wine on Linux, with the Linux in question being a Debian installation running virtualised under VirtualBox on my MacBook Air.
To put it another way, I created a virtual stand-alone PC on my Mac, within which I installed Linux, which therefore thought it was running on its own separate machine, within which I installed a Windows compatibility layer and ran X-CTU — all to change the firmware on a radio with significantly less computational power than a central heating thermostat.

It’s things like this that make one realise how ludicrously, insanely overpowered modern computers are. The Mac can run a three-layer stack like this without any problem at all, and can still do a load of other stuff simultaneously. And it’s a laptop, and not one noted for being massively powerful by modern standards. It seems rather perverse to need to deploy this kind of power to work with such tiny machines.

I think there are several lessons. Computing power really is ridiculously cheap — so cheap that it’s not worth worrying about, and we haven’t come close to hitting a plateau in practical terms yet. But that abundance doesn’t extend to the sensor nodes themselves: programming them is still an exercise in frugality, requiring a completely different discipline and skill-set that may not be common in programmers of more recent vintage. If this gap is going to remain — and I think it is — it’s something we need to consider in the ways we teach computer science.
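As a footnote: once the firmware is right, you don’t need anything nearly so heavyweight just to talk to an XBee from the host side. Here is a rough sketch of the idea in Python using pyserial; the serial device name is a placeholder for whatever your USB adapter appears as, and the module is assumed to be at its factory-default 9600 baud.

    # Rough sketch: query an XBee over a serial link using its AT command mode.
    # The device name below is a placeholder; 9600 baud is the XBee factory default.
    import time
    import serial  # pyserial

    PORT = "/dev/tty.usbserial-XXXX"   # placeholder: your USB-serial adapter
    BAUD = 9600

    def at_command(ser, cmd):
        """Send an AT command (e.g. 'ATVR') and return the module's raw reply."""
        ser.write(cmd.encode() + b"\r")
        time.sleep(0.2)
        return ser.read(100).strip()

    ser = serial.Serial(PORT, BAUD, timeout=1)
    time.sleep(1)                 # guard time before the escape sequence
    ser.write(b"+++")             # enter command mode (no carriage return)
    time.sleep(1)                 # guard time after it
    print("command mode:", ser.read(100).strip())    # expect b'OK'
    print("firmware version:", at_command(ser, "ATVR"))
    print("PAN ID:", at_command(ser, "ATID"))
    at_command(ser, "ATCN")       # leave command mode again
    ser.close()

Re-flashing the firmware itself is another matter, of course: for that you’re still stuck with X-CTU.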