
Posts about pervasive systems

Pervasive healthcare

This week's piece of shameless self-promotion: a book chapter on how pervasive computing and social media can change long-term healthcare.

My colleague Aaron Quigley and I were asked to contribute a chapter to a book put together by Jeremy Pitt as part of PerAda, the EU network of excellence in pervasive systems. We were asked to think about how pervasive computing and social media could change healthcare. This is something quite close to both our hearts -- Aaron perhaps more so than me -- as it's one of the most dramatic examples of how pervasive computing can really make an impact on society.

There are plenty of examples of projects that attempt to provide high-tech solutions to the issues of independent living -- some of which we've been closely involved with. For this work, though, we suggest that one of the most cost-effective contributions technology can make might actually be centred around social media. Isolation really is a killer, in a literal sense. A lot of research indicates that social isolation is a massive risk factor for both physiological and psychological illness, and it's something that's likely to get worse as populations age.

Social media can help address this, especially in an age when older people have circles of older friends, and when those friends and family can be far more geographically dispersed than in former times. This isn't to suggest that Twitter and Facebook are cures for any social ill, but rather that the services they might evolve into could be of enormous utility for older people. Not only do they carry traffic between people, they can be mined to determine whether users' activities are changing over time, to identify situations where support is needed, and so to provide unintrusive medical feedback -- as well as opening up massive issues of privacy and surveillance. While today's older generation are perhaps not fully engaged with social media, future generations undoubtedly will be, and it's something to be encouraged.

Other authors -- some of them leading names in the various aspects of pervasive systems -- have contributed chapters about implicit interaction, privacy, trust, brain interfaces, power management, sustainability, and a range of other topics in accessible form.

The book has a web site (of course), and is available for pre-order on Amazon. Thanks to Jeremy for putting this together: it's been a great opportunity to think more broadly than we often get to as research scientists, and to see how our work might help make the world a more liveable place.

Some data is just too valuable

The recent furore over tracking mobile phones is a warning of what happens when sensor data goes too public.

Over the past week first Apple and then Google have been caught capturing location traces from users' phones without their knowledge or consent. The exposure has been replete with denials and nuanced responses -- and indeed a not-so-nuanced response purportedly from Steve Jobs. This is almost certain to end up in court, especially in Europe.

The details, both technical and managerial who-knew-what, are still unclear. It's just about possible that one or both cases were caused by over-enthusiastic engineers without management approval, and we should of course suspend judgment until more details emerge. We can, however, explore possible motives.

The Apple case involves tracking a phone over a protracted period of time and storing the resulting trace as GPS co-ordinates in an unencrypted file. This is invisible to "normal" users, but is visible when a phone has been "jailbroken," and is synced with the user's desktop. As soon as the story broke there was an application available for Macs to overlay the trace onto a map. We did this with a colleague's phone, and the trace goes back around a year. In places it's surprisingly accurate; in others it shows clear signs of distortion and location artifacts. Apparently Google Android performs similar tracking, but over shorter periods and with the results stored less openly. It's not quite clear whether the tracks recorded in either handset are forwarded back to their respective mother-ships, but that's certainly a risk. In the Apple case, traces are collected even if location-based services have been switched off at the application level.

It has to be said that this whole affair is something of a disaster, both for the companies concerned and for location-based and pervasive services in general. What the fall-out will be is unclear: will people move back to "dumb" phones to protect their privacy? Experience suggests not: the convenience is too great, and many people will conclude that the loss of privacy they've experienced, though unacceptable per se, is insignificant compared to the overall benefits of smartphones.

Indeed, if one were being particularly conspiracy-minded one might suspect that the whole affair is a set-up. Management (one might argue) must have known they'd be found out, and so made sure that the tracking they were performing was benign (no transmission back to base) so that, when the story broke, people would be momentarily offended but would afterwards be inured to the idea of their phones tracking their locations (it happened before and had no bad consequences, so why worry about it?) even if those traces were later pooled centrally.

To a company, some personal data is so valuable that it's worth risking everything to collect it.

The reason location traces are important has nothing to do with the exposure of the occasional adulterer or criminal, and everything to do with profiling and advertising. Big data isn't just lots of small data collected together: it's qualitatively different and allows very different operations to be performed. A smartphone is a perfect personal-data-collection platform, being ubiquitous, always-on and increasingly used to integrate its owner's everyday life. The Holy Grail of mobile e-commerce is to be able to offer advertisements to people just at the moment when the products being advertised are at their most valuable. Doing this requires advertisers to profile people's activities and interests and respond to subtle cues in real time.

Imagine what a trace of my every movement reveals. It can show, to first order, how often I travel (and therefore my annual travel spend), what I see, and how I see it. Combined with other information like my travel history, booking references and internet search history, it can show, to surprisingly high accuracy, what I'm interested in and when I'm likely to be doing it. It can identify that I commute, my home and work locations, my commute times and other routines. If I travel with other smartphone users it can identify my friends and colleagues simply from proximity -- and that's before someone mines my social network explicitly using Facebook or LinkedIn.
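
To make this concrete, here's a minimal sketch (in Python, with invented coordinates) of the kind of inference involved: bucket a timestamped trace into rough cells, and the cell occupied most often overnight is probably home, while the one occupied most during office hours is probably work.

```python
from collections import Counter
from datetime import datetime

# Hypothetical trace: (ISO timestamp, latitude, longitude) samples.
trace = [
    ("2011-04-20T02:10:00", 56.340, -2.800),   # overnight -> home?
    ("2011-04-20T03:40:00", 56.340, -2.800),
    ("2011-04-20T10:15:00", 56.341, -2.793),   # office hours -> work?
    ("2011-04-20T14:30:00", 56.341, -2.793),
    ("2011-04-21T01:05:00", 56.340, -2.800),
]

def most_common_cell(samples, hours):
    """Most-visited location, rounded to roughly 100m cells, in the given hours."""
    cells = Counter(
        (round(lat, 3), round(lon, 3))
        for ts, lat, lon in samples
        if datetime.fromisoformat(ts).hour in hours
    )
    return cells.most_common(1)[0][0] if cells else None

home = most_common_cell(trace, hours=range(0, 6))    # midnight to 6am
work = most_common_cell(trace, hours=range(9, 17))   # 9am to 5pm
print("home ~", home, "work ~", work)
```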

Now consider what this profile is worth to an advertiser. Someone with access to it, and to the platform on which I browse the web, can inject advertisements tailored specifically to what I'm doing at any particular moment, and to what I may be doing in the future. It can offer distractions when I'm commuting, deals for my next holiday before I leave, and group discounts if I persuade my friends to join in with some offer. Basically a location trace is almost incalculably valuable: certainly worth an initial bout of injurious comment and lawsuits if it leads to getting one's hands legitimately on that kind of data.

With spam email one needs response rates in the fractions of percent to make a profit: with access to a detailed user profile one could get massively better responses and make a quite astonishing amount of money.
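
To see why, run the arithmetic. All the figures below are invented purely for illustration:

```python
# Illustrative arithmetic only: every figure below is invented.
cost_per_message = 0.0001    # sending spam is almost free
revenue_per_sale = 10.0      # assumed profit from one conversion

# Break-even response rate for untargeted spam:
break_even = cost_per_message / revenue_per_sale
print(f"spam breaks even at a {break_even:.4%} response rate")   # 0.0010%

# A profiled, well-timed advertisement might plausibly convert orders
# of magnitude better; even a 1% response transforms the economics.
targeted_rate = 0.01
profit = 1000 * (targeted_rate * revenue_per_sale - cost_per_message)
print(f"profit per 1000 targeted messages: {profit:.2f}")        # 99.90
```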

Location tracing, like loyalty cards, gains most of its value from information asymmetry: the data provider thinks a few vouchers are good value for the data being given away, even though the actual value is enormously greater. If you're not paying for the product, then you are the product. This isn't necessarily dystopian: it may lead to better services, and may in fact be the price we pay for keeping basic services free on the web, given the unexpected lack of other viable business models. But it's not something to be given away for free, or without consideration of where and to whom the benefits accrue and what a properly constituted fair trade-off would be.

The next challenges for situation recognition

As a pervasive systems research community we're doing quite well at automatically identifying simple things happening in the world. What is the state of the art, and what are the next steps?

Pervasive computing is about letting computers see and respond to human activity. In healthcare applications, for example, this might involve monitoring an elderly person in their home and checking for signs of normality: doors opening, the fridge being accessed, the toilet flushing, the bedroom lights going on and off at the right times, and so on. A collection of simple sensors can provide the raw observational data, and we can monitor this stream of data for the "expected" behaviours. If we don't see them -- no movement for over two hours during daytime, for example -- then we can sound an alarm and alert a carer. Done correctly, this sort of system can literally be a life-saver; it can also make all the difference for people with degenerative illnesses living in their own homes.
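
At its simplest, the "no movement for two hours during daytime" rule amounts to little more than the following toy sketch (the hours and threshold are assumptions, and the alert action is a placeholder):

```python
from datetime import datetime, timedelta

DAYTIME = range(8, 22)           # assume 8am-10pm counts as daytime
THRESHOLD = timedelta(hours=2)   # the alarm rule from the text

def should_alert(last_event_time, now):
    """True if a carer should be alerted: no sensor event for over
    two hours at a time when the occupant would normally be active."""
    return now.hour in DAYTIME and now - last_event_time > THRESHOLD

# Example: last movement at 09:00, it's now 11:30 -> raise the alarm.
last = datetime(2011, 4, 20, 9, 0)
now = datetime(2011, 4, 20, 11, 30)
if should_alert(last, now):
    print("alert carer: no activity for", now - last)
```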

The science behind these systems is often referred to as activity and situation recognition, both of which are forms of context fusion. To deal with these in the wrong order: context fusion is the ability to take several streams of raw sensor (and other) data and use it to make inferences; activity recognition is the detection of simple actions in this data stream (lifting a cup, chopping with a knife); and situation recognition is the semantic interpretation of a high-level process (making tea, watching television, in a medical emergency). Having identified the situation we can then provide an appropriate behaviour for the system, which might involve changing the way the space is configured (dimming the lights, turning down the sound volume), providing information ("here's the recipe you chose for tonight") or taking some external action (calling for help). This sort of context-aware behaviour is the overall goal.
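
Stripped to its bones, the layering might look something like this -- a toy sketch with invented event names and rules, not a real recogniser:

```python
# Toy pipeline: raw sensor events -> activities -> situation -> behaviour.
# All event names and rules are invented for illustration.

def recognise_activity(event):
    """Activity recognition: map a raw sensor event to a simple action."""
    return {
        "kettle_on": "boiling water",
        "cup_lifted": "lifting a cup",
        "fridge_open": "fetching milk",
    }.get(event)

def recognise_situation(activities):
    """Situation recognition: a semantic interpretation of the activities."""
    if {"boiling water", "lifting a cup"} <= set(activities):
        return "making tea"
    return "unknown"

def behave(situation):
    """Context-aware behaviour driven by the recognised situation."""
    if situation == "making tea":
        print("dim the lights, show tonight's recipe")

events = ["kettle_on", "fridge_open", "cup_lifted"]
activities = [a for e in events if (a := recognise_activity(e))]
behave(recognise_situation(activities))
```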

The state of the art in context fusion uses some form of uncertain reasoning, including machine learning and other techniques broadly in the domain of artificial intelligence. These are typically more complicated than the complex event processing techniques used in financial systems and the like, because they have to deal with significant noise in the data stream. (Ye, Dobson and McKeever. Situation recognition techniques in pervasive computing: a review. Pervasive and Mobile Computing. 2011.) The results are rather mixed, with a typical technique (a naive Bayesian classifier, for example) being able to identify some situations well and others far more poorly: there doesn't seem to be a uniformly "good" technique yet. Despite this we can now achieve 60-80% accuracy (by F-measure, a unified measure of the "goodness" of a classification technique) on simple activities and situations.
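
For reference, the F-measure is just the harmonic mean of a classifier's precision and recall. A quick sketch, with made-up counts:

```python
def f_measure(tp, fp, fn):
    """F1: harmonic mean of precision and recall for one situation's
    detections (tp/fp = true/false positives, fn = missed occurrences)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A recogniser that spots 70 of 100 real occurrences of "making tea",
# with 20 false alarms, lands at the upper end of the range above:
print(round(f_measure(tp=70, fp=20, fn=30), 2))   # 0.74
```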

That sounds good, but the next steps are going to be far harder.

To see what the next step is, consider that most of the systems explored have been evaluated under laboratory conditions. These allow fine control over the environment -- and that's precisely the problem. The next challenges for situation recognition come directly from the loss of control of what's being observed.

Let's break down what needs to happen. Firstly, we need to be able to describe situations in a way that lets us capture human processes. This is easy to do for another human but tricky for a computer: the precision with which we need to express computational tasks gets in the way.

For example we might describe the process of making lunch as retrieving the bread from the cupboard, the cheese and butter from the fridge, a plate from the rack, and then spreading the butter, cutting the cheese, and assembling the sandwich. That's a good enough description for a human, but most of the time isn't exactly what happens. One might retrieve the elements in a different order, or start making the sandwich (get the bread, spread the butter) only to remember that you forgot the filling, and therefore go back to get the cheese, then re-start assembling the sandwich, and so forth. The point is that this isn't programming: people don't do what you expect them to do, and there are so many variations to the basic process that they seem to defy capture -- although no human observer would have the slightest difficulty in classifying what they were seeing. A first challenge is therefore a way of expressing the real-world processes and situations we want to recognise in a way that's robust to the things people actually do.
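
One simple way to gain some of that robustness -- a sketch only, with invented action names -- is to describe a situation as a required set of actions rather than a rigid sequence, so that re-orderings and interruptions still match:

```python
# Sketch: a situation as a required *set* of actions rather than a
# sequence, so re-ordering and interruption don't break recognition.

MAKING_LUNCH = {"get bread", "get butter", "get cheese",
                "spread butter", "cut cheese", "assemble sandwich"}

def matches(situation, observed):
    """Recognised once every required action has been seen, in any
    order, with any other actions interleaved."""
    return situation <= set(observed)

# Forgot the cheese, went back for it, took a phone call in between:
observed = ["get bread", "get butter", "spread butter", "answer phone",
            "get cheese", "cut cheese", "assemble sandwich"]
print(matches(MAKING_LUNCH, observed))   # True
```

Of course a pure set loses the ordering constraints that sometimes do matter (you can't spread the butter before fetching it), which is why richer, but still permissive, process descriptions are needed.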

(Incidentally, this way of thinking about situations shows that it's the dual of traditional workflow. In a workflow system you specify a process and force the human agents to comply; in a situation description the humans do what they do and the computers try to keep up.)

The second challenge is that, even when captured, situations don't occur in isolation. We might define a second situation to control what happens when the person answers the phone: quiet the TV and stereo, maybe. But this situation could be happening at the same time as the lunch-making situation and will inter-penetrate with it. There are dozens of possible interactions: one might pause lunch to deal with the phone call, or one might continue making the sandwich while chatting, or some other combination. Again, fine for a human observer. But a computer trying to make sense of these happenings only has a limited sensor-driven view, and has to try to associate events with interpretations without knowing what it's seeing ahead of time. The fact that many things can happen simultaneously enormously complicates the challenge of identifying what's going on robustly, damaging what is often already quite a tenuous process. We therefore need techniques for describing situation compositions and interactions on top of the basic descriptions of the processes themselves.
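
One way to picture the composition problem is as several recognisers running over the same interleaved event stream, each claiming the events it can explain. A toy sketch, with all names invented:

```python
# Toy sketch: two situations inter-penetrating in one event stream.
# Each recogniser tracks its own progress through the shared stream.

situations = {
    "making lunch": {"get bread", "spread butter", "assemble sandwich"},
    "phone call":   {"phone rings", "answer phone", "hang up"},
}

stream = ["get bread", "phone rings", "spread butter",
          "answer phone", "hang up", "assemble sandwich"]

seen = {name: set() for name in situations}
for event in stream:
    for name, required in situations.items():
        if event in required:
            seen[name].add(event)
            if seen[name] == required:
                print(f"recognised: {name}")
```

Even this toy exposes the difficulty: an event that could belong to more than one situation gets credited to all of them, which is exactly the ambiguity a real recogniser has to resolve.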

The third challenge is also one of interaction, but this time involving multiple people. One person might be making lunch whilst another watches television, then the phone rings and one of them answers, realises the call is for the other, and passes it over. So as well as interpenetration we now have multiple agents generating sensor events, perhaps without being able to determine exactly which person caused which event. (A motion sensor sees movement: it doesn't see who's moving, and the individuals may not be tagged in such a way that they can be identified or even differentiated between.) Real spaces involve multiple people, and this may place limits on the behaviours we can demonstrate. But at the very least we need to be able to describe processes involving multiple agents and to support simultaneous situations in the same or different populations.

So for me the next challenges of situation recognition boil down to how we describe what we're expecting to observe in a way that reflects noise, complexity and concurrency of real-world conditions. Once we have these we can explore and improve the techniques we use to map from sensor data (itself noisy and hard to program with) to identified situations, and thence to behaviour. In many ways this is a real-world version of the concurrency theories and process algebras that were developed to describe concurrent computing processes: process languages brought into the real world, perhaps. This is the approach we're taking in a European-funded research project, SAPERE, in which we're hoping to understand how to engineer smart, context-aware systems on large and flexible scales.

Call for papers: Formal methods for Pervasive Systems

We are looking for papers on the application of formal methods to pervasive computing, for a workshop at FM'11. (Full disclosure: I'm the keynote speaker.)

Formal Methods for Pervasive Systems [Pervasive@FM2011]

A workshop held as part of FORMAL METHODS 2011

DEADLINE: 20th March 2011

WORKSHOP TOPICS

Logics, Process calculi, Automata, Specification languages, Probabilistic analysis, Model checking, Theorem-proving, Tools, Automated deduction

FOR THE

privacy, behaviour, security, reliability, interoperability, context-aware, mobility, resource requirements, temporal

ASPECTS OF

pervasive healthcare systems, sensor networks, e-commerce, cloud computing, MANETs/VANETs, telephony, device swarms, electronic tags, human-device interaction, etc.

SUBMISSIONS

Our aim is to have productive discussions and a true workshop "feel". Thus, we invite two kinds of submission:

  1. Original research papers concerning any of the above topics; or
  2. Survey papers providing an overview of some of the above topics.

Submissions should be written in English, formatted according to the Springer LNCS style, and not exceed 20 pages in length. Submissions must be made via EasyChair.

Our aim is for an informal proceedings based on these submissions to be available during the workshop. Depending upon the success of the workshop, we intend to produce an edited book based (at least in part) upon the contributions or develop a special issue of a journal.

IMPORTANT DATES

Submission deadline:          20th March 2011
Notification of acceptance:    1st May 2011
Pre-proceedings version due:  20th May 2011
Workshop:                     20th or 21st June 2011

INVITED SPEAKER

Simon Dobson (School of Computer Science, University of St. Andrews)

WORKSHOP CO-CHAIRS

Michael Fisher    (University of Liverpool, UK)
Brian Logan       (University of Nottingham, UK)

PROGRAMME COMMITTEE

Natasha Alechina    (Nottingham, UK)
Myrto Arapinis      (Birmingham, UK)
Mohamed Bakhouya    (Belfort, FR)
Doina Bucur         (INCAS3, NL)
Michael Butler      (Southampton, UK)
Muffy Calder        (Glasgow, UK)
Antonio Coronato    (CNR, IT)
Soren Debois        (Copenhagen, DK)
Giuseppe De Pietro  (CNR, IT)
Marina De Vos       (Bath, UK)
Simon Dobson        (St Andrews, UK)
Michael Fisher      (Liverpool, UK)
Michael Harrison    (Newcastle, UK)
Savas Konur         (Liverpool, UK)
Brian Logan         (Nottingham, UK)
Alessio Lomuscio    (Imperial, UK)
Ka Lok Man          (XJTLU, CN)
Julian Padget       (Bath, UK)
Anand Ranganathan   (IBM, USA)
Alessandro Russo    (Imperial, UK)
Mark Ryan           (Birmingham, UK)
Chris Unsworth      (Glasgow, UK)
Kaiyu Wan           (XJTLU, CN)

STEERING COMMITTEE

Natasha Alechina    (University of Nottingham, UK)
Muffy Calder        (University of Glasgow, UK)
Michael Fisher      (University of Liverpool, UK)
Brian Logan         (University of Nottingham, UK)
Mark Ryan           (University of Birmingham, UK)

Call for papers: Programming methods for mobile and pervasive systems

We are looking for papers on programming models, methods and tools for pervasive and mobile systems, for a workshop at the PERVASIVE conference in San Francisco.

2nd International Workshop on Programming Methods for Mobile and Pervasive Systems (PMMPS)

http://www.pmmps.org

San Francisco, California, USA, June 12, 2011. Co-located with PERVASIVE 2011

Background

Pervasive mobile computing is here, but how these devices and services should be programmed is still something of a mystery. Programming mobile and pervasive applications is more than building client-server or peer-to-peer systems with mobility, and it is more than providing usable interfaces for mobile devices that may interact with the surrounding context. It includes aspects such as disconnected and low-attention working, spontaneous collaboration, evolving and uncertain security regimes, and integration into services and workflows hosted on the internet. In the past, efforts have focused on the form of human-device interfaces that can be built using mobile and distributed computing tools, or on human-computer interface design based on, for example, the limited screen resolution and real estate provided by a smartphone.

Much of the challenge in building pervasive systems lies in reconciling users' expectations of their interactions with the system with the model of the physical and virtual environment with which they interact in the context of the pervasive application.

The aim of this workshop is to bring together researchers in programming languages, software architecture and design, and pervasive systems to present and discuss results and approaches to the development of mobile and pervasive systems. The goal is to begin the process of developing the software design and development tools necessary for the next generation of services in dynamic environments, including mobile and pervasive computing, wireless sensor networks, and adaptive devices.

Submission

Potential workshop participants are asked to submit a paper on topics relevant to programming models for mobile and pervasive systems. We are primarily seeking short position papers (2–4 pages), although full papers that have not been published and are not under consideration elsewhere will also be considered (a maximum of 10 pages). Position papers that lay out some of the challenges to programming mobile and pervasive systems, including past failures, are welcome. Papers longer than 10 pages may be automatically rejected by the chairs or the programme committee. From the submissions, the programme committee will strive to balance participation between academia and industry and across topics. Selected papers will appear on the workshop web site; PMMPS has no formal published proceedings. Authors of selected papers will be invited to submit extended versions for publication in an appropriate journal (under negotiation).

Submissions will be accepted through the workshop web site, http://www.pmmps.org

Important dates

  • Submission: 4 February 2011
  • Notifications: 11 March 2011
  • Camera-ready: 2 May 2011
  • Workshop date: 12 June 2011

Organisers

  • Dominic Duggan, Stevens Institute of Technology NJ
  • Simon Dobson, University of St Andrews UK