Context-aware systems are intended to follow and augment user-led, real-world processes. These differ somewhat from traditional workflow processes, but share some characteristics. Might the techniques used to implement business processes via web service orchestration fit into the context-aware landscape too?
These ideas arose as a result of discussions at the PMMPS workshop at PERVASIVE 2010 in Helsinki. In particular, I was thinking about comments Aaron Quigley made in his keynote about the need to build platforms and development environments if we’re to avoid forever building just demos. The separation of function from process seems to be working in the web world: might it work in the pervasive world too?
In building a pervasive system we need to integrate several components (sketched in code after the list):
- A collection of sensors that allow us to observe users in the real world
- A user or situation model describing what the users are supposed to be doing, in terms of the possible observations we might make and inferences we might draw
- A context model that brings together sensor observations and situations, allowing us to infer the latter from a sequence of the former
- Some behaviour that’s triggered depending on the situation we believe we’re observing
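As a rough sketch of that decomposition — the names and interfaces here are purely illustrative, not any particular platform's API — the separation might look something like this in Python:

```python
from abc import ABC, abstractmethod
from typing import Iterable

class Sensor(ABC):
    """A source of observations about users in the real world."""
    @abstractmethod
    def observations(self) -> Iterable[str]: ...

class SituationModel(ABC):
    """The situations a user might be in, in terms of possible observations."""
    @abstractmethod
    def situations(self) -> set[str]: ...

class ContextModel(ABC):
    """Brings sensors and situations together: infers the latter from the former."""
    @abstractmethod
    def infer(self, observations: Iterable[str]) -> str: ...

class Behaviour(ABC):
    """Something the system does when a situation is believed to hold."""
    @abstractmethod
    def trigger(self, situation: str) -> None: ...
```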
Most current pervasive systems have quite simple versions of all these components. The number of sensors is often small — sometimes only one or two, observing one user. The situation model is more properly an *activity* model, in that it classifies a user’s immediate current activity independently of any other activity at another time. The context model encapsulates a mapping from sensors to activities, which then manifests itself in activating or deactivating a single behaviour. Despite their simplicity, such systems can perform a lot of useful tasks.
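To make that concrete, a minimal (and entirely hypothetical) instance of such a system might map one sensor reading to one activity and toggle a single behaviour, with no history at all:

```python
# A deliberately simple context model: one sensor, one activity,
# one behaviour. All names and thresholds are illustrative.
def classify(reading: float) -> str:
    """Classify a single accelerometer-style reading as an activity."""
    return "walking" if reading > 0.5 else "idle"

def on_activity(activity: str) -> None:
    """Activate or deactivate a single behaviour from the activity."""
    if activity == "walking":
        print("start navigation hints")
    else:
        print("stop navigation hints")

on_activity(classify(0.8))   # -> start navigation hints
```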
However, pervasive activities clearly live on a timeline: you leave home *and then* walk to work *and then* enter your office *and then* check your email, and so forth. One can treat these activities as independent, but that might lose continuity of behaviour, when what you want to do depends on the route by which you got to a particular situation. Alternatively we could treat the timeline as a process, and track the user’s progress along it, in the manner of an office workflow.
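Treating the timeline as a process might look like the following sketch, which tracks a user's position along a fixed sequence of activities (a toy model, not a real workflow engine):

```python
# A toy linear workflow: advance a cursor when the next expected
# activity is observed, and ignore anything else.
WORKFLOW = ["leave home", "walk to work", "enter office", "check email"]

def track(events: list[str]) -> int:
    """Return how far along the workflow the event stream gets."""
    position = 0
    for event in events:
        if position < len(WORKFLOW) and event == WORKFLOW[position]:
            position += 1
    return position

print(track(["leave home", "buy coffee", "walk to work"]))  # -> 2
```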
Of course the problem is that users don’t actually follow workflows like this — or, rather, they tend to interleave actions, perform them in odd orders, leave bits out, drop one process and do another before picking up the first (or not), and so on. So pervasive workflows aren’t at all like “standard” office processes. Nor are they discriminated from other workflows (and non-workflow activities) happening simultaneously in the same space, with the same people and resources involved. In some simple systems the workflow actually is “closed”, for example computer theatre (Pinhanez, Mase and Bobick. Interval scripts: a design paradigm for story-based interactive systems. Proceedings of CHI ’97, 1997.) — but in most cases it’s “open”. So the question becomes: how do we describe “loose” workflows in which there is a sequence of activities, each one of which reinforces our confidence in later ones, but which contain noise and extraneous activities that interfere with the inferencing?
There are several formalisms for describing sequences of activities. The one that underlies Pinhanez’ work mentioned above is Allen algebra (Allen and Ferguson. Actions and events in interval temporal logic. Journal of Logic and Computation 4(5), pp. 531–579. 1994.), which provides a notation for specifying how intervals of time relate: an interval *a* occurs strictly before another *b*, for example, which in turn contains wholly within it another interval *c*. It’s easy to see how such a specification provides a model for how events from the world should be observed: if we see that *b* has ended, we can infer that *c* has ended also, because we know that *c* is contained within *b*, and so forth. We can do this even if we don’t — or can’t — directly observe the end of *c*. However, this implies that we can specify the relationships between intervals precisely. If we allow multiple possible relationships between a pair of intervals, the inferencing power degrades rapidly.
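A fragment of this inference is easy to express in code. The sketch below hard-codes just two Allen relations (*before* and *during*) and propagates “interval ended” conclusions, on the assumption that each pair of intervals has exactly one known relation:

```python
# Minimal sketch of Allen-style inference: if c is "during" b and we
# observe that b has ended, we may conclude c has ended too, even
# without observing c's end directly.
DURING = {("c", "b")}   # c lies wholly within b
BEFORE = {("a", "b")}   # a ends strictly before b starts

def ended(observed_ended: set[str]) -> set[str]:
    """Close a set of known-ended intervals under the two relations."""
    result = set(observed_ended)
    changed = True
    while changed:
        changed = False
        for inner, outer in DURING:
            if outer in result and inner not in result:
                result.add(inner)
                changed = True
        for earlier, later in BEFORE:
            # if a later interval is over, strictly earlier ones are too
            if later in result and earlier not in result:
                result.add(earlier)
                changed = True
    return result

print(ended({"b"}))   # -> {'a', 'b', 'c'}
```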
Another way to look at things is to consider what “noise” means. In terms of the components we set out earlier, noise is the observation of events that don’t relate to the process we’re trying to observe. Suppose I’m trying to support a “going to work” process. If I’m walking to work and stop at a shop, for example, this doesn’t interfere with my going to work — it’s “noise” in the sense of something that happened that doesn’t contradict what we expected to see. On the other hand if, after leaving the shop, I go home again, that might be considered “not noise”, in the sense of something that happened that contradicts the model we have of the process.
As well as events that support a process, we also have events that contradict it, and events that provide no information.
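One way to frame this three-way distinction is as an explicit classification of each event against a process model — an assumption of this sketch, not an established formalism:

```python
from enum import Enum, auto

class Evidence(Enum):
    SUPPORTS = auto()      # expected by the process model
    CONTRADICTS = auto()   # incompatible with the process model
    NEUTRAL = auto()       # noise: no information either way

# Illustrative classification for a "going to work" process.
def classify_event(event: str) -> Evidence:
    if event in {"leave home", "walk to work", "enter office"}:
        return Evidence.SUPPORTS
    if event == "enter home":
        return Evidence.CONTRADICTS
    return Evidence.NEUTRAL   # e.g. "enter shop"
```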
Human-centred processes are therefore stochastic, and we need a stochastic process formalism. I’m not aware of any that really fit the bill: process algebras seem too rigid. Markov processes are probably the closest, but they’re really designed to capture the frequencies with which paths are taken rather than detours and the like. Moreover we need to enrich the event space so that observations support or refute hypotheses as to which process is being followed and where we are within it. This is rather richer than the normal setting, in which events are purely confirmatory. In essence what we have is *process as hypothesis*, in which we try to confirm that this process is indeed happening, and where we are in it, using the event stream.
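A crude way to operationalise “process as hypothesis” is Bayesian-style weighting: keep a belief for each candidate process and re-weight it as events arrive. The hypotheses and likelihoods below are invented purely for illustration:

```python
# Sketch: each hypothesis assigns a likelihood to each event; beliefs
# are re-weighted and normalised as the event stream unfolds.
HYPOTHESES = {
    "going to work":  {"leave home": 0.9, "enter shop": 0.3, "enter office": 0.9},
    "going shopping": {"leave home": 0.9, "enter shop": 0.9, "enter office": 0.1},
}

def update(beliefs: dict[str, float], event: str) -> dict[str, float]:
    """Multiply in each hypothesis's likelihood for the event, then normalise."""
    weighted = {h: beliefs[h] * HYPOTHESES[h].get(event, 0.5)
                for h in beliefs}
    total = sum(weighted.values())
    return {h: w / total for h, w in weighted.items()}

beliefs = {"going to work": 0.5, "going shopping": 0.5}
for event in ["leave home", "enter shop"]:
    beliefs = update(beliefs, event)
print(beliefs)   # shopping (0.75) now more likely than work (0.25)
```

Note that this sketch only weighs *which* process is running; tracking *where we are* in each process would need per-hypothesis state as well.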
It’s notable, though, that we can describe a process separately from the probabilities that constrain how it’s likely to evolve. That suggests to me that we might need an approach like BPEL, where we separate the description of the process from the actions we take as a result, and also from the ways in which we move the process forward. In other words, we have a description of *what* it means to go to work, expressed separately from *how* we confirm that this is what’s being observed in terms of sensors and events, and separated further from *what happens* as a result of this process being followed. That sounds a lot easier than it is, because some events are confirmatory and some aren’t. Furthermore we may have several processes that can be supported by observations up to a point and then diverge: going to work and going shopping are pretty similar until I go into a shop, and/or until I leave the shop and don’t go to work. How do we handle this? We could enforce common-prefix behaviour, but that breaks the separation between process and action. We could insist on “undo” actions for “failed” processes no longer supported by the observations, which severely complicates programming and might lead to interactions between different failed processes. Clearly there’s something missing from our understanding of how to structure more complex, temporally elongated behaviours, and it’ll need significant work to get right.
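To make the BPEL-style separation above concrete, here is a sketch with three deliberately decoupled artefacts — a process description, a confirmation function from events to steps, and an action fired on progress. None of this is a real workflow language; every name is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Process:
    name: str
    steps: list[str]                      # WHAT the process is

ConfirmFn = Callable[[str, str], bool]    # HOW a step is confirmed by an event
ActionFn = Callable[[str], None]          # WHAT HAPPENS when a step completes

def drive(process: Process, events: list[str],
          confirm: ConfirmFn, act: ActionFn) -> None:
    """Advance the process along the event stream, firing actions."""
    position = 0
    for event in events:
        if position < len(process.steps) and confirm(process.steps[position], event):
            act(process.steps[position])
            position += 1

going_to_work = Process("going to work",
                        ["leave home", "walk to work", "enter office"])
drive(going_to_work,
      ["leave home", "enter shop", "walk to work"],
      confirm=lambda step, event: step == event,
      act=lambda step: print(f"confirmed: {step}"))
```

Even this toy exposes the open problems in the text: it has no notion of a contradictory event, and no way to “undo” the actions of a process the observations no longer support.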