Call for panels in integrated network management

We’re looking for expert panels to be run at the IFIP/IEEE International Symposium on Integrated Network Management in Dublin in May 2011.

IFIP/IEEE International Symposium on Integrated Network Management (IM 2011)

Dublin, Ireland, 23-27 May 2011

Call for panels


IM is one of the premier venues in network management, providing a forum for discussing a broad range of issues relating to network structure, services, optimisation, adaptation and management. This year’s symposium has a special emphasis on effective and energy-efficient design, deployment and management of current and future networks and services. We are seeking proposals for expert discussion panels to stimulate and inform the audience as to the current “hot topics” within network management. An ideal panel will bring the audience into a discussion prompted by position statements from the panel members, taking account of both academic and industrial viewpoints. We are interested in panels on all aspects of network management, especially those related to the theme of energy awareness and those discussing the wider aspects of networks and network management. This would include the following topics, amongst others:
  • Multi-transport and multi-media networks
  • Network adaptation and optimisation
  • Co-managing performance and energy
  • The uses and abuses of sensor networks
  • The science of service design and implementation
  • Programming network management strategies
  • Tools and techniques for network management
  • Socio-technical integration of networks
  • Energy-efficiency vs equality of access
  • Network-aware cloud computing
  • The future of autonomic management
  • Coping with data obesity
  • Managing the next-generation internet

How to propose a panel

Please send a brief (1-page) proposal to the panel chairs, Simon Dobson and Gerard Parr. Your proposal should indicate the relevance of the panel to the broad audience of IM, and include the names of proposed panel members.

Important dates

Submission of panel proposals: 20 October 2010
Notifications of acceptance: mid-November 2010
Conference dates: 23-27 May 2011

Funded PhD studentship in adaptive service ecosystems

We have an opening for someone interested in doing a PhD in adaptive services at the School of Computer Science, University of St Andrews.

The opportunity

We have an opening for a talented and well-motivated individual to investigate mechanisms for the design and implementation of long-lived, responsive and adaptive ecosystems of distributed services. This work will take place in the context of the SAPERE project, funded by the EU Seventh Framework Programme. SAPERE seeks to develop both the theory and practice of “self-aware” component technology, able to orchestrate and adapt to changing requirements, constraints, conditions and technologies. The University of St Andrews is leading the strand of research into describing and identifying opportunities for adaptation, building on past work in sensor fusion and situation identification. The successful applicant will work closely with the other St Andrews staff (Prof Simon Dobson and Dr Juan Ye), as well as with the other SAPERE consortium partners. An interest in one or more of the fields of adaptive systems, pervasive computing, sensor-driven systems, uncertain reasoning and software engineering would be desirable. The studentship is funded for three years, including a stipend and all tuition fees. Please note that the conditions of funding restrict this opportunity to applicants who are nationals of an EU/EEA country.

Background

The University of St Andrews is the oldest university in Scotland (founded around 1410) and the third-oldest in the English-speaking world. Situated in a small town on the east coast of Scotland, it has a student population of around 8000 and an active research programme in a wide range of disciplines across the sciences and humanities. It has been consistently ranked in the top ten universities in the UK by a number of recent league table exercises, and has been described by the Sunday Times as “now firmly established as the leading multi-faculty alternative to Oxford and Cambridge.” The School of Computer Science had 60% of its staff rated at 4* (“world-leading in terms of originality, significance and rigour,” the highest available) or 3* in the most recent UK Research Assessment Exercise. The School’s academic staffing consists of 22 academics and over 25 post-doctoral fellows. There are also around 150 undergraduate full-time-equivalent students and 30 PhD students. To stimulate a dynamic, high-quality research environment, the School’s research strategy concentrates on three research groups, each incorporating a number of key research topics pursued independently and in collaboration. The three research groups work in the areas of Artificial Intelligence and Symbolic Computation, Networks and Distributed Systems, and Systems Engineering. Each group attracts a large amount of external funding, both from the traditional research grant agencies (EPSRC, EU, etc.) and from companies such as SUN Microsystems, ICL, Data Connection, Microsoft and BAe. The School is also a leading member of SICSA, a research alliance of leading computer science Schools across Scotland, with members of the School leading a number of core strategic activities. Within the School, the Networks and Distributed Systems group is a world-class research centre, not only for software artefacts such as programming languages, object stores and operating systems, but also in adaptive, pervasive and autonomic systems engineering and in the composition technologies used in large-scale distributed systems, such as peer-to-peer overlay mechanisms, distributed mediation, distributed termination and distributed garbage collection.

Application

To apply, please send a CV and statement of research interests by email to Prof Simon Dobson, to whom informal inquiries can also be directed. The closing date for applications is 11 October 2010.

Getting rid of the laptop

I’ve been playing with two important new pieces of technology: an iPad and a Pulse SmartPen. The combination almost makes me ready to ditch my netbook — or at least has me thinking carefully about why I still have one. The iPad is well-known; the SmartPen perhaps not so. It’s made by a company called Livescribe, and works with special notebook paper. A camera in the nib watches what’s been written and tracks the pen by looking at a background of dots arranged to provide location information about the pen on the page. The pen can also record what’s being said, and cleverly links the two data streams together: later you can tap a word and hear what was being said at the time. I’ve been using one for a fortnight. I used to keep written notebooks, but moved to taking notes purely on my netbook when I realised I was forgetting what I’d written where: a notebook is just a dead tree with dead information on it, and I’ve become used to everything being searchable. However, getting searchability meant converting my note-taking style to linear text rather than mindmaps or sketches, since that’s what the tools typically support. (There are mindmap tools, of course, but they’re completely separate from other note-taking tools and so get in the way somewhat.) There’s also a barrier to note-taking in having to get the netbook out, rather than just picking up a (special) pen and (special) paper. The resulting data is searchable, since the desktop tool does fairly decent handwriting recognition: I can “tag” pages in the margins, writing slightly more carefully than usual, and search for the tags even if full content searching is a bit aspirational. For what I do this is a lot, but not quite enough, as I spend a lot of time reading, looking up information and writing emails, papers and the like. A Kindle or other e-reader would be great for the reading, but not for the net access.
That’s where the iPad comes in: can it replace the need for a more traditional web-access and writing device? It’s certainly a lovely piece of kit, fast and stable, and allows easy browsing. The keyboard is pretty good for a “soft” device, and one could easily see writing email and editing small amounts of text using it. I can also see that it’d be an awesome platform for interactive content and mixed media books/films, assuming the editing tools are available elsewhere. Of course neither netbooks nor iPads are really optimised for the creation of content: they’re very much consumer devices intended for the consumption of content written on other, bigger machines. I don’t think that’s a criticism — no-one does smartphone software development on a smartphone, after all — but it does mean that neither is optimal as a device for someone who creates a lot. But the combination of a digitised paper notebook with an internet-access device is extremely attractive. Both devices are extremely portable and friendly, and link well to the larger “back office” machines I use for “serious” work. I have two worries, one about both devices and one about the iPad alone. The first worry is the almost completely closed nature of the software. The Pulse loads its recorded sound and images into its own desktop tool, which are then only available through that tool despite (I imagine) using standard data formats internally with some clever hyperlinking. The tool does provide important value-add, of course, specifically the links from written text to recorded audio. But that should be separate from the actual content, and it isn’t. One can “archive” a notebook, or turn individual pages into PDF, but not (as far as I can tell) get access to the content programmatically as it stands.
That’s simply obstructive on the part of Livescribe — and also a little shortsighted, I think, since their linking technology could clearly be applied to any print-linked-to-sound data if their tool were open and able to access arbitrary content. I think this is a great example of where openness is both friendly to the community and potentially a commercial virtue. (Oh, and the Livescribe desktop only works on Intel Macs: who exactly writes non-universal binaries these days? And why?) The iPad has a similar ecosystem, of course, which is “open” in the sense that anyone can write programs for it but “closed” in the sense that (firstly) access to the App Store is carefully constrained and (secondly) there are features of the platform and operating system that aren’t freely available to everyone. I can understand Apple’s contention that — for phones especially — it’s important to only download apps that won’t brick the device. This doesn’t of course imply that there should be a single gatekeeper as has happened with the App Store: one could provide a default store but allow external ones, as happens with Android Market. A single gatekeeper is basically just a way to extract rents from the software ecosystem. This can stifle innovation and inflate prices, to Apple’s advantage. What worries me more, though, is the extra, non-commercial dimension in terms of content control, which I think is more broadly damaging than just software. I was looking at an app for cocktail recipes (Linda’s a big fan). There are several available, of course, but all come with a rating of 17+ because they mention frequent drinking or drug use. There’s a suggestion of the illicit becoming the illegal there. It’s also well-known that Apple enforces a “no porn” rule on the App Store. Whatever one’s attitude to pornography, much of it isn’t illegal and it’s not clear that a software company should restrict the uses of a device above and beyond the law.
The whole experience reminds me very strongly of Disneyland: safe, beautiful, welcoming, friendly — and utterly fake, and utterly anodyne. One can choose not to go to Disneyland, of course — and certainly not to live there — but it’s another thing to hand control of access to information and information technology off to a commercial third party. Anything can be disallowed on a whim, or for the greater commercial good — and can of course be disallowed or edited retrospectively. Whether we like it or not, human culture includes material that many people find distasteful. That’s why we have critical faculties, and diversity, and laws on free speech. Commercial device providers and operators are not constrained by requirements to fairness in the way that newspapers and public broadcasters are, and could easily be persuaded to silence some forms of speech on the basis of commercial interest regardless of their wider legality. For the present I’ll be keeping the netbook for internet access, but using the SmartPen for note-taking, and thinking a bit more about a dedicated ebook reader. It’s a compromise between openness and convenience that I’m conscious of making, and not without some hesitation. Time will tell how the choices play out and evolve, and maybe I’ll buy an Android tablet when they mature a little :-)

Monads: a language designer’s perspective

Monads are one of the hottest topics in functional programming, and arguably simplify the construction of a whole class of systems. Which makes it surprising that they’re so opaque and hard to understand for people whose main experience is in imperative or object-oriented languages. There are a lot of explanations of, and tutorials on, monads, but most of them seem to take one of two perspectives: either start with a concrete example, usually in I/O handling, and work back, or start from the abstract mathematical formulation and work forwards. This sounds reasonable, but apparently neither works well in practice — at least, judging from the comments one receives from intelligent and able programmers who happen not to have an extensive functional programming or abstract mathematical background. Such a core concept shouldn’t be hard to explain, though, so I thought I’d try a different tack: monads from the perspective of language design.

In Pascal, C or Java, statements are separated (or terminated) by semicolons. This is usually regarded as a piece of syntax, but let’s look at it slightly differently. Think of the semicolon as an operator that takes two program fragments and combines them together to form a bigger fragment. For example:

    int x = 4;
    int y = x * 3;
    printf("%d", y);

We have three program fragments. The semicolon operator at the end of the first line takes the fragment on its left-hand side and combines it with the fragment on its right-hand side. Essentially the semicolon defines how the RHS is affected by the code on the LHS: in this case the RHS code is evaluated in an environment that includes a binding of variable x, effectively resolving what is otherwise a free variable. Similarly, the semicolon at the end of the second line causes the third line to be evaluated in an environment that includes y. The meaning of the semicolon is hard-wired into the language (C, in this case) and defines how code fragments are sequenced and their effects propagated.
Now from this perspective, a monad is a programmable semicolon. A monad allows the application programmer, rather than the language designer, to determine how a sequence of code is put together, and how one fragment affects those that come later. Let’s revert to Haskell. In a slightly simplified form, a monad is a type class with the following signature:

    class Monad m where
      return :: a -> m a
      (>>=) :: m a -> (a -> m b) -> m b

So a monad is a constructed type wrapping-up some underlying element type that defines two functions, return and (>>=). The first function injects an element of the element type into the monadic type. The second takes an element of the monadic type and a function that maps an element of that monad’s element type to some other monadic type, and returns an element of this second monadic type. The simplest example of a monad is Haskell’s Maybe type, which represents either a value of some underlying element type or the absence of a value:

    data Maybe a = Just a | Nothing

Maybe is an instance of Monad, simply by virtue of defining the two functions that the type class needs:

    instance Monad Maybe where
      return a = Just a
      Just a >>= f = f a
      Nothing >>= _ = Nothing

return injects an element of a into an element of Maybe a. (>>=) takes an element of Maybe a and a function from a to Maybe b. If the element of Maybe a it’s passed is of the form Just a, it applies the function to the element value a. If, however, the element is Nothing, it returns Nothing without evaluating the function. It’s hard to see what this type has to do with sequencing, but bear with me. Haskell provides a do construction which gives rise to code like the following:

    do v <- if b == 0 then Nothing else Just (a / b)
       return (26 / v)

Intuitively this looks like a sequence of code fragments, so we might infer that the conditional executes first and binds a value to v, and then the next line computes with that value — which is in fact what happens, but with a twist.
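A runnable variant of this fragment makes the behaviour easy to check for yourself; the helper names safeDiv and calc are mine, invented for this sketch:

```haskell
-- A division that fails with Nothing rather than raising an error
safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing
safeDiv a b = Just (a / b)

-- Two divisions in sequence: if either yields Nothing, the whole
-- do block yields Nothing, with no explicit checks in between
calc :: Double -> Double -> Maybe Double
calc a b = do
  v <- safeDiv a b   -- Nothing here aborts the rest of the block
  safeDiv 26 v

-- calc 4 2 == Just 13.0, while calc 4 0 == Nothing
```

Note that the second division can also fail (when v is zero), and the caller never has to distinguish which step failed: any failure propagates out as Nothing.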
The way in which the fragments relate is not pre-defined by Haskell. Instead, the relationship between the fragments is determined by which monad the fragments manipulate (usually expressed as which monad the code executes in). The do block is just syntactic sugar for a stylised use of the two monad functions. The example above expands to:

    (if b == 0 then Nothing else Just (a / b)) >>= (\v -> return (26 / v))

So the do block is syntax that expands into user-defined code, depending on the monad that the expressions within it use. In this case, we execute the first expression and then compose it with the function on the right-hand side of the (>>=) operator. The definition says that, if the left-hand side value is Just a, the result is that we call the RHS passing the element value; if the LHS is Nothing, we return Nothing immediately. The result is that, if any code fragment in the computation returns Nothing, then the entire computation returns Nothing, since all subsequent compositions will immediately short-circuit: the Maybe type acts like a simple exception that escapes from the computation the moment Nothing is encountered. So the monad structure introduces what’s normally regarded as a control construct, entirely within the language. It’s fairly easy to see that we could provide “real” exceptions by hanging an error code off the failure value. It’s also fairly easy to see that the monad sequences the code fragments and aborts when one of them “fails”. In C we can think of the same function being provided by the semicolon “operator”, but with the crucial difference that it is the language, and not the programmer, that decides what happens, once and for all. Monads reify the control of sequencing into the language. To see how this can be made more general, let’s think about another monad: the list type constructor. Again, to make lists into monads we need to define return and (>>=) with appropriate types.
The obvious injection is to turn a singleton into a list:

    instance Monad [] where
      return a = [a]

The definition of (>>=) is slightly more interesting: which function of type [a] -> (a -> [b]) -> [b] is appropriate? One could choose to select an element of the [a] list at random and apply the function to it, giving a list [b] — a sort of non-deterministic application of a function to a set of possible arguments. (Actually this might be interesting in the context of programming with uncertainty, but that’s another topic.) Another definition — and the one that Haskell actually chooses — is to apply the function to all the elements of [a], taking each a to a list [b], and then concatenating the resulting lists together to form one big list:

      l >>= f = concat (map f l)

What happens to the code fragments in a do block now? The monad threads them together using the two basic functions. So if we have code such as:

    do x <- [1..10]
       y <- [20..30]
       return (x, y)

What happens? The first and second fragments clearly define lists, but what about the third, which seems to define a pair? To see what happens, we need to consider all the fragments together. Remember, each fragment is combined with the next by applying concat (map f l). If we expand this out, we get:

    concat (map (\x -> concat (map (\y -> return (x, y)) [20..30])) [1..10])

So to summarise, Haskell provides a do block syntax that expands to a nested sequence of monadic function calls. The actual functions used depend on the monadic type in the do block, so the programmer can define how the code fragments relate to one another. Common monads include some simple types, but also I/O operations and state bindings, allowing Haskell to perform operations that are typically regarded as imperative without losing its laziness. The Haskell tutorial explains the I/O syntax. What can we say about monads from the perspective of language design? Essentially they reify sequencing, in a functional style.
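Incidentally, a do block over lists is exactly a list comprehension in disguise. This small example (mine, with shorter lists than above) shows both forms producing the same result:

```haskell
-- The pairing written with the list monad's do notation…
pairsDo :: [(Int, Int)]
pairsDo = do
  x <- [1, 2]
  y <- [10, 20]
  return (x, y)

-- …and as the equivalent list comprehension
pairsComp :: [(Int, Int)]
pairsComp = [ (x, y) | x <- [1, 2], y <- [10, 20] ]

-- Both evaluate to [(1,10),(1,20),(2,10),(2,20)]
```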
They only work as seamlessly as they do because of Haskell’s flexible type system (allowing the definition of new monads), and also because of the do syntax: without the syntactic sugar, most monadic code is incomprehensible. The real power is that they allow some very complex functionality to be abstracted into functions and re-used. Consider the Maybe code we used earlier: without the “escape” provided by the Maybe monad, we’d have to guard each statement with a conditional to make sure there wasn’t a Nothing returned at any point. This quickly gets tiresome and error-prone: the monad encapsulates and enforces the desired behaviour. When you realise that one can also compose monads using monad transformers, layering monadic behaviours on top of each other (albeit with some contortions to keep the type system happy), it becomes clear that this is a very powerful capability. I think one can also easily identify a few drawbacks, though. One that immediately springs to mind is that monads reify one construction, of the many that one might choose. A more general meta-language, like the use of meta-object protocols or aspects, or structured language and compiler extensions, would allow even more flexibility. A second — perhaps with wider impact — is that one has to be intimately familiar with the monad being used before one has the slightest idea what a piece of code will do. The list example above is not obviously a list comprehension, until one recognises the “idiom” of the list monad. Thirdly, the choice of monadic function definitions often isn’t canonical, so there can be a certain arbitrariness to the behaviour. It’d be interesting to consider generalisations of monads and language constructs to address these issues, but in the meantime one can use them to abstract a whole range of functionality in an interesting way. Good luck!
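As a postscript, the point about guarding each statement with a conditional can be seen side by side with the monadic version. This sketch is mine (step and its cut-off rule are invented purely to give something that can fail):

```haskell
-- A step that can fail: doubles its argument, but only below a limit
step :: Int -> Maybe Int
step x = if x < 100 then Just (x * 2) else Nothing

-- Without the monad: every intermediate result is checked by hand
threeStepsByHand :: Int -> Maybe Int
threeStepsByHand x =
  case step x of
    Nothing -> Nothing
    Just a  -> case step a of
      Nothing -> Nothing
      Just b  -> step b

-- With the monad: (>>=) does all the checking
threeSteps :: Int -> Maybe Int
threeSteps x = step x >>= step >>= step
```

Both functions agree everywhere, but only one of them scales gracefully as the number of steps grows.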

Contextual processes

Context-aware systems are intended to follow and augment user-led, real-world processes. These differ somewhat from traditional workflow processes, but share some characteristics. Might the techniques used to implement business processes via web service orchestration fit into the context-aware landscape too? These ideas arose as a result of discussions at the PMMPS workshop at PERVASIVE 2010 in Helsinki. In particular, I was thinking about comments Aaron Quigley made in his keynote about the need to build platforms and development environments if we’re to avoid forever building just demos. The separation of function from process seems to be working in the web world: might it work in the pervasive world too? In building a pervasive system we need to integrate several components:

  1. A collection of sensors that allow us to observe users in the real world
  2. A user or situation model describing what the users are supposed to be doing, in terms of the possible observations we might make and inferences we might draw
  3. A context model that brings together sensor observations and situations, allowing us to infer the latter from a sequence of the former
  4. Some behaviour that’s triggered depending on the situation we believe we’re observing
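As a toy sketch of how these four components might fit together (every type, sensor name and inference rule here is invented for illustration, not taken from any real system):

```haskell
-- 1. Sensor observations: a named sensor and its reading
type SensorReading = (String, Double)

-- 2. A (very small) situation model
data Activity = AtHome | Walking | AtOffice
  deriving (Eq, Show)

-- 3. A crude context model: infer an activity from recent observations
inferActivity :: [SensorReading] -> Activity
inferActivity rs
  | any (\(n, v) -> n == "gps_speed" && v > 0.5) rs = Walking
  | any (\(n, _) -> n == "office_door") rs          = AtOffice
  | otherwise                                       = AtHome

-- 4. Behaviour triggered by the inferred situation
behaviour :: Activity -> String
behaviour Walking  = "silence notifications"
behaviour AtOffice = "open the calendar"
behaviour AtHome   = "do nothing"
```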
Most current pervasive systems have quite simple versions of all these components. The number of sensors is often small — sometimes only one or two, observing one user. The situation model is more properly an activity model in that it classifies a user’s immediate current activity, independently of any other activity at another time. The context model encapsulates a mapping from sensors to activities, which then manifest themselves in activating or deactivating a single behaviour. Despite their simplicity, such systems can perform a lot of useful tasks. However, pervasive activities clearly live on a timeline: you leave home and then walk to work and then enter your office and then check your email, and so forth. One can treat these activities as independent, but that might lose continuity of behaviour, when what you want to do depends on the route by which you got to a particular situation. Alternatively we could treat the timeline as a process, and track the user’s progress along it, in the manner of an office workflow. Of course the problem is that users don’t actually follow workflows like this — or, rather, they tend to interleave actions, perform them in odd orders, leave bits out, drop one process and do another before picking-up the first (or not), and so on. So pervasive workflows aren’t at all like “standard” office processes. They aren’t discriminated from other workflows (and non-workflow activities) happening simultaneously in the same space, with the same people and resources involved. In some simple systems the workflow actually is “closed”, for example computer theatre (Pinhanez, Mase and Bobick. Interval scripts: a design paradigm for story-based interactive systems. Proceedings of CHI’97. 1997.) — but in most cases it’s “open”.
So the question becomes, how do we describe “loose” workflows in which there is a sequence of activities, each one of which reinforces our confidence in later ones, but which contain noise and extraneous activities that interfere with the inferencing? There are several formalisms for describing sequences of activities. The one that underlies Pinhanez’ work mentioned above is Allen algebra (Allen and Ferguson. Actions and events in interval temporal logic. Journal of Logic and Computation 4(5), pp. 531–579. 1994.) which provides a notation for specifying how intervals of time relate: an interval a occurs strictly before another b, for example, which in turn contains wholly within it another interval c. It’s easy to see how such a process provides a model for how events from the world should be observed: if we see that b has ended, we can infer that c has ended also because we know that c is contained within b, and so forth. We can do this if we don’t — or can’t — directly observe the end of c. However, this implies that we can specify the relationships between intervals precisely. If we have multiple possible relationships the inferencing power degrades rapidly. Another way to look at things is to consider what “noise” means. In terms of the components we set out earlier, noise is the observation of events that don’t relate to the process we’re trying to observe. Suppose I’m trying to support a “going to work” process. If I’m walking to work and stop at a shop, for example, this doesn’t interfere with my going to work — it’s “noise” in the sense of “something that happened that’s non-contradictory of what we expected to see”. On the other hand if, after leaving the shop, I go home again, that might be considered as “not noise”, in the sense of “something that happened that contradicts the model we have of the process”. As well as events that support a process, we also have events that contradict it, and events that provide no information.
Human-centred processes are therefore stochastic, and we need a stochastic process formalism. I’m not aware of any that really fit the bill: process algebras seem too rigid. Markov processes are probably the closest, but they’re really designed to capture frequencies with which paths are taken rather than detours and the like. Moreover we need to enrich the event space so that observations support or refute hypotheses as to which process is being followed and where we are in it. This is rather richer than is normal, where events are purely confirmatory. In essence what we have is process as hypothesis, in which we try to confirm that this process is indeed happening, and where we are in it, using the event stream. It’s notable that we can describe a process separately from the probabilities that constrain how it’s likely to evolve, though. That suggests to me that we might need an approach like BPEL, where we separate the description of the process from the actions we take as a result, and also from the ways in which we move the process forward. In other words, we have a description of what it means to go to work, expressed separately from how we confirm that this is what’s being observed in terms of sensors and events, and separated further from what happens as a result of this process being followed. That sounds a lot easier than it is, because some events are confirmatory and some aren’t. Furthermore we may have several processes that can be supported by observations up to a point and then diverge: going to work and going shopping are pretty similar until I go into a shop, and/or until I leave the shop and don’t go to work. How do we handle this? We could enforce common-prefix behaviour, but that breaks the separation between process and action. We could insist on “undo” actions for “failed”, no-longer-supported-by-the-observations processes, which severely complicates programming and might lead to interactions between different failed processes.
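As a toy rendering of “process as hypothesis” (my own sketch, not a formalism from the literature): classify each observed event as supporting, contradicting, or neutral with respect to a candidate process, and accumulate a confidence score over the event stream. All the event names and rules below are invented:

```haskell
data Evidence = Supports | Contradicts | Neutral
  deriving (Eq, Show)

-- A candidate process classifies each observed event (here just strings)
type Process = String -> Evidence

-- The "going to work" hypothesis, with invented classification rules
goingToWork :: Process
goingToWork e
  | e `elem` ["leaveHome", "walk", "enterOffice"] = Supports
  | e == "returnHome"                             = Contradicts
  | otherwise                                     = Neutral   -- e.g. stopping at a shop

-- Confidence in the hypothesis given an event stream: supporting
-- events count for it, contradicting events against it, noise is ignored
confidence :: Process -> [String] -> Int
confidence p = sum . map score
  where
    score e = case p e of
      Supports    -> 1
      Contradicts -> -1
      Neutral     -> 0
```

Stopping at a shop leaves the score unchanged, while going home again pulls it back down, matching the support/contradict/noise distinction above; running several Process values over the same stream gives a crude way of comparing diverging hypotheses.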
Clearly there’s something missing from our understanding of how to structure more complex, temporally elongated behaviours that’ll need significant work to get right.