
Posts about book (old posts, page 1)

Pervasive healthcare

This week's piece of shameless self-promotion: a book chapter on how pervasive computing and social media can change long-term healthcare.

My colleague Aaron Quigley and I were asked to contribute a chapter to a book put together by Jeremy Pitt as part of PerAda, the EU network of excellence in pervasive systems. We were asked to think about how pervasive computing and social media could change healthcare. This is something quite close to both our hearts -- Aaron perhaps more so than me -- as it's one of the most dramatic examples of how pervasive computing can really make an impact on society.

There are plenty of examples of projects that attempt to provide high-tech solutions to the issues of independent living -- some of which we've been closely involved with. For this work, though, we suggest that one of the most cost-effective contributions that technology can make might actually be centred around social media. Isolation really is a killer, in a literal sense. A lot of research has indicated that social isolation is a massive risk factor for both physiological and psychological illness, and this is something that's likely to get worse as populations age.

Social media can help address this, especially in an age when older people have circles of older friends, and when these friends and family can be far more geographically dispersed than in former times. This isn't to suggest that Twitter and Facebook are cures for social ills, but rather that the services they might evolve into could be of enormous utility for older people. Not only do they provide traffic between people, they can also be mined to determine whether users' activities are changing over time, identify situations that can be supported, and so provide unintrusive medical feedback -- as well as opening-up massive issues of privacy and surveillance. While today's older generation are perhaps not fully engaged with social media, future generations undoubtedly will be, and it's something to be encouraged.

Other authors -- some of them leading names in the various aspects of pervasive systems -- have contributed chapters about implicit interaction, privacy, trust, brain interfaces, power management, sustainability, and a range of other topics in accessible form.

The book has a web site (of course), and is available for pre-order on Amazon. Thanks to Jeremy for putting this together: it's been a great opportunity to think more broadly than we often get to do as research scientists, and see how our work might help make the world more liveable-in.

It's hard being a man in Middle Earth

The Lord of the Rings is about men, their deeds and courage in the face of seemingly overwhelming odds. Which makes it strange that they in the main get an incredibly raw deal.

I have to say I absolutely love the books: The Hobbit, The Lord of the Rings (LOTR), The Silmarillion, The Unfinished Tales, and the other works in the same area. The films were cinematographic masterpieces, although not sufficiently true to the original for my tastes. The problem I have is one of motivation for the men involved.

If you're a man in Middle Earth, you're pretty much guaranteed to be inferior in some significant way to any other intelligent creature you come across. You can't be wise, in any absolute sense: the elves and the wizards live essentially forever, and they have wisdom sewn-up. You can't be strong, because dwarves and elves (again) are stronger and have more stamina. You can't be artistic, because elves (yet again) are the undisputed masters of all the high arts, and will condescend to teach you what crumbs of their learning you can grasp during your incredibly short lifespan. You can't even be relaxed and laid-back, because hobbits will always make you look stressed-out. Evil's not an option, as Sauron is the ultimate baddie and has legions of orcs (who are corrupted elves in the books, and so have many of the advantages of elves without the goodness).

If you're a man, all you can do is die gloriously.

LOTR is essentially an epic poem in the mould of the Iliad. At the risk of gross over-simplification, epic poems only have three sorts of characters:

  • Heroes, who do all the things worth reporting
  • Love interest, who are pursued (and generally won) by heroes
  • Arrow-fodder, who are killed by heroes
In the Iliad, Achilles, Agamemnon, Paris, Hector and a few others fall into the first category; Helen into the second; and everyone else into the third. (Menelaus is a kind of not-quite-major character who's not completely insignificant, but he's there to provide an excuse for the war in the first place.) Arrow-fodder comprise all the minor characters: they can be brave or dastardly, but typically only occupy the very edges of the story and won't have their characters developed any more than is necessary to set off the heroes and make it clear why they have to be killed in their particular way.

Compare this with LOTR. The major characters are obviously Frodo, Gandalf, Aragorn and the rest of the fellowship, plus Saruman. There's Arwen, who (in the book at least, although not the films) is just love interest, and Eowyn. And then there are the minor characters: hordes of orcs to slaughter, Men of Rohan and Gondor — and that's about it.

There are a small number of less-than-major characters: Denethor, Faramir (who's essentially just another love interest to get Aragorn off the hook with Eowyn), Treebeard. And there's another population of less-than-major characters who are major elsewhere: Bilbo, Tom Bombadil, Galadriel, Elrond. All these are bit-players in LOTR but have a major part in either The Silmarillion or one of the other tales. (Elrond is something of an exception, in that his main claim to fame is having been around at lots of significant events. He just never gets a chance to be a protagonist.)

But the thing with being a man in Middle Earth is the way in which your actions are so circumscribed. No man in LOTR actually dies anything other than gloriously. Even Boromir is redeemed through his death. Even the evil men are happy to go down fighting. It's as though men's sole virtue is to have a bad time and then die.

Aragorn escapes this fate, and is the only heroic (in the epic sense) man in the story, but even he is doomed to failure by his mortality, and you can't escape the suspicion that Gondor will collapse as soon as he's dead. From the elves' perspective, destroying Sauron is an absolute good, and they can then all leave for the lands over the Sea which are a remnant of earlier, better times; from men's perspective, it'll probably be a Pyrrhic victory at best.

In many ways LOTR is a story about passing. The elves' time in Middle Earth is past — although I think the films grotesquely overstate their predicament compared to the book — and men can't ever build anything permanent because they just don't have the wisdom/lifespan/art/strength/goodness to cut it. That of course is what sets LOTR apart from "happily ever after" fantasy (like David Eddings' works). By the end of the book, though, everything looks drab, and without Sauron there won't even be any worthwhile evil left. You have to wonder what men will find to motivate themselves, when the best is all in the past and even glorious and worthy death is denied them. One can imagine a lot of drunken fireside reminiscence going on.

If that weren't bad enough, anyone who's read The Silmarillion knows that Middle Earth has been passing for a very long time. Even though in LOTR Sauron is the ultimate in badness, in the greater scheme of things he was only a servant of Morgoth, the really, really ultimate baddie. Even an evil that threatens to engulf the entire world in the Third Age is only a shadow of what evil used to be like in the First Age — modern evil just can't cut it.

It seems a pretty depressing view of history and heroism, but maybe that's the point: that the Fourth Age will be a post-heroic let-down with everyone left dissatisfied and wishing for the days when there were orcs and dragons and Black Riders and magic.

The only computer science book worth reading twice?

I was talking to one of my students earlier, and lent him a book to read over summer. It was only after he'd left that I realised that -- for me at any rate -- the book I'd given him is probably the most seminal work in the whole of computer science, and certainly the book that's most influenced my career and research interests.

So what's the book? Structure and Interpretation of Computer Programs by Hal Abelson and Jerry Sussman (MIT Press, 1984, ISBN 0-262-01077-1), also known as SICP. The book's still in print, but -- even better -- is available online in its entirety.

OK, everyone has their favourite book: why's this one so special to me? The first reason is the time I first encountered it: in Newcastle upon Tyne in the second year of my first degree. I was still finding my way in computer science, and this book was a recommended text after you'd finished the first programming courses. It's the book that introduced me to programming as it could be (rather than programming as it was, in Pascal at the time). What I mean by that is that SICP starts out by introducing the elements of programming -- values, names, binding, control and so on -- and then runs with them to explore a quite dazzling breadth of issues including:

  • lambda-abstraction and higher-order computation
  • complex data structures, including structures with embedded computational content
  • modularity and mutability
  • streams
  • lazy evaluation
  • interpreter and compiler construction
  • storage management, garbage collection and virtual memory
  • machine code
  • domain-specific languages
...and so forth. The list of concepts is bewildering, and only stays coherent because the authors are skilled writers devoted to their craft. But it's also a remarkable achievement to handle all these concepts within a single language framework -- Scheme -- in such a way that each builds on what's gone before.
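
To give a flavour of the style, here's a sketch of my own (in the spirit of the book, not lifted from it) of the stream idea in Scheme: an ordinary pair whose tail is wrapped in a promise, so that an "infinite" structure is only ever computed as far as someone actually walks down it.

    ;; A lazy cons: the tail is delayed, and only forced on demand.
    (define-syntax cons-stream
      (syntax-rules ()
        ((_ head tail) (cons head (delay tail)))))

    (define (stream-car s) (car s))
    (define (stream-cdr s) (force (cdr s)))

    ;; An "infinite" stream of the integers from n upwards...
    (define (integers-from n)
      (cons-stream n (integers-from (+ n 1))))

    ;; ...of which only as much is ever built as we ask for.
    (define (stream-take s k)
      (if (= k 0)
          '()
          (cons (stream-car s) (stream-take (stream-cdr s) (- k 1)))))

    (stream-take (integers-from 1) 5)   ; => (1 2 3 4 5)

A handful of primitives -- pairs, procedures and delayed evaluation -- is all it takes, and the later chapters build interpreters and compilers out of similarly modest ingredients.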

The second reason is the way in which Hal and Jerry view everything as an exercise in language design:

We have also obtained a glimpse of another crucial idea about languages and program design. This is the approach of stratified design, the notion that a complex system should be structured as a sequence of levels that are described using a sequence of languages. Each level is constructed by combining parts that are regarded as primitive at that level, and the parts constructed at each level are used as primitives at the next level. The language used at each level of a stratified design has primitives, means of combination, and means of abstraction appropriate to that level of detail.

Layered abstraction of course is second nature to all computer scientists. What's novel in this view is that each level should be programmable: that the layers are all about computation and transformation, and not simply about hiding information. We don't see that in the mainstream of programming languages, because layering doesn't extend the language at all: Java is Java from top to bottom, with classes and libraries but no new control structures. If a particular domain has concepts that would benefit from dedicated language constructs, that's just tough. Conversely (and this is something that very much interests me) if there are constructs it'd be desirable not to have in some domain, they can't be removed. (Within the language, anyway: Java-ME dumps some capabilities in the interests of running on small devices, but that's not something you can do without re-writing the compiler.)
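
To make the contrast concrete, here's a small example of my own (assuming any Scheme with syntax-rules macros; it isn't taken from the book): a new control structure, while, added as an ordinary user-level definition rather than by changing the compiler.

    ;; A new control construct defined entirely in user code.
    (define-syntax while
      (syntax-rules ()
        ((_ test body ...)
         (let loop ()
           (if test
               (begin body ... (loop)))))))

    ;; Used exactly as though it were built in.
    (define (countdown n)
      (while (> n 0)
        (display n)
        (newline)
        (set! n (- n 1))))

That's what it means for a layer to be programmable: a level can acquire not just new procedures but new syntax and new means of combination, which is exactly what stratified design asks for.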

The third influential feature is the clear-sighted view of what computer science is actually about:

The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology -- the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects. Mathematics provides a framework for dealing precisely with notions of "what is." Computation provides a framework for dealing precisely with notions of "how to."

I've taken a view before about computers being the new microscopes, opening-up new science on their own as well as facilitating existing approaches. The "how to" aspect of computer science re-appears everywhere in this: in describing the behaviours of sensor networks that can adapt while continuing to reflect the phenomena they've been deployed to sense; in the interpretation of large-scale data mined and mashed-up across the web; in capturing scientific methods and processes for automation; and so forth. The richness of these domains militates against packaged software and encourages integration through programming languages like R, so that the interfaces and structures remain "soft" and open to experimentation.

When I looked at my copy, the date I'd written on the inside was September 1988. So a book I bought nearly 22 years ago is still relevant. In fact, I'd go further and say that it's the only computer science book of that age that I'd happily and usefully read again without it being just for historical interest: the content has barely aged at all. That's not all that unusual for mathematics books, but it's almost unheard of in computer science, where the ideas move so quickly and where much of what's written about is ephemeral rather than foundational. It goes to show how well SICP nailed the core concepts. In this sense, it's certainly one of the very few books on computer science that it's worth reading twice (or more). SICP is to computer science what Feynman's Lectures on Physics are to physics: an accessible distillation of the essence of the subject that's stood the test of time.