Where do all the comments go?

I don't get many comments on this blog: fortunately I do it for the opportunity to write amid the torrent of administrivia rather than for the affirmation. But I get more than is readily apparent, which tells us something about social media and opens up an opportunity.

Blogging is up there with Twitter and Facebook as the epitome of social media: a way to share parts of your life and mind digitally with more-or-less unselected parts of the world. You could argue that it's also the epitome of egotism to think that anyone else might want to share those parts of your life and mind that you foist upon them, but that's another story.

The social media space has consolidated somewhat to yield several large services with a surrounding cloud of also-rans, which is pretty much what you expect in a market that places a premium on networking and connectedness: by definition one large network is more valuable (in many senses) than several smaller ones. But it shows no signs of consolidating into a single hegemony that captures all the aspects of social media we've come to expect: Twitter and Facebook are distinct but linked, and likely to remain that way. The same is true for blogging platforms. The reasons for this are complicated but relate to the ability of one company to climb several steep learning curves simultaneously. It's more efficient to focus on innovating in one area -- climbing the single curve quickly -- rather than trying to provide a unified experience that involves multiple areas of innovation. This is Adam Smith's pin factory for the knowledge economy.

This economically attractive choice has consequences, though. Since there isn't a single network, one needs to access multiple networks to reach the widest community. Many people -- including me -- do this by tying their services together. My Facebook status is updated from my Twitter feed, for example: I almost never update it directly. The same is true for my LinkedIn status. My web site has the Twitter stream and the blog side by side, and if I post to the blog a notification goes out on Twitter. Actually, if you're reading this at all, you probably know at least some of this by virtue of how you got here :-).

This is the socialisation of social media, in effect, but it means that what one thinks of as a single conversation is actually going on in several places simultaneously, and that's where we need to ask where all the comments go.

I get very few direct comments on the blog, despite some posts getting quite a lot of readers: the conversion rate of readers to commenters is significantly less than 1 comment per 500 distinct page views. However, that's not the whole story, because I also get comments through the other social media channels: as comments on my Facebook page, as replies and retweets of the announcements, as email from people who see these, and so on. That makes it hard to gauge the impact of a post from the comments it generates. The popularity of a post isn't what's important to me (although comments are always nice to get, of course), but it does mean that a significant benefit of social media -- the ability to engage in a conversation over people's opinions -- is impeded by the fractured nature of the comment stream, even as it's aided by wider distribution.

So I think there's an opportunity here for someone: how can we aggregate comments? We aggregate blogs themselves using RSS feeds: can we do the same for the comments, and feed them back to a single point (the blog itself, perhaps) so that they're available for further comment regardless of the channel on which they were issued? It's not a trivial task, because you need to identify which comments relate to which blog posting, but it's certainly not insurmountable. Might make a good final-year undergraduate computer science project.
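The hard part such an aggregator faces is exactly the matching step mentioned above: deciding which comment belongs to which post. As a minimal illustration (all names and URLs here are hypothetical, and a real system would need fuzzier matching than this), here is a sketch that groups comments from different channels against posts by looking for the post's URL in the comment text:

```python
def aggregate_comments(posts, comments):
    """Group comments by the blog post they reference.

    posts    -- mapping of post URL -> post title
    comments -- list of (channel, text) pairs; a comment is matched to
                a post if the post's URL appears anywhere in its text
    Returns (matched, unmatched): matched maps each post URL to its
    comments; unmatched collects comments naming no known post.
    """
    matched = {url: [] for url in posts}
    unmatched = []
    for channel, text in comments:
        for url in posts:
            if url in text:
                matched[url].append((channel, text))
                break
        else:
            unmatched.append((channel, text))
    return matched, unmatched

# Hypothetical example data: one post, comments from three channels.
posts = {"http://example.org/blog/comments": "Where do all the comments go?"}
comments = [
    ("twitter", "Nice post! http://example.org/blog/comments"),
    ("facebook", "Saw this via your feed: http://example.org/blog/comments"),
    ("email", "Unrelated question about your course notes"),
]
matched, unmatched = aggregate_comments(posts, comments)
print(len(matched["http://example.org/blog/comments"]))  # 2
print(len(unmatched))  # 1
```

In practice replies and retweets often carry a shortened URL, or no link at all, so the interesting research (or student-project) content lies in resolving redirects and matching on titles and timing rather than on exact URLs.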

Actually this sort of technology is useful more broadly. Any web site or book that attracts comment is likely to attract them in different fora: how do you find them? How do you bring them to one place? This has both intellectual and commercial implications: intellectually it increases the network of the social media comment space and so should make it more efficient; commercially it improves the means for assessing the popularity of a web site which is often what drives advertising revenues. Overall it would let the increasing specialisation of social media continue without this irretrievably fragmenting the very things that make it most valuable: the two-way flow of ideas.

Who invented meringue?

What the invention of complicated foods tells us about discovery, innovation, and university research funding.

Over lunch earlier this week we got talking about how different foods get discovered -- or invented, whichever's the most appropriate model. The point of the discussion was how unlikely a lot of foods are to have actually been created in the first place.

The lineage of some quite complicated foods is fairly easy to discern, of course. Bread: leave out some wet flour overnight and watch it rise to form sourdough. Do the same for malt and you get beer (actually the kind of beer that in Flanders is called lambic). Put milk into a barrel, load it onto the back of a donkey and transport it to the next town, and you'll have naturally-churned butter. It's fairly easy to see how someone with an interest in food would refine the technique and diversify it, once they knew that the basic operation worked in some way and to some degree.

But for other foods, it's exactly this initial step that's so problematic.

I think the best example is meringue. Consider the steps you need to go through to discover that meringues exist. First, you have to separate an egg -- which is obvious now, but not so obvious if you don't know that there's a point to it. Then you need to beat the white for a long time, in just the right way to introduce air into it. If you get this wrong, or don't do it for long enough, or do it too enthusiastically (or not enthusiastically enough) you just get slightly whiter egg white: it's only if you do it properly that you get the phase change you need. Of course you're probably doing this with a wholly inappropriate instrument -- like a spoon -- rather than a fork or a balloon whisk (which you don't have, because nobody knows there are things that need air beating into them yet). Then you need to determine, counter-intuitively, that making the egg white heavier (with sugar) will improve the final result when cooked. Then you have to work out that cooking this liquid -- which has actually to be a process of drying, not cooking -- is actually quite a good idea despite appearances.

It's hard enough to make a decent meringue now we know they exist: I find it hard to imagine how one would do it if one didn't even know they existed, and furthermore didn't know that beating egg whites in a particular way will generate the phase change from liquid to foam. (Or even know that there are things called "phase changes" at all for that matter.)

Thinking a little harder, I actually can imagine how meringues got invented. In the Middle Ages a lot of very rich aristocrats competed with their peers either by knocking each other off horses at a joust or by exhibiting ever-more-complex dishes at feasts. These dishes -- called subtleties -- were intended to demonstrate the artistry of the chef and hence the wealth and taste of his patron, the aristocrat. Pies filled with birds, exact scale models of castles, working water-wheels made out of pastry, that kind of thing. In order to do this sort of thing you need both a high degree of cooking skill and a lot of unusual food-based materials to work in. You can find these as part of your normal cooking, but it's probably also worth some experimentation to find new and unusual effects that will advance this calorific arms race a little in your favour.

So maybe meringue was invented by some medieval cook just doing random things with foodstuffs to see what happens. The time spent on things that don't work -- leaving pork fat outside to see if it ferments into vodka, perhaps? -- will be amortised by the discovery of something that's genuinely useful in making state-of-the-art food. Contrary to popular belief the Middle Ages were a time of enormous technological advance, and it's easy to imagine this happening in food too.

So food evolves under the combined effects of random chance operations shaped by survival pressures. Which is exactly what happens in biology. A new combination gets tried by chance, without any anticipation of any particular result, and the combinations that happen to lead to decent outcomes get maintained. At that point the biological analogy breaks down somewhat, because the decent outcomes are then subjected to teleological refinement by intelligent beings -- cooks -- with a goal in mind. It's no longer random. But the initial undirected exploration is absolutely essential to the process of discovery.

Bizarrely enough, this tells us something more general about the processes of discovery and innovation. They can't be goal-directed: or, more precisely, they can't be goal-directed until we've established that there's a nugget of promise in a particular technique, and that initial discovery will only be performed because of someone's curiosity and desire to solve a larger problem. "Blue-skies" research is the starting point, and you by definition can't know -- or ever expect to know -- what benefits it might confer. You have to kiss an awful lot of frogs to have a reasonable expectation of finding a prince, and blue-skies, curiosity-driven research is the process of identifying these proto-princes amongst the horde of equally unattractive alternatives. But someone's got to do it.

Sensor and sense-ability

I'm talking at the BCS Edinburgh branch tonight about sensing and sensor-driven systems.

It has been an old maxim in computing that incorrect inputs can acceptably give rise to unacceptable outputs: "garbage in, garbage out".

This is ceasing to be true, and many classes of systems must behave predictably even in the face of inputs containing substantial garbage -- although researchers delicately use terms like "imprecise" or "unstructured" instead of "garbage".

In this talk we discuss some approaches to managing the problem of imprecise, inaccurate, untimely and partial inputs in the context of pervasive and sensor-driven systems, and suggest that we need to re-think radically the way we build software and represent decision-making in these environments.

Details of the venue are available here.

Book: Tumbledown -- the old houses of Co Sligo

The west of Ireland is full of old, decrepit buildings that seem to have been left behind by the Celtic Tiger years -- and just crying out to be photographed.

Over summer 2010 I spent a happy few days touring around Sligo and nearby counties taking shots of the half-hidden, half-derelict houses lurking in the undergrowth. Then I discovered on-demand book publishing, so now the collection is a book.

The publishing service is called Blurb, and lets you download a desktop-publishing program that can be used to generate books from a range of different templates, including some that are designed for art photography. The quality of the printing is excellent -- far better than results I've had from other similar services. What's even better is that Blurb hosts a bookshop from which anyone else can buy a copy of your book, so if you'd like a copy just follow the link above.

I have a feeling this sort of vanity publishing could become quite addictive. I've thought of half a dozen other photo projects that'd make decent books -- undoubtedly more popular than any of my technical writing, anyway. For art books it's clearly much more attractive than an ebook; and while it's more expensive per book than traditional publishing, the huge advantage is that you don't have to pay for, stack up and distribute inventory beforehand.  It certainly reduces the barriers to entry for stroking your own ego, anyway :-)

Call for papers: Formal Methods for Pervasive Systems

We are looking for papers on the application of formal methods to pervasive computing, for a workshop at FM'11. (Full disclosure: I'm the keynote speaker.)

Formal Methods for Pervasive Systems [Pervasive@FM2011]

A workshop held as part of FORMAL METHODS 2011

DEADLINE: 20th March 2011


Techniques: logics, process calculi, automata, specification languages, probabilistic analysis, model checking, theorem-proving, tools, automated deduction

Properties: privacy, behaviour, security, reliability, interoperability, context-awareness, mobility, resource requirements, temporal properties

Application domains: pervasive healthcare systems, sensor networks, e-commerce, cloud computing, MANETs/VANETs, telephony, device swarms, electronic tags, human-device interaction, etc.


Our aim is to have productive discussions and a true workshop "feel". Thus, we invite two kinds of submission:

  1. Original research papers concerning any of the above topics; or
  2. Survey papers providing an overview of some of the above topics.

Submissions should be written in English, formatted according to the Springer LNCS style, and not exceed 20 pages in length. Submissions must be made via EasyChair.

Our aim is for an informal proceedings based on these submissions to be available during the workshop. Depending upon the success of the workshop, we intend to produce an edited book based (at least in part) upon the contributions or develop a special issue of a journal.


Important dates:

Submission deadline:          20th March 2011
Notification of acceptance:    1st May 2011
Pre-proceedings version due:  20th May 2011
Workshop:                     20th or 21st June 2011


Keynote speaker: Simon Dobson (School of Computer Science, University of St Andrews)


Michael Fisher    (University of Liverpool, UK)
Brian Logan       (University of Nottingham, UK)


Programme committee:

Natasha Alechina    (Nottingham, UK)
Myrto Arapinis      (Birmingham, UK)
Mohamed Bakhouya    (Belfort, FR)
Doina Bucur         (INCAS3, NL)
Michael Butler      (Southampton, UK)
Muffy Calder        (Glasgow, UK)
Antonio Coronato    (CNR, IT)
Soren Debois        (Copenhagen, DK)
Giuseppe De Pietro  (CNR, IT)
Marina De Vos       (Bath, UK)
Simon Dobson        (St Andrews, UK)
Michael Fisher      (Liverpool, UK)
Michael Harrison    (Newcastle, UK)
Savas Konur         (Liverpool, UK)
Brian Logan         (Nottingham, UK)
Alessio Lomuscio    (Imperial, UK)
Ka Lok Man          (XJTLU, CN)
Julian Padget       (Bath, UK)
Anand Ranganathan   (IBM, USA)
Alessandro Russo    (Imperial, UK)
Mark Ryan           (Birmingham, UK)
Chris Unsworth      (Glasgow, UK)
Kaiyu Wan           (XJTLU, CN)


Natasha Alechina    (University of Nottingham, UK)
Muffy Calder        (University of Glasgow, UK)
Michael Fisher      (University of Liverpool, UK)
Brian Logan         (University of Nottingham, UK)
Mark Ryan           (University of Birmingham, UK)

The (new) idea of a (21st century) university

What should the university of the 21st century look like? What are we preparing our students for, and how? And how should we decide what is the appropriate vision for modern universities?

There's a tendency to think of universities as static organisations whose actions and traditions remain fixed -- and looking at some ceremonial events it's easy to see where that idea might come from. Looking in from the outside one might imagine that some of the teaching of "old material" (such as my teaching various kinds of sorting algorithms) is just a lack of agility in responding to the real world: who needs to know? Why not just focus on the modern stuff?

This view is largely mistaken. The point of teaching a core of material is to show how subjects evolve, and to get students used to thinking in terms of the core concepts rather than in terms of ephemera that'll soon pass on. Modern stuff doesn't stay modern, and that is a feature of the study of history or geography as much as of computer science or physics. Universities by their nature have a ringside view of the changes that will affect the world in the future, and also contain more than their fair share of maverick "young guns" who want to mix things up. It's natural for academics to be thinking about what future work and social spaces will be like, and to reflect on how best to tweak the student experience with these changes in mind.

What brought this to my mind is Prof Colm Kenny's analysis piece in the Irish Independent this weekend, a response to the recently-published Hunt report ("National strategy for higher education: draft report of the strategy group." 9 August 2010) that tries to set out a vision for Irish 3rd-level (undergraduate degree) education. Although specific to Ireland, the report raises questions for other countries' relationships with their universities too, and so is worth considering broadly.

A casual read of even the executive summary reveals a managerial tone. There's a lot of talk of productivity, broadening access, and governance that ensures that institutions meet performance targets aligned with national priorities. There's very little on encouraging free inquiry, fostering creativity, or equipping students for the 21st century. The report -- and similar noises that have emerged from other quarters, in Ireland and the UK -- feel very ... well ... 20th century.

Life and higher education used to be very easy: you learned your trade, either as an apprentice or at university; you spent forty years practising it, using essentially the techniques you'd been taught plus minor modifications; you retired, had a few years off, and then died. But that's past life: future, and indeed current, life aren't going to be like that. For a start, it's not clear when, if ever, we'll actually get to retire. Most people won't stay in the same job for their entire careers: indeed, a large percentage of jobs that one could do at the start of a career won't even exist forty years later, just as many of the jobs there will be then haven't been thought of yet. When I did my PhD 20 years ago there was no such thing as a web designer, and music videos were huge projects that no-one without access to a fully-equipped multi-million-pound studio could take on. Many people change career because they want to, rather than through the influence of outside forces, such as leaving healthcare to take up professional photography.

What implications does this have for higher education? Kenny rightly points out that, while distance education and on-line courses are important, they're examples of mechanism, not of vision. What they have in common, and what drives their attractiveness, is that they lower the barriers to participation in learning. They actually do this in several ways. They allow people to take programmes without re-locating and potentially concurrently with their existing lives and jobs. They also potentially allow people to "dip-in" to programmes rather than take them to their full extent, to mash-up elements from different areas, institutions and providers, and to democratise the generation and consumption of learning materials.

Some students, on first coming to university, are culture-shocked by the sudden freedom they encounter. It can take time to work out that universities aren't schools, and academics aren't teachers. In fact they're dual concepts: a school is an institution of teaching, where knowledge is pushed at students in a structured manner; a university is an institution of learning, which exists to help students to find and interact with knowledge. The latter requires one to learn skills that aren't all that important in the former.

The new world of education will require a further set of skills. Lifelong learning is now a reality as people re-train as a matter of course. Even if they stay in the same career, the elements, techniques and technologies applied will change constantly. It's this fact of constant change and constant learning that's core to the skills people will need in the future.

(Ten years or so ago, an eminent though still only middle-aged academic came up to me in the senior common room of the university I taught in at the time and asked me when this "internet thing" was going to finish, so that we could start to understand what it had meant. I tried to explain that the past ten years were only the start of the prologue to what the internet would do to the world, but I remember his acute discomfort at the idea that things would never settle down.)

How does one prepare someone for lifelong learning? Actually many of the skills needed are already being acquired by people who engage with the web intensively. Anyone who reads a wide variety of material needs to be able to sift the wheat from the chaff, to recognise hidden agendas and be conscious of the context in which material is being presented. Similarly, people wanting to learn a new field need to be able to determine what they need to learn, to place it in a sensible order, locate it, and have the self-discipline to be able to stick through the necessary background.

It's probably true, though, that most people can't be successful autodidacts. There's a tendency to skip the hard parts, or the background material that (however essential) might be perceived as old and unnecessary. Universities can provide the road maps to avoid this: the curricula for programmes, the skills training, the support, examination, quality assurance and access to the world's foremost experts in the fields, while being only one possible provider of the material being explored. In other words, they can separate the learning material from the learning process -- two aspects that are currently conflated.

I disagree with Colm Kenny on one point. He believes that only government can provide the necessary vision for the future of higher education. I don't think that's necessary at all. Autonomous universities can set their own visions of the future, and can design processes, execute them, assess them, measure their success and refine their offerings -- all without centralised direction. I would actually go further, and argue that the time spent planning a centralised national strategy would be better spent decentralising control of the university system and fostering a more experimental approach to learning. That's what the world's like now, and academia's no different.

Why I don’t sign NDAs

Non-disclosure agreements (NDAs) are something I try not to sign, for various reasons. For one thing, they give a false sense of security; for another, they interfere with me doing my job.

An increasing amount of academic research is supported in one way or another by companies, either directly or through co-funding agreements. This trend is only likely to increase as State funding becomes less common. As well as this, academics are sometimes approached to do consultancy or development work for companies on a more close-to-market basis. This can be great for all concerned: the companies get access to (hopefully) unbiased expert advice that can perhaps take a longer-term view, while the academics get real-world problems to work on and a reality check on some of their ideas.

Despite all this, there are still some problems. Chief among them, I've found, are non-disclosure agreements (NDAs) whereby one or both sides agree not to disclose proprietary information to third parties. Some NDAs are one-way, so (for example) the university agrees not to disclose company information; many are symmetrical and protect both sides, which is obviously more desirable. (Interestingly it's often university-led NDAs that are asymmetric, implying that the companies have no intellectual input...) Although they sound like they're used for competitive reasons -- and sometimes they are -- it's more likely that they're used to protect information for use in later patent applications, where discussion with a third party might be regarded as "publication" and so endanger the patent. Anyone who works in commercial research has to be sensitive to this, and since I used to run a company myself I'm conscious of protecting others' intellectual property.

So why does this cause a problem? Mainly because of the bluntness of the instrument by which the protection happens.

In my job, I have a lot of discussions that involve half-formed ideas. Many of these seem pretty much to condense out of the ether into several people's heads simultaneously: it's in no way uncommon to have two or three people discuss the same "novel" idea within days of each other. I suppose it's just that there's nothing really new under the sun, and people who share a common technical milieu will often see the same problems and arrive at similar solutions. Often the people involved are students: either undergraduates looking for projects, or PhD students with ideas for theses or papers. These people are my "core constituency" in the sense that my main job is to teach and supervise them in a research-led way.

You can probably see where this is going. Suppose a student comes to me with an idea, and that this idea is related in some way to an idea presented by a company with whom I've signed an NDA. What do I do? Refuse to discuss the matter with the student, even though they're fired-up about it? Try to re-focus them onto another area, even though it might be a great idea, because I can't discuss it? Send them to someone else, even though I might be the right person to supervise the work?

What I can't do is get involved, because however hard I try, I'll never be able to prove that information covered by the NDA had no effect on what I said or did -- or didn't say or do, for that matter. That leaves both me and the university open to legal action, especially if by some chance the student's work got a high profile and damaged the company, for example by developing an open-source solution to something they were working on.

This is something of a dilemma. I like working with companies; I love working with students; and I don't like the feeling that my freedom to discuss technology and ideas is being legally constrained.

I therefore minimise my exposure to NDAs and confidentiality agreements. It's sometimes unavoidable, for example as part of EU project consortium agreements. But as a general rule I don't think NDAs sit well with academics, and there's too much danger of damaging the general openness of research within a university: too much of a sacrifice just to get a single funded project. I'll happily agree to keep information confidential, but the risks of signing a blunt and broad agreement to that effect are just too great.

Modern postcodes

Ireland doesn't have a postcode system -- a state of affairs that causes endless problems with badly-designed web sites that expect them, as well as with courier deliveries. But of course in the internet age there's no reason to wait for the State to act...

It always surprises people that Ireland doesn't have post codes, when pretty much everywhere else does. Dublin has postal districts -- I used to work in Dublin 4, which covers about 50 square kilometres and so doesn't really function as an aid to delivery or navigation -- but there's nothing similar in the rest of the country. Add to this the fact that many country villages don't have street names either, and you start to understand why getting a package delivered by a courier usually involves a phone call to talk them along the route.

In actual fact the problem is less severe than you might expect, because the postal system works rather well. This is because the postmen and women get very good at learning where each person lives by name, so a name, village and county will normally get through (outside a major town). The villages are small and people tend not to move too frequently, so human-based routing works well for frequent contacts. For couriers and infrequent service providers, though, it's a different story. I usually have to take phone calls from the people who deliver heating oil, for example, because a twice-a-year delivery isn't enough for them to remember where we live.

One might think that this situation would be easily remedied: choose a postcode system and implement it. But this leads to the next thing that people often find surprising: many people in the countryside, despite the obvious benefits in terms of convenience and efficiency, are implacably opposed to postcodes. The fear is that such a system would do away with the quaint townland names: each village will often have several smaller townlands surrounding it, that get mentioned in the postal addresses. These often have a lot of history attached to them, and in some parts of the country are written in Irish even when nothing else is. Every time the idea of national postcodes is raised in the national press a whole host of letters are published opposing it and predicting the death of the rural Irish lifestyle, and it seems that this has been enough to stymie the implementation of the system on a national basis. There are a disproportionate number of Irish parliamentary seats in rural areas, so political parties are loath to do anything that alienates the rural vote.

In past times, that would have been it: the State doesn't act, end of story. But we're now in the internet age, and one doesn't have to wait for the State.

I just came across Loc8, a company that has established an all-Ireland post code system. They've devised a system of short codes that can be used to locate houses to within about 6m. So -- to take an example directly from the web site -- the Burlington Hotel in Dublin has Loc8 code NN5-39-YD7. The codes are structured according to the expected hierarchy of a zone, a locality and then a specific location, with the latter (as far as I can tell) being sparse and so not predictable from neighbouring codes: you can't derive the code of one property by knowing one nearby. (I might be wrong about that, though.)
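Purely as an illustration of that zone/locality/location hierarchy -- the real Loc8 alphabet and any check-digit rules aren't documented here, so the pattern below is inferred only from the single example code above and may well be wrong in detail -- here's a sketch of splitting a Loc8-shaped code into its parts:

```python
import re

# Hypothetical shape inferred from the example NN5-39-YD7: three
# alphanumeric groups (zone, locality, location) joined by hyphens.
LOC8_SHAPE = re.compile(r"^([A-Z0-9]{3})-([A-Z0-9]{2})-([A-Z0-9]{3})$")

def split_loc8(code):
    """Split a Loc8-style code into its (zone, locality, location) parts."""
    m = LOC8_SHAPE.match(code.strip().upper())
    if not m:
        raise ValueError(f"not a Loc8-shaped code: {code!r}")
    return m.groups()

print(split_loc8("NN5-39-YD7"))  # ('NN5', '39', 'YD7')
```

The point of the sparse location part is that nothing useful falls out of arithmetic on the final group: the hierarchy narrows the search, but the last step is a lookup, not a calculation.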

So far so good. If the service is useful, people will use it: and apparently they are. You can enter a Loc8 code into some GPS systems already, which is a major step forward. The courier companies -- and even, apparently, the national postal service, An Post -- will apparently take Loc8 codes too. There's also a plug-in for Firefox that will look up a Loc8 code from the context menu: try it on the code above. It's a bit clunky -- why does it need to pop-up a confirmation dialogue? -- and integration with something like Hyperwords would make it even more useable, but it's a start.

What I like about Loc8 is that it's free and (fairly) open: you can look a code up on their web site and it'll be displayed using Google Maps. The integration with commercial GPS systems is a great move: I don't know if it's integrated with the Google navigation on Android, but if it isn't it'd be easy enough for Loc8 to do -- or indeed for anyone else, and that's the great bonus over (for example) the closed pay-world of UK postcode geolocation.

The real story here is that it's possible to mash-up a national-scale system using extremely simple web tools, make it available over the web -- and then, if there's value enough, cut commercial deals with other providers to exploit the added value. That sort of national reach is a real novelty, and something we're bound to see more of: I'd like something similar for phone numbers, skype names and the like, all mashed-up and intermixed.

How to publish an unpopular book?

I've been thinking about writing a book. It won't be a popular success -- trust me -- but that raises the question of how I should publish it.

I've never written a book, although I've written a lot of papers and edited a couple of conference proceedings and other collections: writing is one of the things academics do for a living. But I've been thinking for a while about writing a book on how to build a programming language. This isn't something that JK Rowling (my neighbour in Morningside) needs to worry will eat into her royalties, obviously, but it's something that'd be of interest to a certain group of people. I've been teaching programming, compilers, language principles and the like for several years, and have sometimes shown classes how to build interpreters and compilers from the ground up. I'd like to share this with a wider audience, and show how the tools and techniques of languages can be used to make a whole class of problems easier and more fun to solve. There's something very liberating and exciting (to me, anyway) about understanding the tools of programming in their most raw, and of being able to change them to suit particular circumstances. It also brings a new perspective to things one might encounter in particular languages, that can encourage experimentation and invention and the re-purposing of tools to new domains.

It's not the writing that's the problem: the problem is the publishing. Clearly, no matter how excited I get about these things, it's going to be a pretty minority interest: not even most computer scientists write compilers. So it's hardly going to be a best-seller. But that then raises an interesting question. Traditional book publishing is about getting visibility and distribution for your work, with a view to maximising circulation, impact and royalties. If there's no money to be had, and the target audience is by definition computer- and internet-aware, are there better ways of getting the same (or better) visibility, distribution and impact, and reaching the audience more effectively than one can by traditional means?

What, in the 21st century, is the most effective way to publish an unpopular book?

In one way the internet answers this question in an obvious way: put a file on a web server. But that still leaves the visibility and impact parts to be solved -- and there are half-a-dozen ways to make the text available on a web server, too. We can split the problem between these two parts, though: how to write and represent the text, and how to let people know it's there.

Distribution and format

Web site. A lot of books have associated web sites, for errata and additional material, sometimes freely available and sometimes behind a paywall. Clearly one could put a whole book up on a site, as well as any associated software, with a suitable licence to allow for downloading and whatever copying seems permissible. I use this approach for this site: all the content is under a Creative Commons licence that allows non-commercial sharing with attribution, and allows re-mixing as long as the derived work is also shared under the same or similar terms.

Most web sites require that you be on-line to read them, although that's not necessarily the case for systems like TiddlyWiki that download in one file. And one can get all the benefits of non-linear browsing and re-purposing by using proper hypertext as opposed to PDF.

E-book. E-books have a lot to recommend them, especially their portability and download-ability. PDF is a popular format, but EPUB is probably a better choice: you get reflowing, hyperlinking and portability to small devices with no effort, in a way that's hard for a PDF.

Of course these formats aren't mutually exclusive, and one could easily come up with a writing system that can generate PDF, EPUB and indeed HTML from the same sources.
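As a hedged sketch of what "one source, many formats" might look like: the toy Python below converts a single source string (headings and paragraphs only) to HTML, and packages the same output as a minimal EPUB -- which is, at bottom, just a zip archive with a prescribed layout. A real pipeline would use a proper tool such as pandoc; the file names and metadata here are placeholders, and a strictly valid EPUB needs more (a table of contents, for one) than this bare-bones version provides.

```python
# Toy single-source publishing: one source string, two output formats.
import zipfile

SOURCE = "# Chapter 1\n\nHello, parsers."

def to_html(text):
    # Trivial "conversion": top-level headings and paragraphs only.
    parts = []
    for block in text.split("\n\n"):
        if block.startswith("# "):
            parts.append(f"<h1>{block[2:]}</h1>")
        else:
            parts.append(f"<p>{block}</p>")
    return "\n".join(parts)

def to_epub(text, path="book.epub"):
    # An EPUB is a zip file: an uncompressed "mimetype" entry first,
    # a container.xml pointing at the package file, the package file
    # (metadata, manifest, spine), and the content documents.
    chapter = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<html xmlns="http://www.w3.org/1999/xhtml">'
        f"<head><title>Book</title></head><body>{to_html(text)}</body></html>"
    )
    container = (
        '<?xml version="1.0"?>'
        '<container version="1.0" '
        'xmlns="urn:oasis:names:tc:opendocument:xmlns:container">'
        '<rootfiles><rootfile full-path="content.opf" '
        'media-type="application/oebps-package+xml"/></rootfiles></container>'
    )
    opf = (
        '<?xml version="1.0"?>'
        '<package xmlns="http://www.idpf.org/2007/opf" version="2.0" '
        'unique-identifier="id">'
        '<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">'
        "<dc:title>Book</dc:title><dc:language>en</dc:language>"
        '<dc:identifier id="id">example-book</dc:identifier></metadata>'
        '<manifest><item id="ch1" href="ch1.xhtml" '
        'media-type="application/xhtml+xml"/></manifest>'
        '<spine><itemref idref="ch1"/></spine></package>'
    )
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must come first and be stored uncompressed.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container)
        z.writestr("content.opf", opf)
        z.writestr("ch1.xhtml", chapter)
    return path
```

The HTML output can be dropped straight into a web site, so the same source feeds both delivery routes -- and via a tool like wkhtmltopdf or LaTeX, the PDF route too.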

Blog. The above are still fairly traditional approaches, varying in terms of delivery medium. What about blogging a book, allowing it to evolve over time?

I can think of one immediate disadvantage, which would be the danger of a lack of flow and a disjointedness that comes from not wrapping-up a work as a single entity. But of course there are some significant advantages: there's no reason one couldn't write large chunks of text and then blog them over time, and refine the text using comments before generating an e-book or re-linking it into a more conventional web site.

Wiki/group blog. If we accept the no money/lots of copying philosophy, then perhaps there's no reason to be precious about authorship either. A group blog or a wiki that encourages participation and updating might make sense: a sort of Wikipedia for programming languages, in which chapters can be commented on and edited by a community (if one forms). This might generate a work that's more complete than one I could write myself, if one got contributions from the appropriate, knowledgeable people. It could also degenerate into a farce without clear editing guidelines and proper curation: essentially the problems of a normal blog, writ large.

Wikis, and especially Wikipedia, often get trashed by academics. This isn't an opinion I completely share. At their best, wikis harness the best people with an interest in a subject. Their content needs protection from the stupid, vain, deluded, vicious and malicious, but none of that outweighs the benefits of having potentially every expert in the world contributing. A traditional encyclopaedia is not necessarily more reliable -- look up "anthropology" in an early Encyclopaedia Britannica to see how fallible expert opinion is to modern eyes -- and with care there's no reason why a wiki need be less exacting than a more traditional medium. (Encyclopaediae aren't a good way to learn a subject, for which you really need a structured and knowledgeable guide -- but that's another story.)


Visibility

Visibility subsumes impact, in a sense: if something is very visible and easily found, then its popularity is a direct measure of its significance in its community of interest. And if something is highly visible and still unpopular: well, that's just something to live with. We can split visibility between pull and push: people finding what they're looking for versus being told that there's something they might be interested in.

SEO. Search engine optimisation has evolved from being a valuable skill, through a commodity, to an arms race in which sites try to get search engines to notice them and search engines try not to be manipulated away from whatever they regard as their core metric for importance (PageRank, in the case of Google). Most content managers have SEO packages built-in or available that can help.

Blogs. There are a number of great programming language blogs out there, through which one could solicit help and readers. If the internet demonstrates anything, it's that any small community of interest is globally large -- or at least large enough to keep most people happy. Even language hackers.

Software. For a book about writing languages, I suspect the most effective advertisement is the software that one can develop with the techniques described, or the tools one could use to follow them. The sincerest recommendation is for the software to be used, found useful, and improved by someone else, who's then willing to share their experiences back with the net.

Having written all of the above, I'm still not sure where it leaves me. I'd welcome any comments or descriptions of experiences before I start putting hand to keyboard in the wrong way. Who'd have thought it'd be so complicated? -- although I must say that having these sorts of choices is in itself a major draw, and a great indication of how the web's changing the world.

Call for papers: Programming methods for mobile and pervasive systems

We are looking for papers on programming models, methods and tools for pervasive and mobile systems, for a workshop at the PERVASIVE conference in San Francisco.

2nd International Workshop on Programming Methods for Mobile and Pervasive Systems (PMMPS)


San Francisco, California, USA, June 12, 2011. Co-located with PERVASIVE 2011


Pervasive mobile computing is here, but how these devices and services should be programmed is still something of a mystery. Programming mobile and pervasive applications is more than building client-server or peer-peer systems with mobility, and it is more than providing usable interfaces for mobile devices that may interact with the surrounding context. It includes aspects such as disconnected and low-attention working, spontaneous collaboration, evolving and uncertain security regimes, and integration into services and workflows hosted on the internet. In the past, efforts have focused on the form of human-device interfaces that can be built using mobile and distributed computing tools, or on human-computer interface design based on, for example, the limited screen resolution and real estate provided by a smartphone.

Much of the challenge in building pervasive systems is in bringing together users' expectations of their interactions with the system with the model of a physical and virtual environment with which users interact in the context of the pervasive application.

The aim of this workshop is to bring together researchers in programming languages, software architecture and design, and pervasive systems to present and discuss results and approaches to the development of mobile and pervasive systems. The goal is to begin the process of developing the software design and development tools necessary for the next generation of services in dynamic environments, including mobile and pervasive computing, wireless sensor networks, and adaptive devices.


Potential workshop participants are asked to submit a paper on topics relevant to programming models for mobile and pervasive systems. We are primarily seeking short position papers (2–4 pages), although full papers that have not been published and are not under consideration elsewhere will also be considered (a maximum of 10 pages). Position papers that lay out some of the challenges to programming mobile and pervasive systems, including past failures, are welcome. Papers longer than 10 pages may be automatically rejected by the chairs or the programme committee. From the submissions, the programme committee will strive to balance participation between academia and industry and across topics. Selected papers will appear on the workshop web site; PMMPS has no formal published proceedings. Authors of selected papers will be invited to submit extended versions for publication in an appropriate journal (under negotiation).

Submissions will be accepted through the workshop web site, http://www.pmmps.org

Important dates

  • Submission: 4 February 2011
  • Notifications: 11 March 2011
  • Camera-ready: 2 May 2011
  • Workshop date: 12 June 2011


Organisers

  • Dominic Duggan, Stevens Institute of Technology NJ
  • Simon Dobson, University of St Andrews UK