Wikileaks as reality television

There's something very 21st century about the Wikileaks/Julian Assange affair.  And not in a good way.

The Wikileaks saga dominated the airwaves for the last months of 2010: the revelation of a huge mass of diplomatic communications between the US and the world, drip-fed to newspapers and searched incessantly for data to support each and every pet theory in the world.

The story undoubtedly has enormous colour, and superficially is a story for our time. The data is released on the web, and the authorities strain to close the offending site down. Pressure is put on service providers to withdraw support. This only leads to extensive mirroring of the site, frustrating any attempt to close off access to the data, while the service providers are the subject of distributed denial-of-service attacks by outraged groups of hacktivists. The site's founder has on-again/off-again troubles with the law, being threatened with everything from espionage to treason in the US (despite not being in the US, and despite not being a US citizen, which would seem to make treason rather a long shot), but is arrested and bailed in the UK on foot of a warrant from Sweden for a seemingly unrelated matter. It sounds like a soap opera, and I'm rather afraid that that's all it really is.

Let's start by recognising the legitimate tension between a desire for transparent government and a desire for anonymity and even secrecy in government communications. On the one hand it's clearly in the public interest to have the whistle blown on unsavoury or illegal State activities, and a blanket claim that national security trumps this interest is absurd. On the other hand, some of the Wikileaks data identifies individuals who may be endangered by having their names publicised. Anonymity is often essential for people to reveal information; similarly, it's in the public interest to have diplomats be able and willing to express their opinions openly without fear of public censure or ridicule, since the alternative would be to distort the open exchange of ideas. This is the principle that underlies scientific and other meetings held under what in the UK is referred to as the Chatham House Rule: any comments may be used in any context, but with no individual or institutional attribution.

The Wikileaks disclosures are still ongoing, but what strikes me about what's been seen so far is the almost complete absence of anything that would justify disclosure -- or indeed secrecy. It's simply a recitation of diplomatic chit-chat that sometimes supports information already in the public domain but certainly provides nothing of any additional significance. It's surely not a revelation, for example, that Arab governments are concerned about the Iranian nuclear programme, or that UK diplomats suspected that Sinn Féin knew about the Northern Bank robbery by the IRA (to take two stories at random): we either knew or could surmise this already.

Many have drawn parallels between Wikileaks and the release of the Pentagon Papers in 1971, classified information leaked to the New York Times which fuelled the growing sentiment against the Vietnam war. But in that case there actually was information revealed -- the extent of the military's prior analysis and its subsequent dilution -- that could be argued to be of importance. Wikileaks lacks this sense of solidity.

To me, the whole affair feels like a piece of reality television that happens to have happened over the web, happens to have a frisson of illegality, and happens to have a link to the diplomatic and intelligence communities. The chatter that we're seeing is just that: chatter. It's surprising only to the extent that it's unsurprising. There are no smoking guns, no support for conspiracy theories, no examples of significant ineptitude or corruption -- nothing. It is to a real journalistic coup what "Big Brother" is to a television documentary.

Perhaps we can draw three conclusions from this. The first is the fascination that the 21st century has with real-world data, regardless of its information content. People watch "Big Brother" despite the fact that most of the time nothing happens, and when something does happen it's probably been contrived by the participants or the producers. But even when nothing is happening it can be obscurely compelling. Wikileaks is similar: no real content, but a compulsion to keep looking just in case. And of course if (for the sake of argument) some interesting revelation does surface, how will we know whether it's real or contrived? How will we tell information from disinformation?

The second conclusion is that data is no substitute for interpretation in context. The Pentagon Papers' significance came from the fact that the journalists involved could see there was a story there, and link it to the rest of the news happening alongside, to the people and organisations involved. This journalistic addition is conspicuous by its absence in relation to Wikileaks. This of course goes against Tim Berners-Lee's recently-asserted position that the future of journalism is basically data analysis. I have the greatest respect for Tim, but on this point I think he's badly mistaken. The problem is that real journalism isn't exclusively, or even primarily, about the data: it's about the people, their motivations and behaviours, which often aren't represented in the raw data with which web science concerns itself.

Thirdly, even though the Wikileaks affair is really just an old-fashioned leak to the press, we can see that the web has changed things somewhat. The US authorities moved quickly to attack the site initially hosting the data, but only succeeded in triggering its replication to enough geographically distributed sites to defy further suppression. Once data is out there and judged to be significant, it'll stay out there through the independent actions of concerned individuals. This makes further analysis and contextualisation possible, but doesn't guarantee that it'll happen -- and that's where the real value lies.

Inauguration

A couple of weeks ago I attended my professorial inauguration.

In most universities (in the UK and Ireland, anyway), a professorship is just a job that doesn't involve any particular ceremonial: it doesn't come with a degree, and so doesn't require a graduation. In St Andrews, of course, things are rather more formal. All new professors have to attend a St Andrew's Day graduation in order to swear an oath to the university. I should have been inaugurated last year but couldn't make it because of a prior engagement, so this year I attended along with my colleague Aaron Quigley and ten other new profs.

Aaron and Simon

(Photo courtesy of Brad Herbert.)

The day took place in sub-zero temperatures with swirling snow at times, despite St Andrews overall having the mildest climate in the UK: I had to stay overnight just to make sure I'd make it in, since commuting in from Edinburgh as usual would have been a bit risky.

We started in chapel for a thanksgiving for graduation, where I discovered that the "school song" (or at least the hymn to St Andrew) is sung to the tune of Deutschland über alles, for reasons that remain unclear. Immediately afterwards was the graduation ceremony, held next door in the university's Younger Hall.

Like most "ancient" universities, St Andrews' graduations are full of symbolism. Most of the important parts of the ceremony are conducted in Latin,although not to the extent of excluding all English as happens in Cambridge and in Trinity College Dublin. The academic procession happens juniores priores, with junior staff preceding senior -- and new professors coming first of all, since they're regarded as nobodies until inaugurated. We sat at the front of the hall while the other academics processed into the stage, and waited while the undergraduate and postgraduate degrees were conferred -- which involves being hit on the head with the birretum, a piece of cloth allegedly taken from John Knox' britches.

Academics always play spot-the-gown at these sorts of events. There are basically two classes of gowns. University officers performing their function wear gowns related to their office, so the Principal, while officiating, wears a black-and-white Principal's gown rather than the one for her Harvard doctorate. Regular academics wear the gown related to their highest degree, with the colours, hoods and patterns being determined by the awarding university. A lot of doctoral degrees involve red and gold; St Andrews doctorates are sky blue. (Mine is a fawn/grey gown and hat with a red hood, being a DPhil from the University of York. It's quite understated in comparison to a lot of others, which suits me fine.)  St Andrews is unusual in also having an undergraduate gown, different to the gowns for graduates, which is bright red and made of fleece rather than cotton. Useful for keeping warm on the pier walk. A surprising number of undergraduates wear their gowns around town, which just goes to show the bond that exists between the university and its students, and isn't something one encounters in many other universities.

The inauguration of new professors involved us all coming onto the stage and being asked to swear the oath to uphold the rules and traditions of the university (in Latin, of course -- the important word was polliceor, "I so promise"). At this point I discovered the term for computer science in Latin: computandi ope machinali promovendae. After that we were each presented with a book as a token of our new office (which they took back immediately after we left the stage, although they did buy me a nice copy of Hugh Trevor-Roper's "History and the Enlightenment" later, with a commemorative bookplate in it). The Principal then addressed us, in Latin again: Quod felix fortunatumque sit, spartam nacti estis: hanc exornate ("I wish you all happiness and good fortune. You have been allotted Sparta: do it credit."). And it was done. We processed out of the hall (in reverse order, seniores priores, this time) and back to the main quad of St Salvator's college for photos in the snow and lunch. There was a garden party in the afternoon (thankfully in a heated marquee), and a formal graduation dinner in the evening.

Several years ago, after I got married, someone asked me whether it had made a difference to how I felt. I replied that I'd expected the answer to be "no", in that the important thing was the deciding to stay together and not the piece of paper and the public promise -- but that in fact it did make a huge difference. The public act had a significance in and of itself (for me, at any rate), and did make the whole thing feel more real and more certain. Slightly surprisingly, inauguration feels the same way. In practice it makes no difference to my holding or doing the job, but the public act gives a significance to it that I wasn't expecting (again). The common theme between the two situations may be that the commitment is two-way, between myself and my university colleagues and students in this case. A number of colleagues came up to me afterwards to welcome me -- despite my having been here for over a year -- which makes one feel rather wanted.

There's a lot to be said for ceremonial like this. It's important for the students, of course, to mark their achievement: but it's perhaps equally important for the academic staff, for holding an institution together, especially at times like these when there's something of a feeling of being under siege in a world that doesn't necessarily understand or appreciate what universities are for or what we're trying to accomplish. The coming-together to mark key events, and to affirm that a university is a community with shared goals, is something that can only be good for morale, and is something we should do more of.

Call for papers: Dynamic Distributed Data-Intensive Applications, Programming Abstractions, and Systems

We're looking for papers on the topic of programming with large, dynamic data, for a workshop co-located with HPDC in San Jose next year.
Call for papers

Workshop on Dynamic Distributed Data-Intensive Applications, Programming Abstractions, and Systems (3DAPAS)

To be held in conjunction with HPDC-2011, 8 June 2011, San Jose, CA
There has been a lot of effort in managing and distributing tasks where computational loads are dominant. Such applications have, after all, historically been the drivers of "grid" computing. There has, however, been relatively less effort on tasks where the computational load is matched, or even dominated, by the data load. For such tasks to be able to operate at scale, there are conceptually simple run-time trade-offs that need to be made, such as determining whether to move data to the computation, to keep the data localized and move computational tasks to operate on it in situ, or possibly to do neither and regenerate the data on the fly. Due to fluctuating resource availability and capabilities, as well as insufficient prior information about application requirements, such decisions must be made at run-time. Furthermore, resource, connectivity and/or storage constraints may require the data to be manipulated in transit so that it is "made right" for the consumer. Currently it is very difficult to implement these dynamic decisions or the underlying mechanisms in a general-purpose and scalable fashion.
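To make the trade-off concrete, here is a minimal sketch of such a run-time placement decision, expressed as a simple cost comparison in Python. The function and all of the sizes and rates are illustrative assumptions rather than part of any existing system:

```python
# Illustrative sketch only: estimate the cost of each placement strategy
# described above and pick the cheapest.

def placement_decision(data_bytes, code_bytes,
                       bandwidth_bytes_per_s, regen_seconds):
    """Return the cheapest strategy and its estimated cost in seconds."""
    options = {
        "move data to compute": data_bytes / bandwidth_bytes_per_s,
        "move compute to data": code_bytes / bandwidth_bytes_per_s,
        "regenerate data in situ": regen_seconds,
    }
    strategy = min(options, key=options.get)
    return strategy, options[strategy]

# Hypothetical example: 10 GB of data, a 50 MB task, a 100 MB/s link,
# and data that would take an hour to regenerate.
strategy, cost = placement_decision(10e9, 50e6, 100e6, 3600.0)
print(f"{strategy}: ~{cost:.1f}s")
```

In practice the bandwidths and regeneration costs would themselves have to be measured or estimated at run-time, which is exactly what makes the decision dynamic rather than something that can be fixed at design time.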
Although the increasing volumes and complexity of data will make many problems data-load dominated, the computational requirements will still be high. In practice, data-intensive applications will encompass data-driven applications. For example, many data-driven applications will involve computational activities triggered as a consequence of independently created data; thus it is imperative for an application to be able to respond to unplanned changes in data load or content. Therefore, understanding how to support dynamic computations is a fundamental, but currently missing, element in data-intensive computing. This workshop will operate at the triple point of dynamic, distributed and data-intensive (3D) attributes. It will also focus on innovative approaches for scalability in the end-to-end real-time processing of scientific data. We refer to 3D applications as those that are data-intensive, need to support and respond to dynamic data, and are either fundamentally distributed or need to be. We are interested in papers that span the spectrum from the design of cyberinfrastructure to support 3D applications, to novel application examples. We are also looking to bring researchers together to look at holistic, rather than piecewise, approaches to the end-to-end processing and managing of scientific data.

3DAPAS builds upon a 3-year research theme on Distributed Programming Abstractions (DPA), which has held a series of related workshops (see: DPA Past Events) including but not limited to e-Science 2008, Euro-Par 2008 and the CLADE series. 3DAPAS will also draw on ideas from the ongoing 3DPAS Research Theme funded by the NSF and UK EPSRC.

Topics of interest include but are not limited to:

  • Case studies of development, deployment and execution of representative 3D applications
  • Programming systems, abstractions, and models for 3D applications
  • What are the common, minimally complete, characteristics of 3D applications?
  • What are major barriers to the development, deployment, and execution of 3D applications? What are the primary challenges of 3D applications at scale?
  • What patterns exist within 3D applications, and are there commonalities in the way such patterns are used?
  • How can programming models, abstraction and systems for data-intensive applications be extended to support dynamic data applications?
  • Tools, environments and programming support that exist to enable emerging distributed infrastructure to support the requirements of dynamic applications (including but not limited to streaming data and in-transit data analysis)
  • Data-intensive dynamic workflow and in-transit data manipulation
  • Abstractions and mechanisms for dynamic code deployment and "moving the code to the data"
  • Application drivers for end-to-end scientific data management
  • Runtime support for in-situ analysis
  • System support for high end workflows
  • Hybrid computing solutions for in-situ analysis
  • Technologies to enable multi-platform workflows
Submission Requirements: Authors are invited to submit technical papers of at most 8 pages in PDF format, including all figures and references. Papers should be formatted in the ACM Proceedings Style and submitted via EasyChair. Accepted papers will appear in the conference proceedings, and will be incorporated into the ACM Digital Library.

Submission of a paper implies that at least one author will attend the workshop to present the paper, if it is accepted.

Papers must be self-contained and provide the technical substance required for the program committee to evaluate the paper's contribution. Papers should thoughtfully address all related work. Submitted papers must be original work that has not appeared in and is not under consideration for another conference or a journal. See the ACM Prior Publication Policy for more details.

Important Dates:
Submissions Due: 31 Jan 2011
Paper Decisions Announced: 28 Feb 2011
Final Camera-Ready Papers Due: 24 Mar 2011
Workshop Date: 8 June 2011
(all dates are firm)

Organizers:

  • Daniel S. Katz, University of Chicago & Argonne National Laboratory, USA
  • Shantenu Jha, Louisiana State University, USA & e-Science Institute, UK
  • Jon Weissman, University of Minnesota, USA
Programme Committee Members:
  • Gabrielle Allen, Louisiana State University, USA
  • Malcolm Atkinson, eSI & University of Edinburgh, UK
  • Henri Bal, Vrije Universiteit, Netherlands
  • Jon Blower, Reading e-Science Centre, University of Reading, UK
  • Shawn Brown, University of Pittsburgh & Pittsburgh Supercomputing Center, USA
  • Simon Dobson, University of St. Andrews, UK
  • Dennis Gannon, Microsoft, USA
  • Keith R. Jackson, Lawrence Berkeley National Lab, USA
  • John R. Johnson, Pacific Northwest National Laboratory, USA
  • Scott Klasky, University of Tennessee & Oak Ridge National Laboratory, USA
  • Bertram Ludäscher, University of California, Davis, USA
  • Abani Patra, University at Buffalo, USA
  • Manish Parashar, Rutgers & NSF, USA
  • Omer Rana, Cardiff University, UK
  • Joel Saltz, Emory University, USA
  • Domenico Talia, Università della Calabria, Italy

Call for papers: Software Engineering for Adaptive and Self-Managing Systems

Papers are invited in all aspects of software engineering for adaptive systems, for the SEAMS symposium in Hawaii in May 2011. The deadline is now quite close.

CALL FOR PAPERS

6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2011) (Sponsored by ACM SIGSOFT and IEEE TCSE)

Waikiki, Honolulu, USA 23-24 May 2011

http://2011.seams-symposia.org/

THEME

An increasingly important requirement for a software-based system is the ability to self-manage by adapting itself at run time to handle changing user needs, system intrusions or faults, a changing operational environment, and resource variability. Such a system must configure and reconfigure itself, augment its functionality, continually optimize itself, protect itself, and recover itself, while keeping its complexity hidden from the user.
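A minimal sketch of the monitor-analyse-plan-execute feedback loop that underlies this kind of behaviour, with hypothetical sensor and actuator functions standing in for a real system, might look like the following:

```python
import random

TARGET_LATENCY_MS = 100.0  # the goal the system manages itself towards

def read_latency_ms():
    """Hypothetical sensor: the latency the system currently observes."""
    return random.uniform(50.0, 200.0)

def set_replicas(n):
    """Hypothetical actuator: reconfigure the number of server replicas."""
    print(f"reconfiguring to {n} replicas")

def adaptation_loop(replicas=2, steps=10):
    for _ in range(steps):
        latency = read_latency_ms()              # monitor
        if latency > TARGET_LATENCY_MS:          # analyse against the goal
            replicas += 1                        # plan: scale up
            set_replicas(replicas)               # execute
        elif latency < TARGET_LATENCY_MS / 2 and replicas > 1:
            replicas -= 1                        # plan: scale down
            set_replicas(replicas)

adaptation_loop()
```

Real self-managing systems elaborate every stage of this loop -- richer models, verified plans, distributed actuation -- which is where the software engineering challenges addressed by the symposium arise.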

The topic of self-adaptive and self-managing systems has been studied in a large number of specific areas, including software architectures, fault-tolerant computing, robotics, control systems, programming languages, and biologically-inspired computing.

The objective of this symposium is to bring together researchers and practitioners from many of these diverse areas to engage in stimulating dialogue regarding the fundamental principles, state of the art, and critical challenges of self-adaptive and self-managing systems. Specifically, we intend to focus on the software engineering aspects, including the methods, architectures, algorithms, techniques, and tools that can be used to support dynamic adaptive behavior in self-adaptive, self-managing, self-healing, self-optimizing, self-configuring, and autonomic software.

TOPICS OF INTEREST

We are interested in submissions from both industry and academia on all topics related to this important area. These include, but are not limited to:

  • formal notations for modeling and analyzing software self-adaptation
  • programming language support for self-adaptation
  • reuse support for self-adaptive systems (e.g., patterns, designs, code, etc.)
  • design and architectural support for the self-adaptation of software
  • algorithms for software self-adaptation
  • integration mechanisms for self-adaptive systems
  • evaluation and assurance for self-* systems (e.g., run-time verification)
  • modeling and analysis of adaptive systems (e.g., run-time models, cost-benefit analysis, architectural styles and patterns, requirements)
  • decision-making strategies for self-adaptive and self-organizing systems
  • support for run-time monitoring (for requirements, design, performance, etc.)
  • model problems and exemplars
The following application areas are of particular interest:
  • mobile computing
  • dependable computing
  • autonomous robotics
  • adaptable user interfaces
  • service-oriented systems
  • autonomic computing
PAPER SUBMISSION DETAILS

We are soliciting three types of papers: research papers and experience reports (up to 10 pages, ACM SIG Proceedings Format) and position papers for new ideas (up to 6 pages, ACM SIG Proceedings Format). Research papers should clearly describe the technical contribution and how the work has been validated. Experience reports should describe how an existing technique has been applied to real-world examples, including lessons learned from the experience. New idea papers provide an opportunity to describe novel and promising ideas and/or techniques that might not have been fully validated. All submitted papers will be reviewed by at least three program committee members. Papers must not have been previously published or concurrently submitted elsewhere. The accepted papers will appear in the symposium proceedings that will be published as ACM conference proceedings.

IMPORTANT DATES

Submission deadline: 12th December 2010

Author notification: 15th February 2011

Camera ready copy: 1st March 2011

SYMPOSIUM ORGANIZATION

General Chair: Holger Giese, HPI/Univ. of Potsdam, Germany

Program Chair: Betty H.C. Cheng, Michigan State University, USA

Publicity Chairs: Basil Becker, HPI/Univ. of Potsdam, Germany; Thomas Vogel, HPI/Univ. of Potsdam, Germany

Program Committee:

  • Colin Atkinson University of Mannheim, Germany
  • Robert Baillargeon Panasonic Automotive, USA
  • Luciano Baresi Politecnico di Milano, Italy
  • Nelly Bencomo University of Lancaster, UK
  • Yuriy Brun University of Washington, USA
  • Vinny Cahill Trinity College Dublin, Ireland
  • Shang-Wen Cheng Jet Propulsion Laboratory, USA
  • Simon Dobson University of St. Andrews, UK
  • Gregor Engels University of Paderborn, Germany
  • Cristina Gacek City University, UK
  • David Garlan Carnegie Mellon University, USA
  • Kurt Geihs University of Kassel, Germany
  • Carlo Ghezzi Politecnico di Milano, Italy
  • Svein Hallsteinsen SINTEF, Norway
  • Paola Inverardi University of L'Aquila, Italy
  • Jean-Marc Jezequel IRISA-INRIA, France
  • Gabor Karsai Vanderbilt University, USA
  • Jeff Magee Imperial College London, UK
  • Nenad Medvidovic University of Southern California, USA
  • John Mylopoulos University of Trento, Italy
  • Hausi Müller University of Victoria, BC, Canada
  • Sooyong Park Sogang University, S. Korea
  • Anna Perini FBK-IRST, Center for Information Technology, Italy
  • Masoud Sadjadi Florida International University, USA
  • Onn Shehory IBM-Haifa Research, Israel
  • Roy Sterritt University of Ulster, UK
  • Danny Weyns Katholieke Universiteit Leuven, Belgium
  • Andrea Zisman City University, UK
Steering Committee:
  • Betty H.C. Cheng Michigan State University, USA
  • Rogério de Lemos University of Kent, UK
  • David Garlan Carnegie Mellon University, USA
  • Holger Giese HPI/Univ. of Potsdam, Germany
  • Marin Litoiu York University, Canada
  • Jeff Magee Imperial College London, UK
  • Hausi Müller University of Victoria, Canada
  • Mauro Pezzè University of Lugano, Switzerland, and University of Milano-Bicocca, Italy
  • Richard Taylor University of California, Irvine, USA
FURTHER INFORMATION

Symposia-related email should be addressed to seams2011@seams-symposia.org

It's hard being a man in Middle Earth

The Lord of the Rings is about men, their deeds and courage in the face of seemingly overwhelming odds. Which makes it strange that, in the main, they get an incredibly raw deal.

I have to say I absolutely love the books: The Hobbit, The Lord of the Rings (LOTR), The Silmarillion, Unfinished Tales, and the other works in the same area. The films were cinematographic masterpieces, although not sufficiently true to the original for my tastes. The problem I have is one of motivation for the men involved.

If you're a man in Middle Earth, you're pretty much guaranteed to be inferior in some significant way to any other intelligent creature you come across. You can't be wise, in any absolute sense: the elves and the wizards live essentially forever, and they have wisdom sewn up. You can't be strong, because dwarves and elves (again) are stronger and have more stamina. You can't be artistic, because elves (yet again) are the undisputed masters of all the high arts, and will condescend to teach you what crumbs of their learning you can grasp during your incredibly short lifespan. You can't even be relaxed and laid-back, because hobbits will always make you look stressed-out. Evil's not an option, as Sauron is the ultimate baddie and has legions of orcs (who are corrupted elves in the books, and so have many of the advantages of elves without the goodness).

If you're a man, all you can do is die gloriously.

LOTR is essentially an epic poem in the mould of the Iliad. At the risk of gross over-simplification, epic poems only have three sorts of characters:

  • Heroes, who do all the things worth reporting
  • Love interest, who are pursued (and generally won) by heroes
  • Arrow-fodder, who are killed by heroes
In the Iliad, Achilles, Agamemnon, Paris, Hector and a few others fall into the first category; Helen into the second; and everyone else into the third. (Menelaus is a kind of not-quite-major character who's not completely insignificant, but he's there to provide an excuse for the war in the first place.) Arrow-fodder comprises all the minor characters: they can be brave or dastardly, but typically only occupy the very edges of the story and won't have their characters developed any more than is necessary to set off the heroes and make it clear why they have to be killed in their particular way.

Compare this with LOTR. The major characters are obviously Frodo, Gandalf, Aragorn and the rest of the fellowship, plus Saruman. There's Arwen, who (in the book at least, although not the films) is just love interest, and Eowyn. And then there are the minor characters: hordes of orcs to slaughter, Men of Rohan and Gondor — and that's about it.

There are a small number of less-than-major characters: Denethor, Faramir (who's essentially just another love interest to get Aragorn off the hook with Eowyn), Treebeard. And there's another population of less-than-major characters who are major elsewhere: Bilbo, Tom Bombadil, Galadriel, Elrond. All these are bit-players in LOTR but have a major part in either The Silmarillion or one of the other tales. (Elrond is something of an exception, in that his main claim to fame is having been around at lots of significant events. He just never gets a chance to be a protagonist.)

But the thing with being a man in Middle Earth is the way in which your actions are so circumscribed. No man in LOTR actually dies anything other than gloriously. Even Boromir is redeemed through his death. Even the evil men are happy to go down fighting. It's as though men's sole virtue is to have a bad time and then die.

Aragorn escapes this fate, and is the only heroic (in the epic sense) man in the story, but even he is doomed to failure by his mortality, and you can't escape the suspicion that Gondor will collapse as soon as he's dead. From the elves' perspective, destroying Sauron is an absolute good, and they can then all leave for the lands over the Sea which are a remnant of earlier, better times; from men's perspective, it'll probably be a Pyrrhic victory at best.

In many ways LOTR is a story about passing. The elves' time in Middle Earth is past — although I think the films grotesquely overstate their predicament compared to the book — and men can't ever build anything permanent because they just don't have the wisdom/lifespan/art/strength/goodness to cut it. That of course is what sets LOTR apart from "happily ever after" fantasy (like David Eddings' works). By the end of the book, though, everything looks drab, and without Sauron there won't even be any worthwhile evil left. You have to wonder what men will find to motivate themselves, when the best is all in the past and even glorious and worthy death is denied them. One can imagine a lot of drunken fireside reminiscence going on.

If that weren't bad enough, anyone who's read The Silmarillion knows that Middle Earth has been passing for a very long time. Even though in LOTR Sauron is the ultimate in badness, in the greater scheme of things he was only a servant of Morgoth, the really, really ultimate baddie. Even an evil that threatens to engulf the entire world in the Third Age is only a shadow of what evil used to be like in the First Age — modern evil just can't cut it.

It seems a pretty depressing view of history and heroism, but maybe that's the point: that the Fourth Age will be a post-heroic let-down with everyone left dissatisfied and wishing for the days when there were orcs and dragons and Black Riders and magic.

The view of the internet, 15 years ago

I was just sent a link to an article from 1995 on how the internet is over-hyped. It's a fascinating read, not just in terms of the things it gets wrong but also of the ways in which the views expressed were plausible at the time.

The article in question is "The Internet? Bah!" by Clifford Stoll, and appeared in Newsweek on 27 February 1995. For those whose memories of computer culture don't stretch back this far, Stoll has form. He was a system manager at Lawrence Berkeley laboratory in California during a serious attempt to crack US military computers -- one of the first examples of modern cyber-warfare. Rather than shut out the crackers when he found them, he instead worked alongside a largely uncomprehending law enforcement community to help track them down, and brilliantly tells the story in his book The Cuckoo's Egg. He then got concerned about the over-selling of computer technology for his next book, Silicon Snake Oil. His Newsweek article is in this latter vein.

The crux of Stoll's argument is that the internet will never replace traditional off-line activities like shopping for books, accessing a newspaper and the like. The internet is simply

...one big ocean of unedited data, without any pretense of completeness. Lacking editors, reviewers or critics, the Internet has become a wasteland of unfiltered data. You don't know what to ignore and what's worth reading.
It's barely worth noting that many of these arguments have been invalidated by events. That's hardly surprising, and while a technologist of Stoll's standing should perhaps have been more wary about some of his predictions, the more important point is how the internet evolved to address points that, from a 1995 perspective, seem completely natural.

Stoll's comments on electronic publishing are perhaps the most interesting:

How about electronic publishing? Try reading a book on disc. At best, it's an unpleasant chore: the myopic glow of a clunky computer replaces the friendly pages of a book. And you can't tote that laptop to the beach. Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we'll soon buy books and newspapers straight over the Internet. Uh, sure.
And, of course, he's right: who would want to read a book on a 1995 green-screen, or indeed on one of those then fairly new-fangled windowed displays? That's only changing now, when displays have similar resolution to paper as far as the eye is concerned, and when e-paper displays can be read in direct sunlight -- and when one can take an iPad or a Kindle to the beach, albeit rather carefully, and buy books not only straight over the internet but even completely untethered over the cellphone network. A similar argument can be made to take down the article's discussions about e-shopping for airline tickets and restaurant reservations, e-government and access to information, and so forth.

But the fact remains that Stoll's analysis of the internet circa 1995 wasn't too far off the mark. Where did things change? I suspect the clue is in the last paragraph:

What's missing from this electronic wonderland? Human contact. ... Computers and networks isolate us from one another.
Again, not an unreasonable view in 1995. No-one I can remember really suggested that social networks would flourish, and indeed come to almost define the web and internet in the early 21st century. And that's rather surprising, given that the first "killer app" for the internet was e-mail, and not (as was expected) scientific data exchange: a social technology rapidly took off in a place where no such socialisation was expected. The surprise is that we were surprised again -- and I include myself in that surprise -- when the history of the internet clearly showed that its users see it as a social enabler as much as, if not more than, as an information source.

Clearly we shouldn't abandon the sorts of critical comments that Stoll was making, or worry that predictions about technology are almost always overtaken by events we had no idea were coming. But it does mean that whenever we hear comments on the social value of technology and the impact it will have on society -- as is happening over internet reading and other technologies at the moment -- we should pause and think whether the negatives identified are somehow intrinsic, or whether they rest solely on the systems as currently deployed and conceived. We're familiar with the idea of a network effect. The strongest network effects are in the abilities of people to re-use and re-purpose technology beyond the bounds conceived of by its inventors. It's only really surprising when this doesn't happen.

The three academic stereotypes

You encounter a lot of different personalities in academia, but when you get right down to it they all seem to fall under three basic stereotypes.

OK, I admit it's a gross over-simplification, but here we go:

The young gun

Young guns are the change-makers of academia: the people who want to change anything that doesn't work and replace it with something better -- or at least something different that can be tried out and tested to see if it is better. This often makes them talented researchers (although not necessarily more so than stalwarts, the next stereotype), but they're typically found in larger groups, taking on larger projects and collaborations.

Young guns can be of any age. They tend to be young, of course -- often junior staff who are keen and un-jaded, who want to move their discipline forward and teach it as well as it can be taught. They're often found pushing for new modules, new degrees, new ways of teaching and assessment -- and for promotion. But they don't have to be calendar-years young: the gun-iest young gun I've ever met was in his late 40s when I first went to work for him, is now approaching retirement, and still has more ideas and energy than most people a third his age. These people feel young even when they aren't.

As is probably apparent, young guns can be hell to work with, since for them everything is potentially up for grabs. They're often (but not always) better at idea-forming than at execution, and often (but not always) lack the long-term detail-orientation to make sure that their ideas work out. They are often (but not always) egotists, who may not recognise when a change they've become passionate about is wrong or not working. But without them there's no-one to drive change forward and make sure that schools and disciplines stay fresh and relevant.

The stalwart

The majority of people in academia are stalwarts: people who have clear ideas about the things they want to do and how to do them, but are essentially positive about their activities. They might not lead change, but they'll row in alongside if they like it and will accommodate themselves to anything that's broadly agreed.

This applies equally to teaching and research. The typical stalwart will teach modules carefully, perhaps changing the content and delivery only slowly if left to their own devices but being perfectly open to updating to include new ideas. In research they will typically be found slightly off the mainstream with a small group of students (often only one at a time), not going for large grants or big collaborations but being solid supervisors and contributors at a small-to-medium scale. Often they come up with great ideas, because they pursue a line of research solidly over a long period and so become world experts. The ideas might not be widely circulated, and so it's easy to underestimate the sophistication stalwarts bring to their work. They may need prodding to publish appropriately, but they'll then address that problem as effectively as everything else they do.

Stalwarts are essentially positive people, working within a well-defined comfort zone. They are the backbone of any research project or school, and need to be appreciated and rewarded appropriately. They also need to be listened to carefully, since they provide a stability and a sanity that young guns often lack, and will make sure that changes are properly thought through and executed upon.

The twisted nay-sayer

The first two stereotypes are basically positive, but it all goes downhill with the third: the twisted nay-sayer. (I can't claim credit for the great name, incidentally, which is due to Paddy Nixon.) Twisted nay-sayers oppose all change, no matter of what kind and no matter how motivated, and will continue to oppose changes even long after the decision has been made and the time for action has passed.

On first acquaintance, a twisted nay-sayer often seems to be someone who's stuck in the mud, after an easy life, and not wanting to have the hassle of changing -- a bit like a rather negative stalwart. But this is to overlook the twisted part, which will not only avoid change but actively scheme against it, or work to reverse it afterwards. It's this essential negativity that sets this type apart from others. They can, perhaps surprisingly, be excellent researchers, but they'll also be constantly highlighting their successes to anyone who'll listen, even when those successes are long in the past. When faced with a new field or innovation they'll point out that it's really just a poor re-discovery of something that was current years ago, or just an instance of some pet area of theirs that the new innovators should really have found out more about.

The two things to remember about twisted nay-sayers are that they are egotists, and that they are made, not born -- the young gun's dark shadow. You make them by thwarting the expectations they have for their careers. This can happen in two ways: either their expectations were unreasonable, and reality has intruded; or their expectations were completely reasonable but were thwarted by circumstance, malice or indifference. A particularly common case is someone who's been repeatedly passed over for a promotion they think (rightly or wrongly) that they deserve. The repeated denial of their aspirations eventually causes them to give up, turn their back on the future they can't have -- and then rail against fate and everything that comes afterwards.

How, then, are we to deal with the different kinds of academics? Most people fall into some compromise between categories (stalwart with young-gun moments, for example), but clearly one needs to understand an individual's primary motivations in order to know where they're coming from. I think the trick is to make sure a school keeps, listens to -- and occasionally reins-in -- its young guns; recognises and rewards its stalwarts; and tries hard not to grow any twisted nay-sayers.

All these activities are fraught with danger -- especially in academia, where we lack most of the levers of control that normal organisations have. A school can't typically promote on its own recognisance. Not promoting those who feel they deserve it risks overlooking a young gun and having them leave, or (worse) stay, but mutate into a twisted nay-sayer. On the other hand, many promotion boards over-value their young guns and ignore their stalwarts, who then feel under-recognised. Doing so can destroy the stability of a school, and can lead to fragmented research programmes and to teaching being good at the edges but lacking a proper core. It's also worth remembering that some people in senior academic positions are extremely conflict-averse and so will cave in to pressure from twisted nay-sayers in the interests of consensus -- not realising that consensus is neither possible nor desirable, and that acquiescing will only lead to more obstructionism, because it's about the obstruction, not the particular issue at hand. It takes confidence to say yes to experimental change -- but equally it takes confidence to say no when necessary.

Call for papers on managing federated networks

We are seeking papers on the issues involved in managing federations of systems -- systems that cross enterprise boundaries.

ManFED.COM 2011

1st IFIP/IEEE Workshop on Managing Federations and Cooperative Management
Co-located with IM 2011 in Dublin, Ireland.
May 23, 2011
Management approaches that can be applied across organisational boundaries are increasingly important in a wide range of application areas. These range from algorithmic approaches which adapt to the observed behaviour of third-party systems, based on game-theoretic approaches or other predictive models, to explicit organisational federations which adopt coherent solutions and management models to facilitate interoperability among multiple independent organisations.

There are a number of significant, common, complex issues which must be addressed in all technologies and applications that involve federated organisations – how to enable secure governance in the absence of a single, central point of authority; how to achieve semantic interoperability in the absence of common schema; how to provide effective access control in the absence of common user and role models; how to provide analytics and support for effective decision making; how to adapt to environments that can be highly dynamic as well as highly heterogeneous; how to construct and maintain a common inter-domain governance model in the presence of highly diverse local governance infrastructures.

This workshop will, for the first time, bring together researchers from a broad array of application and technical areas who are concerned with cross-domain management. It will draw out common themes, problems and issues encountered, and the solutions being designed to deal with the problems of managing information systems that span autonomous domains. It will aim to provide the basis for a common understanding and common approaches to inter-domain management and governance that synthesise the insights and best-of-breed solutions being developed in the diverse areas in which these problems are encountered.

TOPICS TO BE ADDRESSED, BUT NOT LIMITED TO:
Cross Domain and Federated Management Issues in the following areas:
  • Governance mechanisms for federated environments, e.g. Policy Based Management
  • Collaborative management, algorithmic adaptation, game theoretic approaches, predictive modeling.
  • Modelling cross-domain relationships, e.g. information models, formal specifications, languages for federation
  • Distributed trust management and federated security systems
  • Data federation
  • Semantic technologies, semantic mapping and linked data
  • Information security in federated environments
  • Cloud & grid computing management
  • Software engineering for federated systems, e.g. tool chains, design by contract, model driven engineering and design patterns
  • Model driven approaches for the generation of adaptive inter-domain relationships
  • Adaptive analytics for the support of managing complex, dynamic multi-domain solutions
PAPER SUBMISSION
Authors are invited to submit full papers (8 pages) describing original work. All manuscripts must be written in English and should be prepared in IEEE style. All submitted papers will be reviewed by the ManFed.CoM Technical Program Committee.

For the review, all the papers should be submitted in PDF format through the ManFed.Com page on the JEMS system (https://jems.sbc.org.br/manfed2011), filling every item and uploading the respective papers.

The contributed papers, after being reviewed and accepted by ManFed.CoM referees, will be published in the Conference Proceedings that will be included in the IEEE Conference Publication of IM 2011 and will be available on IEEE Xplore. The papers will also be indexed and abstracted by several databases such as INSPEC, Engineering Index (EI), SCOPUS, Conference Proceedings Citation Index (CPCI), etc.

Finally, the organizing committee are currently in discussions with the editors of leading network and service management journals regarding the publication of a special issue to include best submissions from the workshop.

DEADLINES FOR PAPERS
  • Full Paper Submission: 15th December 2010 (midnight GMT)
  • Acceptance Notification: 30th January 2011 (midnight GMT)
  • Camera-Ready Manuscripts Due: 15th February 2011 (midnight GMT)
TECHNICAL CO-CHAIRS
  • Kevin Feeney, TCD
  • Joel Fleck, HP
GENERAL CO-CHAIRS
  • Rolf Stadler, KTH
  • Brendan Jennings, TSSG
FURTHER INFORMATION

Computer scientists as university principals

I was surprised to discover a computer scientist as principal of a world-leading university the other day. I was even more surprised to discover that there have been several others.

Now I suppose I should first admit that I don't know why I'm surprised by this -- but I am. In most institutions computer science very much takes a back seat compared to other, more established Schools: physics, mathematics, biology, history and the like. It's hard to see why this should be the case, especially given computer science's central role in the new science and the fact that it's often one of the highest-earning Schools in terms of research and innovation income (although St Andrews is unusual if not unique in having a humanity -- international relations -- as its highest-earning School, and from which we've drawn our current Principal).

The person I came across was John Kemeny, who was president of Dartmouth College in New Hampshire through the 1970s. Kemeny is one of the co-inventors of the BASIC programming language, which provided me (and many others) with a first introduction to computer programming. BASIC was invented at Dartmouth, of course, so it's perhaps unsurprising that he should have risen to such a position of influence. It's hard to over-estimate how important BASIC was, in a world then populated by low-level assembly code or compilers of dubious quality: the first Pascal compiler I used would spit out assembly code so you could go through it, optimise and correct it by hand. As an interpreted language, BASIC provided a far simpler and more accessible introduction to what computers were capable of, and even on early 8-bit microcomputers was fast enough to be used for both serious applications and games.

Having tweeted my surprise at this, I was then told about other computer scientists who've led -- or indeed lead -- universities:

  • Maria Klawe (algorithm design, accessibility), President of Harvey Mudd College
  • Tim O'Shea (computer-assisted learning), Principal of the University of Edinburgh
  • Ewan Page (mainframe pioneer), Vice-Chancellor of the University of Reading
  • John Hennessy (processor design), Provost and then President of Stanford University
  • Jane Grimson (databases, health informatics), Vice-Provost of Trinity College Dublin
  • Jeff Vitter (algorithm design), President of the University of Kansas
And of course an honorary mention for:
  • Paddy Nixon (pervasive computing), Vice-Principal for Research, University of Tasmania
Before anybody asks, this is not a tradition I have the slightest interest in or intention of following -- or indeed the ability to do so. But it's great to see that techies can and do aspire to the top job.

The first year

Today marks the first anniversary of my moving to St Andrews. What have I learned since then?

It's a strange feeling to be reflecting on a year of my life, not least because it doesn't feel like a year. Sometimes it feels like a lot less: I still feel very attached to Ireland, and I find that I spend a lot of time comparing the Scottish experience to my experiences of the previous twelve years. But in other ways it feels a lot longer than a year, in that I think I've found a professional home in St Andrews that's exceptionally well-suited to my way of researching and teaching.

So what are the differences? There are several things that spring out. Firstly there's the size of the place -- or lack of it. The university has around 6,000 undergraduate students and maybe 2,000 graduate students, so it's significantly smaller than UCD -- about the same size as Trinity College Dublin, I suppose. This has a corresponding impact on class sizes, where a 30--40 student second-year class, and maybe 10--15 (or less) in third and fourth years, is considered perfectly normal. That in turn leads to a more individual and interactive style of teaching.

The second impact of size is in research, and especially in multi-disciplinary research. St Andrews is so small that one can know everyone (or at least anyone you want to), and can find and gain access to people doing research in whatever topics there are in the university that are of interest. In the past year I've interacted with marine scientists, astronomers, mathematicians, geographers, psychologists, medics and others, on a basis that will probably lead to some sort of proposal for collaboration or funding. One can do that in any full-spectrum university, of course, but size does make a difference: the intimacy of St Andrews, the fact that nowhere is more than a fifteen-minute walk away, makes it so much easier to interact. In a larger institution, and one with a larger computer science contingent, there's a strong tendency to remain within a smaller comfort zone that's not conducive to multi-disciplinary collaborations. I think we'll be able to leverage our smallness.

Thirdly, St Andrews takes teaching way more seriously than any other university I've worked in (or studied in, for that matter). All universities claim that teaching is a core part of their mission, of course, but it often doesn't get treated with the same urgency or seriousness as research. That's in part a function of how we're evaluated: both individual staff promotions and the most popular global university rankings are heavily biased towards research excellence, and staff and managers inevitably respond to those incentives. But I suspect it's more than that. St Andrews' processes are very focused on teaching, as is the academic culture, in a way that's uncommon in my experience (which is limited to the UK and Ireland, of course). I do all my own teaching, all my own marking, and participate in small-group tutorials both for and beyond the modules I teach myself. Moreover the processes of assessment, tracking and evaluation of students' progress have tool support and are monitored from both the School and the centre. Other universities I've worked in don't have this degree of monitoring -- or indeed any monitoring. That doesn't mean that teaching isn't done well in those places, of course, but it does indicate where a university's priorities lie. (In case this sounds like altruism, it isn't: the National Student Survey results pull St Andrews up the rankings that take account of student satisfaction, and mean that the university can legitimately lay claim to offering an excellent experience to prospective students. It's a good example of the university taking a broader and longer-term view than other institutions.)

Following on from this, Scottish universities teach degrees with a "broad curriculum" in which students take a rather general two-year sub-honours programme before specialising into a further two-year honours programme. This provides a broader base for students and avoids too-early specialisation, which I think is a good idea. I hadn't quite appreciated what a difference it makes in practice until earlier this week, when I did a lecture on the history of the internet and what its evolution means for society in terms of publishing, privacy, trust and access to information. I've done technical lectures like this to computer science students before -- I've used the evolution of the internet as a case study of large-scale systems design for software engineering students, for example -- but this was an introductory lecture for first-year students from across the university, both sciences and humanities. That's not something that happens very often these days: in fact, in twelve years as an academic I've never lectured a broad-ranging class like that before.

There's something rather exciting about being able to address broader questions of technology's impact on society, and to set essay-style questions, when one is used to the more technical style of scientific lecturing. Not only does it allow a more far-reaching and questioning style of teaching, and the associated invitation to oneself to think through the broader questions: it also feels like it might have an impact, however small and however subtle, on a wide range of students who'll perhaps never encounter computer science again -- but who will undoubtedly be affected by it profoundly as part of its impact on society. It was particularly nice to bring the recent discussions about the internet's effects on learning to their attention, as well as to talk about sensing and its effects on privacy.

Are there any things not to like? The lack of a senior common room is a little peculiar in a university of this age, only partially made up for by a staff dining club that meets only infrequently. There are actually very few social occasions for staff across the university, which is a shame given the effort put in to student societies and the student experience: some more activities targeted at academics and researchers would be welcome. St Andrews itself is also somewhat remote from the rest of Scotland, more so than I first thought from looking at the map, but we've addressed this by moving to Edinburgh, which has everything one could want from a culturally vibrant city. I'm looking forward to what the next year may bring.