Laureation for Professor Dana Scott

I had the honour (and the great personal pleasure) of inviting the Vice-Chancellor to bestow an honorary degree upon Dana Scott, the inventor of some of the most influential ideas in computer science.

Vice-Chancellor, I have the privilege to present Professor Dana Scott for the degree of Doctor of Science, honoris causa.

Vice-Chancellor, colleagues, friends, ladies and gentlemen:

For millennia, people have performed calculations, sometimes changing the way we live or understand the world. Many of these calculations have involved long, complicated sequences of actions — what we now refer to as algorithms. But it was only in the 1930s that researchers such as Alonzo Church, John von Neumann, Alan Turing, and others formally studied how we perform calculations, which rapidly opened up the mechanisation of such operations and led to what we now know as computer science.

What does it mean to describe a calculation? For Turing, it meant designing an ideal machine whose small set of simple operations could perform computation — an operational view of computing that allows machines to perform tasks previously thought to require humans. But we can also think of computation independently of mechanisation, where mathematics can be applied to studying computation, and a theory of computation becomes available for the study of mathematics, physics, and other disciplines. And when we take this view, we are making use of ideas that owe their modern existence to the work of Dana Scott.

Scott was a PhD student of the logician Alonzo Church, whom I mentioned earlier. Working with the late Christopher Strachey at Oxford, Scott developed a theory of computation that allows calculations to be analysed, studied, and compared. Scott’s insight was to view computation as a steady increase in information. His development of the mathematical structures now known as Scott domains provided a way of precisely describing this progression. They in turn led directly to an approach for formally describing programs and programming languages — the Scott-Strachey approach to denotational semantics — and indirectly both to approaches to proving programs correct, and to the development of lazy functional programming languages that today form a major strand of computer science research: one to which St Andrews is proud to be making an ongoing contribution.

If asked, most computer scientists would agree that denotational semantics forms Scott’s most lasting contribution; they might marvel that, later this year, at the age of 81, he will be delivering a keynote lecture in Vienna at the main international conference on computational logic; and they would probably be able to tell you that he is a recipient of the Turing Award, often referred to as the “Nobel Prize for Computer Science”. However, Scott in fact won the Turing Award, jointly with Michael Rabin, for work on automata theory that predates his work on semantics. In other words, he won the highest accolade his discipline has to offer for work not generally considered to be his most significant. As you might imagine, this is a rather unusual occurrence: in fact, the only other example I can find in the entire history of science is the award of the Nobel Prize to Albert Einstein for work other than his theory of relativity. That’s not bad company to be keeping.

When we think of computers, we often think of their visible manifestations: the internet, mobile phones, aircraft flight control systems, Angry Birds. But no matter how impressive, and how much they continue to change our lives for the better, these systems are possible only because of the foundational intellectual developments that let us reason about proofs, calculations, and computations, as well as simply carrying them out. Vice-Chancellor, the work of Dana Scott grounded the discipline of computer science, not only in a specific piece of theory, but also in an approach and a mindset that changed how we think about computing and, through this, has had a profound influence across the whole of human endeavour. It is in recognition of these seminal contributions to science that I invite you to confer upon Professor Dana Scott the degree of Doctor of Science, honoris causa.


(Thanks to Al Dearle, Steve Linton, Lisa Dow, and Muffy Calder for comments that made this better than the first draft I did.)

Flatland: A Romance of Many Dimensions

Edwin A. Abbott


A satire of Victorian society, this little book also manages to be a pretty good introduction to abstract higher geometry. Written from the perspective of an inhabitant of a two-dimensional universe, it features social descriptions, dream sequences into one dimension, a subsequent venture into three dimensions, and the narrator's final coming to terms with his society's inability to believe his insights.

The parallels with Gulliver's Travels are obvious, and Abbott is a better scientist and mathematician than Swift but a less subtle satirist. Having said that, he manages to land some blows: the upper-class aversion to "feeling" is probably my favourite, but his treatment of the women of Flatland, and of the need for (and impact of) wholesale social lying, also brings a smile.

Finished on Thu, 12 Jun 2014 00:00:00 -0700.   Rating 3/5.

An Astronaut's Guide to Life on Earth

Chris Hadfield


Part memoir, part self-help book, this is an excellent overview of an astronaut's life and the mental attitudes that have made it possible. Chris Hadfield flew into space three times, and manages to convey the excitement as well as the boredom and attention to detail that allowed him to become an astronaut and to crown his career by commanding the International Space Station.

Finished on Sun, 25 May 2014 10:42:08 -0700.   Rating 4/5.

Research fellowships available in Dublin

Two post-doctoral positions in smart cities now available at Trinity College Dublin.

Research Fellowships in Autonomic Service-Oriented Computing for Smart Cities

Applications are invited for two Postdoctoral Research Fellowships at Trinity College Dublin’s Distributed Systems Group to investigate the provision of a new service-oriented computing infrastructure that provides demand-based composition of software services interacting with a city-wide, dynamic network infrastructure. The project will investigate autonomic adaptation of services and infrastructure, ensuring resilient service provision within an integrated, city-wide system.

Applicants should have a Ph.D. in Computer Science, Computer Engineering, or a closely related discipline, and strong C++/C#/Java development skills. Experience with autonomic computing, service-oriented middleware, and/or smart city technologies is desirable, as are strong mathematical skills.

The project is supported by Science Foundation Ireland under the Principal Investigator programme from 2014 to 2018, and will be conducted in collaboration with Cork Institute of Technology, NUI Maynooth, the IBM Smarter Cities Research Centre, the Intel Intelligent Cities Lab, EMC2 Research Europe, and Arup. The position is tenable from September 2014.

Please apply by email, quoting “Smart Cities Fellowship” in the subject line. Applications should include a curriculum vitae in PDF format, giving full details of qualifications and experience, together with the names of two referees. The closing date for applications is 20 June 2014.

Trinity College is an equal opportunities employer.

Call for papers: new journal on self-adaptive systems

Papers are invited for the EAI Endorsed Transactions on Self-Adaptive Systems.

EAI Transactions on Self-Adaptive Systems

Editor-in-Chief: Dr. Emil Vassev, Lero, the Irish Software Engineering Research Centre, University of Limerick, Ireland


This journal seeks contributions from leading experts, in both research and practice, on self-adaptive systems. Its aim is to connect theory and practice, with the ultimate goal of bringing both science and industry closer to the so-called "autonomic culture" and to the successful realisation of self-adaptive systems. Both theoretical and applied contributions on the relevance and potential of engineering methods, approaches, and tools for self-adaptive systems are particularly welcome. This applies to application areas and technologies such as:

  • adaptable user interfaces
  • adaptable security and privacy
  • autonomic computing
  • dependable computing
  • embedded systems
  • genetic algorithms
  • knowledge representation and reasoning
  • machine learning
  • mobile ad hoc networks
  • mobile and autonomous robots
  • multi-agent systems
  • peer-to-peer applications
  • sensor networks
  • service-oriented architectures
  • ubiquitous computing

The same holds for many research fields that have already investigated aspects of self-adaptation from their own perspectives, such as fault-tolerant computing, distributed systems, biologically inspired computing, distributed artificial intelligence, integrated management, robotics, knowledge-based systems, machine learning, and control theory.


Manuscripts should present original work within the scope of the journal, must be submitted exclusively to this journal, must not have been published before, and must not be under consideration for publication elsewhere. Significantly extended and expanded versions of papers published in conference proceedings may be submitted, accompanied by a detailed description of the additions. Regular papers are limited to a maximum of 20 pages. Prepare and submit your manuscript by following the instructions on the journal's website.


Authors are not charged any publication fees, and their papers will be published online with Open Access. Open Access is a publishing model in which the electronic copy of an article is made freely available, with permission for sharing and redistribution. Currently, all articles published in the EAI Endorsed Transactions series are Open Access under the terms of the Creative Commons Attribution license, and are published in the European Union Digital Library.


Editorial board:

  • Christopher Rouff, Johns Hopkins Applied Physics Laboratory, USA
  • Danny Weyns, Linnaeus University, Sweden
  • Franco Zambonelli, UNIMORE, Italy
  • Genaina Rodrigues, University of Brasilia, Brazil
  • Giacomo Cabri, UNIMORE, Italy
  • Imrich Chlamtac, CREATE-NET Research Consortium, University of Trento, Italy
  • James Windsor, ESTEC, European Space Agency, Netherlands
  • Michael O'Neill, UCD, Ireland
  • Mike Hinchey, Lero, the Irish Software Engineering Research Centre, University of Limerick, Ireland
  • Richard Antony, University of Greenwich, UK
  • Simon Dobson, University of St Andrews, UK


Stanisław Lem


An atmospheric and episodic tale of first contact. The descriptions are wonderful, and the focus on character is far more detailed than is common even in first-class science fiction. Lem leaves most of the plot elements unfinished, which is somewhat dissatisfying at one level but leaves plenty of space for the reader's imagination to play.

Finished on Wed, 14 May 2014 00:00:00 -0700.   Rating 3/5.

Flash Boys: A Wall Street Revolt

Michael Lewis


Another of Michael Lewis' now-classic tales of Wall Street misadventure, this one focusing on the all-but-unseen - and even less understood - growth of high-frequency trading (HFT). The book follows the efforts of a small group of insiders to create a new stock exchange that is immune, both by policy and by design, to the arbitrage strategies HFT uses to game the conventional exchanges. A fascinating cast of characters crosses the pages, including Russian programmers, disaffected traders, and a network guy from Dublin - all working to make a system that was paying them well disappear, in the interests of fairness (and their own long-term financial gain).

At one level this book is less satisfying than The Big Short: Inside the Doomsday Machine, perhaps because the story still hasn't finished. The reader is left wanting to know the fate of the new IEX exchange, and the way the market changed as a result. For a techie, it's also unsatisfying that so much of the technology remains unexplored, although it would obviously have made the book inaccessible to anyone but a computer junkie: perhaps there's a much more technical follow-up that could be written.

Although the story mainly revolves around a case of market failure - high-frequency traders capturing huge value while taking no risk and providing no real advantage - it's also, in a strange way, an example of market success, when Goldman Sachs and other banks realise that their support of HFT is simply too risky for the gains they're capturing themselves. There's also an irony in the banks' worrying that, in the case of another crash, the banks will take the losses while the HFT firms walk away with the gains - which is exactly the reverse of the situation after the 2008 crash, when the public took up the banks' bad debts. Whether this is a sign of things to come is hard to decide, but it does show how even the most dysfunctional system can be changed when people recognise its dysfunction and are prepared to act to remedy it.

Finished on Sat, 03 May 2014 05:31:48 -0700.   Rating 4/5.

Let's teach everyone about big data

Demolishing the straw men of big data.

This post comes about from reading Tim Harford's opinion piece in the Financial Times, in which he offers a critique of "big data": the idea that we can perform all the science we want simply by collecting large datasets and then letting machine learning and other algorithms loose on them. Harford deploys a whole range of criticisms against this claim, all of which are perfectly valid: sampling bias will render a lot of datasets worthless; correlations will appear without causation; the search goes on without hypotheses to guide it, and so isn't well-founded in falsifiable predictions; and an investigator without a solid background in the science underlying the data will have no way to correct these errors.

The critique is, in other words, damning. The only problem is, that's not what most scientists with an interest in data-intensive research are claiming to do.

Let's consider the biggest data-driven project to date, the Large Hadron Collider's search for the Higgs boson. This project involved building a huge experiment that generated vast data volumes, which were trawled for the signature of Higgs interactions. The challenge was so great that the consortium had to develop new computer architectures, data storage, and triage techniques just to keep up with the avalanche of data.

None of this was, however, an "hypothesis-free" search through the data for correlation. On the contrary, the theory underlying the search for the Higgs made quite definite predictions as to what its signature should look like. Nonetheless, there would have been no way of confirming or refuting the correctness of those predictions without collecting the data volumes necessary to make the signal stand out from the noise.
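The statistical point here can be made concrete with a toy sketch (emphatically not the actual LHC analysis, whose pipeline is vastly more sophisticated): simulate a rare "signal" process sitting on top of a larger background, and watch the predicted excess emerge from the noise only as the number of events grows. The function name and rates below are invented for illustration.

```python
import math
import random

random.seed(42)

def excess_significance(n_events, background_rate=0.01, signal_rate=0.001):
    """Toy model: each event matches the predicted signature with
    probability background_rate + signal_rate.  Returns the naive
    significance (in standard deviations) of the observed excess over
    the expected background, treating the background as Poisson."""
    p_total = background_rate + signal_rate
    observed = sum(1 for _ in range(n_events) if random.random() < p_total)
    expected_bg = n_events * background_rate
    return (observed - expected_bg) / math.sqrt(expected_bg)

# The excess is lost in the noise at small sample sizes and becomes
# overwhelming at large ones: significance grows roughly as sqrt(n).
for n in (1_000, 100_000, 1_000_000):
    print(n, round(excess_significance(n), 2))
```

The same prediction (the signal rate) is either confirmed or refuted by the data, but only once enough data has been collected for the excess to stand clear of the background fluctuations.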

That's data-intensive research: using new data-driven techniques to confirm or refute hypotheses about the world. It gives us another suite of techniques to deploy, changing both the way we do science and the science that we do. It doesn't replace the other ways of doing science, any more than the introduction of any other technology necessarily invalidates what came before. Microscopes did not remove the need for, or value of, searching for or classifying new species: they just provided a new, complementary approach to both.

That's not to say that all the big data propositions are equally appropriate, and I'm certainly with Harford in the view that approaches like Google Flu are deeply and fundamentally flawed, over-hyped attempts to grab the limelight. Where he and I diverge is that Harford is worried that all data-driven research falls into this category, and that's clearly not true. He may be right that a lot of big data research is a corporate plot to re-direct science, but he's wrong to worry that all projects working with big data are similarly driven.

I've argued before that "data scientist" is a nonsense term, and I still think so. Data-driven research is just research, and needs the same skills of understanding and critical thinking. The fact that some companies and others with agendas are hijacking the term worries me a little, but in reality is no more significant than the New Age movement's hijacking of terms like "energy" and "quantum" -- and one doesn't stop doing physics because of that.

In fact, I think Harford's critique is a valuable and significant contribution to the debate precisely because it highlights the need for understanding beyond the data: it's essentially a call for scientists to use data-driven techniques only in the service of science, not as a replacement for it. An argument, in other words, for a broadly-based education in data-driven techniques for all scientists, and indeed all researchers, since the techniques are equally (if not more) applicable to social sciences and humanities. The new techniques open up new areas, and we have to understand their strengths and limitations, and use them to bring our subjects forwards -- not simply step away because we're afraid of their potential for misuse.

UPDATE 7 Apr 2014: An opinion piece in the New York Times agrees: "big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement." The number of statistical land mines is enormous, but the right approach is to be aware of them and make the general research community aware too, so we can use the data properly and to best effect.