Chief scientific advisor for Ireland

I'm a little perplexed by the direction of the current discussion over Ireland's chief scientific advisor.

The need for scientific advice, as a driver of (and sanity check upon) evidence-based policy formation, has arguably never been greater. Certainly, addressing many of the challenges we face in the 21st century depends, directly or indirectly, on a sound understanding of both the science and its limitations. This is why it's attractive for governments to have dedicated, independent scientific advisors.

We need first to be clear what a good chief scientific advisor isn't, and that is an oracle. The chief scientist will presumably be a trained, nationally and internationally respected practitioner who brings to the job experience in the practice of science but also in its wider analysis and impact. In many respects these are the qualities one looks for in a senior academic -- a full professor -- so it's unsurprising that chief scientists are often current or former (emeritus) professors at leading institutions. They will not be expert on all the areas of science required to address any particular policy question -- indeed, they might never be expert in the details pertaining to any question asked -- but they can act as a gateway to those who are expert, and collate and contrast the possibly conflicting views of different experts who might be consulted in each case. They provide a bridge in this sense between the world of research and the world of policy, and will need to be able to explain clearly the basis and consequences of what the science is saying.

Ireland has recently abandoned having a chief scientist as an independent role, and has instead elected to combine it with the role of Director of Science Foundation Ireland, the main advanced research funding agency. There are several stated reasons for this, most centring on the current resource constraints facing government spending.

I don't think this structure is really consistent with the role of chief scientist, nor with the principles of good governance.

To avoid any misunderstandings, let's be clear that this has nothing to do with the individuals involved: the current or former directors of SFI would be eminently suited to be a chief scientist in their own right. However, having the SFI director fill this additional role ex officio seems not to be best practice. The concern must be that the combined role cannot be independent by its very nature, in that the scientific direction of SFI may wholly or in part be involved in the policy decisions being made as a consequence of advice received. If this occurs, the chief scientist is then recommending actions for which the director must take responsibility, and the perception of a confusion of interest is inevitable if these roles are filled by the same individual. To repeat, the integrity of the office-holders is not the issue, but rather a governance structure that conflates the two roles of execution and oversight.

If resource constraints are really the issue, one might say that the chief scientist does not need to be independently employed to be independent in the appropriate sense. The chief scientific advisory roles in the UK, for example, are typically filled by academics on part-time release from their host institutions. They collate and offer scientific advice, possibly made better by the fact that they remain active researchers and so remain current on both the science and its practice, rather than being entirely relocated into the public service. (The chief scientific advisor for Scotland, for example, remains a computer science researcher at the University of Glasgow in addition to her advisory role.) The risk of confusion is significantly less in this structure, because a single academic in a single institution does not exert executive control or influence over wider funding decisions. Moreover the individual remains employed by (and in the career and pension structure of) their host university and is bought out for part of their time, which reduces the costs significantly. It also means that one can adjust the time commitment as required.

When the post was first created I thought it unusual that the Irish chief scientist's role was full-time: requiring the (full-time) SFI director to find time in addition for the (presumably also full-time) duties of chief scientific advisor is expecting a lot.

It is of course vital for the government to be getting good science advice, so it's good that the chief scientist role is being kept in some form. But I think it would be preferable to think about the governance structure a little more, to avoid any possible perception of confusion, whether or not such confusion exists in practice.

Thinking, Fast and Slow

Daniel Kahneman

2011


I'm slightly conflicted about this book (and I'm writing this review without having quite finished it). Some of the insights are quite astounding, and the basic premise -- that people are poor decision-makers, and that this basic fact needs to be reflected in all our social, economic and scientific systems -- is one that's well worth our taking on board.

As a scientist, I spend time working within and developing systems of enquiry. There's a temptation to think that good people working with goodwill will generate accurate, honest and balanced results. The message from this book is that this isn't at all correct: ability and integrity are insufficient, and we need to design our systems with this in mind. What this means for practical scientific enquiry is unclear. For example, is peer review the best system for assessing research results? How about for research funding proposals? Is bibliometrics a better rear-view mirror of quality than human assessment? All these are valid questions thrown up by a more behaviourist perspective on human enterprise.

However, I did feel that the book itself is very laboured. In fact I feel the same way about it as I felt about books like The End of History and the Last Man and The Tipping Point: How Little Things Can Make a Big Difference: that they're basically essays that have been extended to book form, and struggle to maintain that length. A more concise presentation would have omitted a lot of supporting evidence, obviously, but whether that would weaken the appeal of a popular science book is questionable, and more exploration of the consequences and potential mitigations of the science would perhaps have made for a better read.

It's also slightly annoying that only American universities seem worthy of being named: research done elsewhere is attributed to "a British university" or "a German university". I know the author is at a US institution, but science is an international endeavour that deserves better treatment.

Finished on Wed, 14 Nov 2012 12:33:45 -0800.   Rating 3/5.

Gödel, Escher, Bach: an Eternal Golden Braid

Douglas R. Hofstadter

1979


This is a book that's either loved or hated. It has a cult following amongst mathematicians and computer scientists: Hofstadter himself is a respected computer scientist with an interest in biological modelling.

The "plot", such as there is, revolves around explaining Godel's incompleteness theorem: the idea that any formal mathematical system has inherent limitations and inconsistencies. Godel incompleteness is to mathematics what Turing computability is to computing: a bound on ambition, a constraint that prevents some problems from being explored. The former perhaps has less significance to everyday life than the latter, but still remains one of the cornerstone discoveries of 20th century science.

Hofstadter also homes in on the familiar discussions about art and mathematics, the appeal that recursive, self-reflecting art works (like those of Escher and Bach) have for the mathematically inclined. The weaving of these themes within the book is quite astonishing, and at times illuminating. However, it does mean that this is not a book one can dip into: it requires concerted and prolonged effort, which it only partially repays.

The scope is both impressively wide and restrictively narrow, with the reader emerging with an understanding of a problem whose everyday relevance can be questioned, but also with an exposure to a wide range of aesthetic and scientific problems that he might otherwise never consider.

Finished on Sun, 11 Nov 2012 01:56:31 -0800.   Rating 3/5.

Excession (Culture, #5)

Iain M. Banks

1996


Without a doubt my favourite book (so far) in the Culture series, a book I can read again and again. The premise that one can rationalise oneself into evil acts is only made stronger by having it apply to Minds as well as people!

As usual Iain Banks manages to fill all the characters, places and species in this book with convincing detail and lovingly-crafted back-stories. He also manages his trick of providing paragraphs-long digressions that add immensely to the texture of the writing without getting in the way of the plot.

Finished on Sat, 10 Nov 2012 07:05:03 -0800.   Rating 5/5.

Broken Harbour (Dublin Murder Squad, #4)

Tana French

2012


This is perhaps the first real piece of post-Celtic Tiger fiction, and is certainly a strong start. The descriptive text is absolutely lovely, and brings out a world familiar to anyone who's visited a town in rural Ireland since the crash.

The plot is well-paced and twisted, and the characters are all sympathetic and beautifully observed. But the landscape is a major character, especially the ghost estate of Brianstown but also the small apartments and suburban houses of the other characters. It's a novel that really feels Irish in terms of place as well as people. The use of the internet feels real too, unlike so many books that try (and fail) to be both socially and technically correct about computers and crime.

My only real criticism is that some poor copy-editing let through some jarring non-Irish expressions that should really have been caught: "muffler" instead of "silencer", "airplane" instead of "plane", for example. These jump out precisely because the text is so flowing and crisp.

Finished on Wed, 31 Oct 2012 00:00:00 -0700.   Rating 5/5.

Parser-centric grammar complexity

We usually think about formal language grammars in terms of the complexity of the language, but it's also possible -- and potentially more useful, for a compiler-writer or other user -- to think about them from the perspective of the parser, which is slightly different.

I'm teaching grammars and parsing to a second-year class at the moment, which involves getting to grips with the ideas of formal languages, their representation, and using them to recognise ("parse") texts that conform to their rules. Simultaneously I'm looking at programming language grammars and how they can be simplified. I've realised that these two activities don't quite line up the way I expected.

Formal languages are important for computer scientists, as they let us describe the texts allowed for valid programs, data files and the like. The understanding of formal languages derives from linguistics, chiefly the work of Noam Chomsky in recognising the hierarchy of languages induced by very small changes in the capabilities of the underlying grammar: regular languages, context-free and context-sensitive, and so forth. Regular and context-free languages have proven to be the most useful in practice, with everything else being a bit too complicated for most uses.

The typical use of grammars in (for example) compilers is that we define a syntax for the language we're trying to recognise (for example Java) using a (typically context-free) grammar. The grammar states precisely what constitutes a syntactically correct Java program and, conversely, rejects syntactically incorrect programs. This parsing process is typically purely syntactic, and says nothing about whether the program makes sense semantically. The result of parsing is usually a parse tree, a more computer-friendly form of the program that's then used to generate code after some further analysis and optimisation.

In this process we start from the grammar, which is generally quite complex: Java is a complicated language to learn. It's important to realise that many computer languages aren't like this at all: they're a lot simpler, and consequently can have parsers that are extremely simple and compact, and that don't need much in the way of a formal grammar: you can build them from scratch, which would be pretty much impossible for a Java parser.

This leads to a different view of the complexity of languages, based on the difficulty of writing a parser for them -- which doesn't quite track the Chomsky hierarchy. Instead you end up with something like this:

Isolated. In languages like Forth, programs are composed of symbols separated by whitespace and the language says nothing about the ways in which they can be put together. Put another way, each symbol is recognised and responded to on its own rather than as part of a larger construction. In Java terms, that's like giving meaning to the { symbol independent of whether it's part of a class definition or a for loop.

It's easy to see how simple this is for the compiler-writer: we simply look up each symbol extracted from the input and execute or compile it, without reference to anything else. Clearly this only works to some extent: you only expect to see an else branch somewhere after a corresponding if statement, for example. In Forth this isn't a matter of grammar, but rather of compilation: there's an extra-grammatical mechanism for testing the sanity of the control constructs as part of their execution. While this may sound strange (and is, really) it means that it's easy to extend the syntax of the language -- because it doesn't really have one. Adding new control constructs means adding symbols to be parsed and defining the appropriate control structure to sanity-check their usage. The point is that we can simplify the parser well below the level that might be considered normal and still get a usable system.
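As a concrete (if toy) sketch of the isolated model -- in Python rather than real Forth, with a three-word dictionary invented purely for illustration -- the whole "parser" is a split on whitespace followed by a dictionary lookup, with no grammatical context at all:

    # The "isolated" model: parsing is just splitting on whitespace, then
    # looking up and executing each symbol on its own, with no grammar.
    def run(source, stack=None):
        stack = [] if stack is None else stack
        words = {
            "+":   lambda s: s.append(s.pop() + s.pop()),
            "dup": lambda s: s.append(s[-1]),
            ".":   lambda s: print(s.pop()),
        }
        for symbol in source.split():
            if symbol in words:
                words[symbol](stack)        # execute the word
            else:
                stack.append(int(symbol))   # anything else is a number literal
        return stack

    run("2 3 + dup .")   # prints 5; the parser never sees more than one symbol

Adding a new construct to such a language is just a matter of adding an entry to the dictionary: no grammar has to change, because there isn't one.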

Incremental. The next step up is to allow the execution of a symbol to take control of the input stream and define for itself what it consumes. A good example of this comes from Forth again, for parsing strings. The Forth word introducing a literal string is S", as in S" hello world". What happens with the hello world" part of the text isn't defined by a grammar, though: it's defined by S", which takes control of the parsing process and consumes the text it wants before returning control to the parser. (In this case it consumes all characters up to the closing quote.) Again this means we can define new language constructs, simply by having words that do their own parsing.
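A sketch of how this looks from the implementation side -- again Python standing in for Forth, with the details invented for illustration: the main loop hands out whitespace-delimited symbols, but the entry for S" grabs the underlying character stream and consumes everything up to the closing quote before handing control back.

    # The "incremental" model: a parsing word takes over the input stream.
    def make_stream(text):
        pos = 0
        def next_symbol():                  # ordinary whitespace-delimited symbols
            nonlocal pos
            while pos < len(text) and text[pos].isspace():
                pos += 1
            start = pos
            while pos < len(text) and not text[pos].isspace():
                pos += 1
            return text[start:pos] or None
        def read_until(terminator):         # used by words that parse for themselves
            nonlocal pos
            start = pos
            while pos < len(text) and text[pos] != terminator:
                pos += 1
            chunk = text[start:pos]
            pos += 1                        # consume the terminator too
            return chunk
        return next_symbol, read_until

    def run(text):
        next_symbol, read_until = make_stream(text)
        stack = []
        while (symbol := next_symbol()) is not None:
            if symbol == 'S"':
                stack.append(read_until('"').lstrip())   # S" does its own parsing
            elif symbol == ".":
                print(stack.pop())
            else:
                stack.append(symbol)
        return stack

    run('S" hello world" .')   # prints: hello world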

Interestingly enough, these parser words could themselves use a grammar that's different from that of the surrounding language -- possibly using a standard parser generator. The main parser takes a back seat while the parser word does its job, allowing for arbitrary extension of the syntax of the language. The disadvantage of course is that there's no central definition of what constitutes "a program" in the language -- but that's an advantage too, certainly for experimentation and extension. It's the ideas of dynamic languages extended to syntax, in a way.

Structured. Part of the subtlety of defining grammars is avoiding ambiguity, making sure that every program can be parsed in a well-defined and unique way. The simplest way to avoid ambiguity is to make everything structured and standard. Lisp and Scheme are the best illustrations of this. Every expression in the language takes the same form: an atom, or a list whose elements may be atoms or other lists. Lists are enclosed in brackets, so it's always possible to find the scope of any particular construction. It's extremely easy to write a parser for such a language, and the "grammar" fits onto about three lines of description.
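To make the "three lines of description" point concrete, here is a sketch of a complete reader for such a language in Python (the function names are mine, not from any particular Lisp): every expression is an atom or a bracketed list of expressions, and that really is the whole syntax.

    # A complete reader for an s-expression language: an expression is an
    # atom, or a bracketed list of expressions.
    def parse(tokens):
        token = tokens.pop(0)
        if token == "(":
            expr = []
            while tokens[0] != ")":
                expr.append(parse(tokens))
            tokens.pop(0)               # discard the closing bracket
            return expr
        return token                    # an atom

    def read(text):
        return parse(text.replace("(", " ( ").replace(")", " ) ").split())

    read("(define (square x) (* x x))")
    # => ['define', ['square', 'x'], ['*', 'x', 'x']]

Everything else -- special forms, macros, evaluation -- then happens on the resulting trees rather than in the parser.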

Interestingly, this sort of language is also highly extensible, as all constructs look the same. Adding a new control construct is straightforward as long as it follows the model, and again can be done extra-grammatically without defining new elements to the parser. One is more constrained than with the isolated or incremental models, but this constraint means that there's also more scope for controlled expansion. Scheme, for example, provides a macro facility that allows new syntactic forms, within the scope and style of the overall parser, that nevertheless behave differently to "normal" constructs: they can capture and manipulate fragments of program text and re-combine them into new structures using quasi-quoting and other mechanisms. One can provide these facilities quite uniformly without difficulty, and without explicit syntax. (This is even true for quasi-quoting, although there's usually some syntactic sugar to make it more usable.) The results will always "look like Lisp", even though they might behave rather differently: again, we limit the scope of what is dealt with grammatically and what is dealt with programmatically, and get results that are somewhat different to those we would get with the standard route to compiler construction.
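Once programs are just nested lists, as produced by the reader sketched above, a macro is nothing more exotic than a function from one tree to another. As a hedged illustration -- the expansion mirrors Scheme's unless form, but the Python representation is mine:

    # A "macro" as a tree-to-tree rewrite:
    #   (unless c body ...)  =>  (if (not c) (begin body ...))
    def expand_unless(expr):
        _, condition, *body = expr
        return ["if", ["not", condition], ["begin", *body]]

    expand_unless(["unless", ["raining?"], ["go-outside"]])
    # => ['if', ['not', ['raining?']], ['begin', ['go-outside']]]

The result still "looks like Lisp" -- it's just another nested list -- which is exactly the uniformity described above.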

This isn't to say that we should give up on grammars and go back to more primitive definitions of languages, but just to observe that grammars evolved to suit a particular purpose that needs to be continually checked for relevance. Most programmers find Java (for example) easier to read than Scheme, despite the latter's more straightforward and regular syntax: simplicity is in the eye of the beholder, not the parser. But a formal grammar may not be the best solution in situations where we want variable, loose and extensible syntax for whatever reason, so it's as well to be aware of the alternatives that make the problem one of programming rather than of parsing.