The Sandman, Vol. 7: Brief Lives

Neil Gaiman (1993)

5/5. Finished Thursday 6 March, 2014.

(Originally published on Goodreads.)

How to Thrive in the Digital Age

Tom Chatfield (2012)

A short exploration of the state of the modern internet and of social experience online. In some ways the book is mis-named, in that it isn’t in any way prescriptive or suggestive of how one should thrive, but rather illustrates some of the issues one should consider in order to do so: privacy, time away, imaginary versus real experience, and the like. Definitely worth a read, and with an excellent bibliography pointing to further information.

3/5. Finished Tuesday 25 February, 2014.

(Originally published on Goodreads.)

The Simpsons and Their Mathematical Secrets

Simon Singh (2013)

A good book for relaxing on a plane. The book is structured around highlighting and explaining some of the hidden jokes in The Simpsons that derive from the writers’ unexpectedly mathematical and scientific backgrounds. The book is actually broader than its title suggests, as it also covers The Simpsons’ sister show (and my own favourite), Futurama.

There are some excellent explanations of some excellent gags. Without giving too much away, perhaps the most surreal moment is where a plot device for an episode of Futurama requires the writers to develop a new theorem in order to get the storyline to work out: the only known case of a sitcom giving rise to new mathematics.

The most reflective chapter is the “Eπlogue”, where the author explores with the writers whether they have any regrets about leaving their mathematical careers behind for comedy. David X. Cohen, who became the main writer on Futurama, muses on whether he’s had more influence in spreading science as a writer than he would have had as a researcher. I think this is a noble observation, and one that can be made about other “entertainers in science” such as Jorge Cham of PhD Comics: no matter how successful they might have been as scientists, their work has inspired a generation.

3/5. Finished Thursday 20 February, 2014.

(Originally published on Goodreads.)

No-cost journals

What have scientific journals ever done for us? And can we get the benefits without the access issues?

“Open access” is a big thing in scientific publishing these days. The UK research councils, who fund a large fraction of the UK’s academic research, have decided that papers arising from their research have to be made available to any interested reader at no charge. The argument is that publicly-funded research results are a public good, and that other researchers should not be impeded in building on them. Since science progresses by researchers building on each others’ work, there is plenty of justification for this view.

You would think that open access wouldn’t be a problem in these days of personal web pages and Google. However, when publishing a paper in a major journal, the authors typically sign away their copyright to the journal’s publisher, meaning that they can’t legally make the paper freely available. The publishers in turn lock the papers away, either in dead-tree form (which they then sell to university libraries at exorbitant cost) or behind paywalls requiring individual or institutional subscription. The journals that do this are often the most important and prestigious venues, places where you want your work to appear, and scientists aren’t going to stop publishing in them any time soon.

To address open access, some journals have started charging open access fees, whereby an author can pay to have their article made open access (i.e., to appear outside the paywall). Anyone funded by a UK research council basically has to pay these fees to be compliant with their funding. Effectively, though, this means that institutions typically pay twice for publication: they pay the open access fee for individual articles, but still need to subscribe to the paid-for journal to get access to other papers.
There are also open access journals that charge a publication fee for each accepted paper, but these are still quite new and, with some exceptions (most notably PLoS), fairly low-grade.

These issues got me thinking: what do journals actually give us? And could we get the benefits using internet technology, without the costs? Historically journals served as the primary means of academic communication, but clearly that time has passed. Nowadays journals give us three things:

  1. An editor and editorial board acting as guardians of the quality of the papers accepted. As a general rule, you never publish in a journal where you don’t recognise any of the names on the editorial board: you want a journal managed by people known in your field.
  2. A brand that gives readers confidence that this journal will contain significant work that justifies the time spent reading it.
  3. Persistent storage of articles to give confidence that they can be found, referenced, and accessed in the future.
Clearly (2) is a function of (1), in the sense that the brand is built by the consistency the editorial board demonstrates, and it typically takes time for a new journal to develop. As to (3), persistent storage isn’t much of a problem these days, but finding a copy of a paper could well be. In building our no-cost journal, we therefore need to replicate (1) and (3) in order to build (2).

Here’s a possible workflow. We establish the journal’s web site: the St Andrews Journal of Interesting Things, perhaps. Like most journals, this allows prospective authors to submit their manuscripts, which are passed to the editorial board for review. Academics typically serve on editorial boards for free. They are self-organising, in the sense that the editor-in-chief (EIC) appoints a set of trusted lieutenants consisting of his friends, colleagues, and people well-known in the field of the journal. Doing this well is a major skill, but an individual one, dependent on the selection of a good EIC. The editorial board are assigned incoming papers, which they ask their friends and colleagues to review and comment on. Again, the selection of reviewers is critical to the quality of papers, as the reviewers are expected to assess the work presented and to suggest changes (or reject the paper completely). Academics typically review for free too, so you’ll notice that, for a typical journal, the total cost so far has been running the web server that manages the editorial process.

Papers typically go through one or two rounds of revision before being accepted and published. The problem here is that we need to show readers that the paper has passed through the quality assurance of review. Anyone can put an article on the web, but journals guarantee that the work has been looked at and approved by the authors’ scientific peers. (This doesn’t guarantee that journal-published work is correct, merely that it’s sufficiently convincing to a suitably qualified set of reviewers.
There is always a steady stream of corrections, retractions, and withdrawals as flaws are found in work post-publication.)

In a paid-for journal, the guarantee comes from printing on paper: you can check whether a paper purporting to appear in a journal actually does so by checking the appropriate volume. This of course implies printing and distribution, which publishers claim is the source of their need for fees. For the St Andrews Journal of Interesting Things we want to avoid this cost.

Actually, this is technically straightforward, in one of two ways. The complicated way is to create a machine-readable metadata file containing the paper’s title, authors, abstract, journal reference, associate editor in charge, and maybe some other details. We then bind this file cryptographically to the final (“camera-ready”) manuscript. The cryptography guarantees two things: that the binding was done by the journal editor, and that neither file has been changed since being bound together. Anyone downloading the file bundle can then check that the metadata and manuscript match, and therefore knows that the paper is the one “published”. (The simple way is to add a header to the manuscript text and then cryptographically sign the resulting file. This is trivially accomplished using a tool like Adobe Acrobat Pro, but is less attractive than the metadata approach because the header isn’t machine-readable, making it harder to index the paper.) There is no cost associated with either of these approaches.

We can then give the signed file back to the authors and tell them to place it on any web server that Google will index. This will let anyone searching for the file find it: that’s what search engines do. If we want to be really thorough, we would keep track of where the files are stored, and/or perform regular searches to locate them (easy enough given machine-readable metadata), and maintain a journal web page listing the published papers and linking to them.
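The metadata-binding step really is only a few lines of code. Here is a minimal sketch in Python, using an HMAC as a stand-in for the editor’s signature; a real journal would use a public-key signature, so that anyone could verify a bundle without holding the editor’s secret key, and all the field names here are invented for illustration:

```python
import hashlib
import hmac
import json

# Stand-in for the editor's signing key. In practice this would be the
# private half of a public/private key pair, so readers could verify
# bundles without access to any secret.
EDITOR_KEY = b"journal-editor-secret"

def bind(metadata, manuscript):
    """Bind a metadata record to a manuscript and sign the pair."""
    bundle = dict(metadata)
    # Tie the metadata to this exact manuscript via its hash.
    bundle["manuscript_sha256"] = hashlib.sha256(manuscript).hexdigest()
    # Sign a canonical serialisation of the combined record.
    payload = json.dumps(bundle, sort_keys=True).encode()
    bundle["signature"] = hmac.new(EDITOR_KEY, payload, hashlib.sha256).hexdigest()
    return bundle

def verify(bundle, manuscript):
    """Check that neither metadata nor manuscript has changed since binding."""
    claimed = {k: v for k, v in bundle.items() if k != "signature"}
    if claimed.get("manuscript_sha256") != hashlib.sha256(manuscript).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(EDITOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle.get("signature", ""))

paper = b"%PDF-1.4 ... the camera-ready manuscript ..."
meta = {"title": "An Interesting Thing",
        "authors": ["A. Author"],
        "journal": "St Andrews Journal of Interesting Things"}
signed = bind(meta, paper)
assert verify(signed, paper)                    # untampered bundle checks out
assert not verify(signed, paper + b"tampered")  # any change breaks the binding
```

Changing either half breaks the binding: editing the manuscript invalidates the stored hash, and editing the metadata invalidates the signature.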
(Total cost: one web page.) To go further, we can mint Digital Object Identifiers that resolve through our web server to the papers’ locations. (Total cost: a small database and a single CGI script on our web server.) We’ve now recreated the publication side of the journal industry, essentially for free.

This leaves the branding issue. There are two sides to establishing a brand: quality and visibility. The quality of the product, as mentioned above, relies on the selection of editorial and review teams and their willingness to serve at no cost, as is normal in academic publishing. The visibility issue is harder to crack, but could be addressed using the web: by viral marketing, by appearances at conferences that editorial board members attend, by word of mouth through the research community, and perhaps even by advertising in the paid-for journals themselves. One great thing about the web and social media is that word will get out: after that, it’s a matter of the quality of the papers accepted and the willingness of authors to contribute.

I’m not actually planning on setting up a new journal. The point is that 21st-century research doesn’t need the friction and costs imposed by journals whose main editorial services are provided free by their consumers anyway. We should be able to do away with these costs without sacrificing the quality of the material we read or the reliance we place upon it.
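As a coda to the journal sketch: the “small database and a single CGI script” for identifier resolution really is about that much code. A minimal sketch in Python (the identifiers and URLs are made up, and a real deployment would register proper DOIs with a registration agency):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# The "small database": identifier -> current location of the signed bundle.
# Both the identifiers and the URLs here are invented for illustration.
PAPERS = {
    "sajit/2014/001": "https://example.org/~aauthor/interesting-thing.pdf",
}

def resolve(identifier):
    """Look up the current location of a published paper, or None."""
    return PAPERS.get(identifier)

class Resolver(BaseHTTPRequestHandler):
    """Redirect /<identifier> to wherever the paper currently lives."""
    def do_GET(self):
        url = resolve(self.path.lstrip("/"))
        if url is None:
            self.send_error(404, "unknown identifier")
        else:
            self.send_response(302)            # temporary redirect: the paper
            self.send_header("Location", url)  # may move; the identifier doesn't
            self.end_headers()

# To run the resolver:
# HTTPServer(("", 8000), Resolver).serve_forever()
```

The identifier stays fixed while the paper’s location can change: updating the lookup table is all that’s needed when an author moves their copy.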

Mao’s Great Famine: The History of China’s Most Devastating Catastrophe, 1958-62

Frank Dikötter (2010)

A hard book to read, detailing the effects of the Great Leap Forward on the people of China, especially in the countryside. The parallels with other Communist states are striking: the bureaucracy, the persistent raising of production targets, and the ubiquitous lying about how those targets had been exceeded everywhere despite the obvious facts on the ground. But there are unique features too, and two stand out in particular. Firstly, the use of particular countries as targets to exceed in particular commodities (e.g., beating Britain in iron production) for no readily apparent reason. Secondly, the fact that, amid the desire to increase food production and the famine that resulted as this campaign was mis-managed, several other campaigns were instituted, such as backyard iron smelting and water conservation, that all interfered with one another so as to guarantee their mutual failure. It’s hard to place yourself in the mindset of a government able to distance itself so completely from reality as to imagine this approach could work even in theory, and then to ignore the facts so comprehensively.

Dikötter’s book on the takeover of power (The Tragedy of Liberation: A History of the Chinese Revolution 1945-1957) forms a trilogy with this volume and his next work, on the Cultural Revolution. When finished, the three will be indispensable as a guide to this period.

5/5. Finished Sunday 9 February, 2014.

(Originally published on Goodreads.)