Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It

Erica Thompson (2022)

A thoughtful look at modelling by an experienced climate modeller.

What are models for? The most common answer would be “to predict the future behaviour of some system,” but Thompson takes a far more subtle line: that the most important models often fail to be predictive in any real sense. Much of this is down to problems of validation, especially in climate models, for which we have no experience of the world the models are trying to predict.

An even more subtle mistake is regarding all models as “cameras” that simply observe the world. That’s true for the more abstract kinds of modelling, where one is trying to understand the possible behaviours of systems in general without tying them to specific circumstances. But the models with which most people are familiar act more like “engines” that can perturb the very system they purport simply to observe, by being used as drivers for policy. Climate and epidemic models seek to warn as well as to predict and understand, but this exacerbates the problems of validation: if the model’s predictions don’t come to pass, perhaps this is because policy-makers took corrective actions in response, or perhaps because they didn’t intervene effectively enough. This isn’t a reason to give up on modelling altogether: how else are we to understand complex systems, and how else are we to respond rationally to them? But it does mean that the notion of “following the science” is problematic.

Thompson also wrestles with the problem of groupthink amongst modellers, who often share a common overlapping background. I agree this is a problem, but the idea that we can easily increase diversity in the community seems flawed to me. Modellers share a scientific viewpoint and a belief in modelling, and no-one who doesn’t share these will ever be able to engage effectively with the models or their arguments. Perhaps it’s enough that scientists are always advisors and never decision-makers, leaving politicians to deal with the integration of different choices and values – although that split isn’t always appreciated by the public, and is often (as in the covid-19 pandemic) deliberately blurred to allow less-trusted politicians to draw credibility from more-trusted scientists and doctors.

Overall I think this is a lucid and valiant attempt to summarise and explore the benefits and limitations of models, and of science in general, when they impact directly on the wider world. It deserves to be widely read in the scientific community so that we can better understand our place in policies that we often, unavoidably, have to influence.

4/5. Finished Friday 22 March, 2024.

(Originally published on Goodreads.)

How Lisp is designing nanotechnology

How Lisp is designing nanotechnology (video, 52 minutes).

An interview on the Developer Voices podcast with Prof Christian Schafmeister on designing enzymes using a custom dialect of Lisp to control computational chemistry libraries.

He initially started from an idea based on CAD applications (written in Smalltalk), but rapidly realised there were too many possible components and combinations to work with graphically and so started treating it as a language-design problem – and designed a custom Lisp based on ECL. Lisp provides simplicity and efficiency, as well as being a “forever” language in which software keeps working over the long term.

A micro-manual for Lisp: Not the whole truth

John McCarthy. A Micro-Manual for Lisp: Not the Whole Truth. ACM SIGPLAN Notices 13, pp.215–216. 1978.

A “manual” in the sense of providing a complete implementation of Lisp – in Lisp itself.

Is that useful? A semantics expert would say “no”, since such a model has no real explanatory power: the recursion doesn’t bottom out to define the concepts. However, a programming language expert might disagree and observe that this is meta-linguistic abstraction, the use of one high-level language to capture the definition of another. The “host” language provides a higher level of abstraction than is usual when writing interpreters and compilers, so it becomes easier to experiment with different forms of “target” language as there’s less low-level work to do. This benefit exists even when host and target are the same: essentially the target is bootstrapped and can then be adjusted while running within an earlier version of itself.

As far as I know this is the first example of meta-linguistic abstraction in the computer science literature. The idea was popularised by SICP (using Scheme), and forms the basis for a lot of modern bootstrapped compilers.
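To make the idea concrete, here is a minimal sketch of meta-linguistic abstraction – in Python rather than Lisp, and far smaller than McCarthy’s micro-manual, so purely illustrative. The host language supplies values, closures, and recursion, which is why the interpreter for the target language fits in a couple of dozen lines:

```python
def evaluate(expr, env):
    """Evaluate a tiny Lisp-like expression written as nested Python lists."""
    if isinstance(expr, str):           # a symbol: look it up in the environment
        return env[expr]
    if not isinstance(expr, list):      # a self-evaluating atom (e.g. a number)
        return expr
    op, *args = expr
    if op == "quote":                   # (quote x) returns x unevaluated
        return args[0]
    if op == "if":                      # (if cond then alt)
        cond, then, alt = args
        return evaluate(then if evaluate(cond, env) else alt, env)
    if op == "lambda":                  # (lambda (params) body) -> host closure
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)              # otherwise: function application
    return fn(*(evaluate(a, env) for a in args))

env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
square = evaluate(["lambda", ["x"], ["*", "x", "x"]], env)
print(square(7))  # prints 49
```

Because the host does the heavy lifting, experimenting with the target language – adding a new special form, say – is just a matter of adding a branch.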

The Lisp machine

Richard Greenblatt. The Lisp Machine. Working paper 79. MIT AI Laboratory. 1974.

A description of the architecture later built and sold by LMI.

A visionary description of a machine to run Lisp at a “non-prohibitive cost” of $70,000 per system. (The web tells me that this is the equivalent of $440,000 in 2024.) A completely integrated system based on a Cons processor and using a PDP-11 as a console. A relocating, compacting, garbage collector with invisible forwarding pointers. Non-linear spaghetti stacks to support advanced programming constructs.

One of the many notable facets of this design is its proposal (the “frame pushdown list”) to avoid excessive consing and garbage-collection overhead by maintaining free lists of blocks of particular sizes. It uses this approach to manage the allocation of call-stack frames; modern Python implementations use a similar scheme throughout their allocators to avoid the heap fragmentation that comes from allocating and de-allocating blocks of different sizes. (Lisp has this property generally, of course, with its use of fixed-size cons cells.)
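The block-recycling idea can be sketched in a few lines – this is a hypothetical illustration in Python, not Greenblatt’s design. Keeping a free list of same-sized blocks means frame creation rarely touches the underlying allocator and never fragments the heap:

```python
class FramePool:
    """Recycle fixed-size blocks via a free list instead of reallocating."""

    def __init__(self, frame_size):
        self.frame_size = frame_size
        self.free = []                        # released frames, all one size

    def allocate(self):
        if self.free:
            return self.free.pop()            # reuse a recycled block
        return [None] * self.frame_size       # otherwise allocate a fresh one

    def release(self, frame):
        for i in range(len(frame)):           # clear stale references
            frame[i] = None
        self.free.append(frame)

pool = FramePool(frame_size=8)
a = pool.allocate()
pool.release(a)
b = pool.allocate()
assert b is a  # the same block came back: recycled, not reallocated
```

Because every block in a given pool has the same size, any freed block satisfies any later request, which is exactly the property fixed-size cons cells give Lisp for free.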

It’s also fascinating to see that, even at this early stage, hardware design was being seen as an exercise in refinement to maintain software-level abstractions even as features were being pushed into hardware – while still providing optimised implementations that were invisible to the programmer.

Lisp for the web

Adam Tornhill. Lisp for the Web. Leanpub. 2015.

A short, practical guide to building a dynamic web site entirely in Common Lisp. It makes extensive use of several libraries, including the intriguingly-named hunchentoot web server (named after a never-staged musical written by Frank Zappa), the CL-WHO HTML generator, and – perhaps most interesting – the Parenscript Lisp-to-Javascript compiler.

One gain from using Lisp is the ability to wrap up standard constructions like page templates as macros. Other than this, it’s hard to assess whether the gains are all that significant. One interesting point is that the whole development occurs in Lisp, so there’s no cognitive dissonance between writing the back-end and the front-end: essentially the same argument as for using node.js as a back-end framework. While Parenscript isn’t a fully-featured Lisp-in-the-browser, it too can benefit from macros and other Lisp features. There’s a throwaway comment about linking it to jQuery, although that isn’t demonstrated in practice: it feels intriguing, though, especially if there’s a way to represent jQuery functionality idiomatically in Lisp.
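For a flavour of the page-template pattern – sketched here in Python with hypothetical names, where the book uses a Common Lisp macro over CL-WHO – a wrapper can factor out the standard page skeleton so each handler produces only its own content:

```python
def standard_page(title):
    """Wrap a page-body function in the site's standard HTML skeleton."""
    def wrap(body_fn):
        def page(*args, **kwargs):
            body = body_fn(*args, **kwargs)
            return (f"<html><head><title>{title}</title></head>"
                    f"<body>{body}</body></html>")
        return page
    return wrap

@standard_page("Retro Games")
def home():
    return "<h1>Top games</h1>"

print(home())  # the body arrives wrapped in the full page skeleton
```

A decorator does this by wrapping functions at definition time, whereas a Lisp macro expands the template at compile time, but the factoring-out of boilerplate is the same.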