The Emergence of Numerical Weather Prediction: Richardson’s Dream

Peter Lynch (2006)

A very technical examination of the world’s first numerical weather “prediction” – although in fact it was really a “postdiction”, taking detailed data and using it to compute a scenario that could be compared against a known ground truth. It was an incredible achievement anyway, performing manually and at low resolution the calculations now routinely performed by computers.

Richardson was the person who saw that this would be possible, realising that the physics and mathematics could be solved even though the computational capabilities didn’t exist. In this he foresaw the emergence of the modern power of data, where the existence of more and better data transforms both the way we do science and the science that we do. It’s something that should put him alongside Turing and von Neumann as visionaries of what computation could achieve.

4/5. Finished Sunday 21 April, 2024.

(Originally published on Goodreads.)

TIL: Cognitohazards

Could social media posts be actively damaging to our mental health? – literally, not just figuratively? That’s the premise of a TechScape article in The Guardian, which draws on both science fiction and psychological research.

In Neal Stephenson’s “Snow Crash” there is a plot device of an image in a metaverse that, when viewed, crashes the viewer’s brain. We haven’t seen this in social media (yet), but there’s increasing concern about deepfake images and other forms of misinformation. Research suggests that such images are damaging even if viewers know that they’re fakes, meaning that techniques like labelling images as AI-generated are insufficient to remove their harm. Other examples include massively engaging artificial images such as the “pong wars” animation of two simultaneous “Breakout” games played between two algorithms: something that shouldn’t be as engaging as it is (as I can attest).

Social media attention grabbing at an industrial scale might therefore constitute a cognitohazard, a way of hacking people’s brains simply by being viewed.

The Great Post Office Scandal: The fight to expose a multimillion pound IT disaster which put innocent people in jail

Nick Wallis (2021)

The history of what is, as I write this, still an ongoing scandal: perhaps the greatest miscarriage of justice in UK history, when the Post Office turned on and prosecuted its own operators for fraud on the basis of flawed evidence from its Horizon IT system.

The scandal came from a series of interlocking arrangements. The Post Office could mount prosecutions in its own right (something that its management now claims to have been unaware of), with its own investigators able to interview under caution but seemingly not required to record interviews or allow legal representation. The contracts under which postmasters operated were misrepresented to turn any discrepancies into the postmasters’ fault, while simultaneously denying them the data they’d need to investigate themselves unless they were exceptionally careful and persistent – and the few who were, were prosecuted and driven out anyway.

As a computer scientist I have to say that the most ridiculous part of the story is the presumption in law that a computer system works properly absent explicit evidence to the contrary, which – when coupled with an asymmetry in who can access the software and its logs – makes it impossible to fight against computer-generated evidence. No system of this size ever works flawlessly, and a careful designer would add features on that assumption, such as paper-trails or detailed logs that can then be forensically analysed. Equally, no such system would ever be constructed without remote access to terminal endpoints to allow problems to be remotely fixed, and again a careful designer would ensure that such interventions were logged so as to be auditable. This is so self-evident that I find it hard to believe that Horizon wasn’t designed that way, which makes the repeated denials even more suspicious.
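The kind of auditable logging I mean is straightforward to sketch. Here’s a minimal, purely illustrative example in Python – a hash-chained append-only log, where each entry commits to the hash of the previous one, so that retrospectively editing or deleting any record breaks the chain and is detectable. The names and structure are my own invention, not anything from Horizon:

```python
import hashlib
import json

class AuditLog:
    """A minimal hash-chained audit log: each entry records the hash
    of its predecessor, so altering or removing any earlier entry
    invalidates every later hash and is caught on verification."""

    def __init__(self):
        self.entries = []

    def append(self, actor, action, detail):
        """Record who did what; chain it to the previous entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"actor": actor, "action": action,
                  "detail": detail, "prev": prev_hash}
        # Hash a canonical (key-sorted) serialisation of the record.
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)

    def verify(self):
        """Recompute every hash in order; False means tampering."""
        prev_hash = "genesis"
        for e in self.entries:
            record = {k: e[k] for k in ("actor", "action", "detail", "prev")}
            if record["prev"] != prev_hash:
                return False
            payload = json.dumps(record, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev_hash = e["hash"]
        return True
```

With a chain like this, a remote intervention by a support engineer would leave a record that couldn’t later be silently removed or rewritten – exactly the kind of audit trail whose absence (or suppression) was at issue in the Horizon cases.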

But there’s a social commentary here too. The justice system, juries, and the communities of the prosecuted postmasters were almost eager to believe their guilt. Part of that was due to the lack of proper media scrutiny to bring the extent of the prosecutions into the public eye, and part to obfuscation by the Post Office itself – but surely there should be systemic oversight of such things, given the massive consequences. And surely communities should have been less ready to believe accusations made against their own members that went against the evidence of their own eyes.

4/5. Finished Sunday 7 April, 2024.

(Originally published on Goodreads.)

Loving Common Lisp, or the savvy programmer’s secret weapon

Mark Watson. Loving Common Lisp, or the Savvy Programmer’s Secret Weapon. Leanpub. 2023.

While pitched as a way of sharing the author’s enthusiasm for Lisp (which really shines through), this book is really a deep demonstration in using Lisp in modern applications – from web APIs and the semantic web to deep learning, large language models, and chatbots.

In some ways, like many other Lisp books, it’s really two books in one. The first chapters are introductory – and to be perfectly honest could be dispensed with, as they’re inadequate as a proper introduction and there are far better introductions out there. The later chapters focus on applications, and provide the real value. One could criticise them as often tying together tools in other languages, with the Lisp code basically being glue; but that’s a very effective way of leveraging all the code and services out there, and is an important technique for Lisp programmers too.

Lisp in space

Lisp in space (podcast, 38 minutes)

An interview on the Corecursive podcast with Ron Garret.

In 1988 (when, for context, I was in the second year of my BSc) Garret started working on autonomous navigation software for Sojourner, NASA’s first Mars rover, which flew in 1997. He used Lisp to do planning, essentially developing an entire domain-specific language for autonomous vehicles. The project was never flown, as NASA opted for a far less ambitious approach to driving the rovers – a decision that Garret now considers to have been the correct one.

But that isn’t the end of the story, because Garret then went on to develop an autonomous Remote Agent controller for the Deep Space 1 technology demonstration mission that performed asteroid and comet fly-bys. Without spoiling the story, the spacecraft flies with a full Lisp system onboard, and Garret gets to interact with its REPL at a distance of 30 light-minutes via the Deep Space Network – surely the longest latency of any REPL session ever!

It’s a fascinating insight into both the potential of Lisp and the political difficulties that using a non-standard development language can engender.

UPDATE 2024-05-16: Ron also wrote a short essay about his experiences.