Doppelganger: A Trip into the Mirror World

Naomi Klein (2023)

What does it mean for someone to have a double? That is the broad premise of this book, and it’s a lot broader than it first appears.

Klein’s doppelganger is another author, Naomi Wolf. Both began as liberal and feminist darlings, before “Other Naomi” (Wolf) became a proponent of various conspiracy theories and took to hanging out with right-wing influencers. The two Naomis started being confused with each other on social media, to the extent that Klein almost starts to lose her own sense of identity and becomes increasingly obsessed with tracking the confusion. (I was darkly amused to check Wolf out on Wikipedia and find that the page starts “Not to be confused with Naomi Klein” – and vice versa for Klein’s entry.)

There’s a certain feeling of narcissism in the account. Maybe Klein is affected so much because she and Wolf both have such curated on-line personalities, so that when Klein’s is intruded upon it feels like a personal attack. Would someone with less commitment to media be so affected? – well, I have to say I hope I never personally find out, because it comes across as a very destabilising experience.

Klein does take a broader view of the phenomenon of “doubling”, which at one level feels unnecessary and extraneous to this book but which I found quite fascinating in its own right: the idea that doubles appear in lots of circumstances for those with (and even without) a life in the public eye. For an author it’s natural to worry about how the “double” that is one’s written work is interpreted and re-interpreted after it’s been published, outside your control, and how this spills back onto how people interpret everything you write from then on. Some of the examples feel a little contrived, others quite plausible.

So this book is both a personal memoir and a deeper exploration of a person’s relationship to their own work and life. Both sides are valuable and well worth reading.

4/5. Finished Sunday 2 March, 2025.

(Originally published on Goodreads.)

Tutorial on good Lisp programming style

https://www.cs.umd.edu/%7Enau/cmsc421/norvig-lisp-style.pdf

Expression + Understanding = Communication

A great discussion of style, perhaps the most elusive programming skill. Illustrated across all the levels of abstraction that one finds in Lisp: of data, functions, control, and syntax (the last being essentially unique to Lisp macros). The discussion of control abstraction deals extensively with catch/throw and the other non-local control primitives that underlie the condition system, which are dealt with at greater length in The Common Lisp condition system.
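To give a flavour of what’s meant by control abstraction, here’s a minimal sketch of my own (not taken from the tutorial) using catch/throw for a non-local exit:

```lisp
;; THROW unwinds directly to the dynamically enclosing CATCH with a
;; matching tag, abandoning the rest of the traversal.
(defun find-first-negative (list)
  "Return the first negative number in LIST, or NIL if there is none."
  (catch 'found
    (dolist (x list)
      (when (minusp x)
        (throw 'found x)))
    nil))
```

So `(find-first-negative '(3 1 -4 1 -5))` returns `-4`, the throw having cut the iteration short without any explicit flag-passing.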

The extended example (starting on page 71) of how to simplify logical expressions is amazingly clear, and really shows the progression from simple functions to embedded languages targeting specific tasks and making them easy to describe and extend to new applications: something that lies at the heart of the Lisp experience.
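As a hint of the progression (this is my own toy sketch, not Norvig’s code from the tutorial), even a few lines of constant-folding for boolean expressions show the shape of the approach, with further rules slotting in as the “embedded language” grows:

```lisp
;; A toy simplifier for expressions built from AND, OR, T and NIL.
;; Arguments are simplified recursively, then constants are folded
;; at each level.
(defun simplify (expr)
  (if (atom expr)
      expr
      (let ((args (mapcar #'simplify (rest expr))))
        (ecase (first expr)
          (and (let ((args (remove t args)))
                 (cond ((member nil args) nil)       ; (and ... nil ...) => nil
                       ((null args) t)               ; (and) => t
                       ((null (rest args)) (first args))
                       (t (cons 'and args)))))
          (or  (let ((args (remove nil args)))
                 (cond ((member t args) t)           ; (or ... t ...) => t
                       ((null args) nil)             ; (or) => nil
                       ((null (rest args)) (first args))
                       (t (cons 'or args)))))))))
```

For example `(simplify '(or (and p t) nil))` folds down to `p`. Norvig’s version goes much further, driving the rewriting from data-level rule tables rather than hard-coded cases.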

Much of the advice is actually language-agnostic, even though the concrete examples are in Lisp. Norvig, of course, is the author of Paradigms of artificial intelligence programming, and has made enormous contributions to the theory and practice of Lisp.

Lisp: Style and design


Molly Miller and Eric Benson. Lisp: Style and Design. Digital Press. ISBN 978-0135384220. 1990.

A book that would serve as a primer for someone tackling a significant piece of programming for the first time.

The style is a bit stiff and occasionally slightly patronising, definitely positioned as advice from senior to junior programmers. The depth of the material is variable: I found the treatment of macros quite superficial, not helped by examples that generate questionable code. It also places relatively little emphasis on CLOS and generic functions, which would get more space in a more modern treatment.

The best chapters are those on debugging and (especially) performance engineering, which dig into the interactive tools generally available within Lisp and give a good end-to-end description of the use of declare forms to aid compiler optimisations.
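For a flavour of the kind of declaration involved, here’s a sketch of my own (not from the book, and assuming an implementation like SBCL with specialised fixnum arrays):

```lisp
;; Type and optimisation declarations let the compiler open-code the
;; arithmetic and elide runtime checks.  (SAFETY 0) is the sort of
;; setting one only reaches for once the code is known to be correct.
(defun sum-fixnums (v)
  (declare (type (simple-array fixnum (*)) v)
           (optimize (speed 3) (safety 0)))
  (let ((total 0))
    (declare (type fixnum total))
    (dotimes (i (length v) total)
      (incf total (aref v i)))))
```

The declarations are promises to the compiler rather than checks, which is exactly why the book frames them as a performance-engineering activity to be done late and carefully.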

But again the book’s age shows. It predates the obsessive relationship that many modern programmers have with unit testing and test automation, treating testing as an interactive activity alongside debugging rather than as a core and permanent part of program development and maintenance.

Human Compatible: Artificial Intelligence and the Problem of Control

Stuart Russell (2019)

A book that tries to address the emergence and implications of artificial intelligence.

(I should preface what I’m going to say by noting Russell’s enormous contributions to AI research and teaching. He knows massively more about these systems than I do.)

Computer scientists often avoid the term “AI” because it’s so broad, encompassing a huge range of approaches, from constraint-solving and pattern-matching to speech recognition and the sort of “chatbot” technology that is the common face of modern AI – technically referred to as Large Language Models (LLMs), which have been trained on text scraped from the internet and can perform impressive feats of conversation.

There’s a problem when discussing AI that you have to be very careful about, and that’s the tendency to anthropomorphise. It’s almost impossible not to use terms that imply agency: “think”, “understand”, “learn”, “know”, “decide”, and so on. AI researchers use these words in very technical senses that are analogous to their usual meanings – but that aren’t the same, and that’s where things get misleading. When a person “summarises” some information, we mean that they pick out what they think is most important from the text; when an LLM does the same thing, it picks out the words that are statistically most clustered within the text and its training data. Are these two processes the same? They certainly don’t sound the same, and they often produce radically different “summaries”, because the human summariser is working from a far richer knowledge base and using different means to assess importance, often including what they know about the intended audience. Doing this well is usually taken as one of the hallmarks of intelligence. LLMs, by contrast, are statistical techniques that cleave to the mean, which implies they will by design produce middling answers: an amazingly insightful and creative answer is by definition unlikely, and so will be avoided.

There are other problems too. Machine learning, the science that underlies LLMs, is usually based on training from huge volumes of data. This permits impressive feats, such as being able to spot anomalies in cancer images better (in some cases) and more consistently than trained doctors. But that reliance is a weakness too: it assumes the future will be like the past, meaning that anything not in the training set runs the risk of being ignored as noise. The LLMs’ training data from the internet is text, not validated knowledge in any sense. And remember that repetition increases likelihood for an LLM, including repetition of nonsense.

This lies at the heart of the problem of hallucination, where AIs confidently produce startlingly incorrect text. Why is this? Because text is all they have to work with: there is no model of the world as it is, and therefore no ability to self-correct. The most modern LLMs are now attempting to address this, so far with little success.

So are we seeing intelligence from LLMs? We’re seeing something that presents like some aspects of intelligence. From this the industry has extrapolated that we are, for example, about to see models demonstrating “PhD level” ability – “soon”, but there never seems to be a demonstration. One characteristic of LLMs is that they don’t spontaneously do anything: they wait until prompted, and then reply. Does that sound like intelligent behaviour? – no planning, no forethought, no imagination? I work with a lot of PhD-level people, and none of them behave this way.

Some will object that there’s lots going on in industrial labs that we don’t see. Quite possibly. Maybe the insiders are seeing things that we outsiders don’t. But the common characteristic of the systems we outsiders do see is that they perform well on certain well-prescribed and -bounded tasks, but fail catastrophically when used in less constrained domains.

I am a materialist: I don’t believe that there’s anything supernatural about intelligence (however one defines it), or anything privileged about running an intelligent process on a biological (rather than a silicon) substrate. But this book strikes me as another over-hyped, somewhat confused and confusing contribution to our understanding of what AI is. It focuses on some entirely hypothetical potential harms (intelligent autonomous machines) while largely ignoring the very real current harms of bias, accuracy, and increasing economic and digital disparities.

2/5. Finished Thursday 16 January, 2025.

(Originally published on Goodreads.)

Mordew (Cities of the Weft, #1)

Alex Pheby (2020)

A deeply creative work of world-building, conjuring a city protected by magic in a world that’s clearly set against it.

It’s a novel in a style that doesn’t explain its premises but lets the reader infer what’s happening from the details of the small-scale presentation. In this it’s not always entirely successful, and even by the end there are a lot of loose ends – which I suppose set the stage for the sequel.

3/5. Finished Monday 13 January, 2025.

(Originally published on Goodreads.)