C++ template macroprogramming versus Lisp macros

Following on from Lisp macros versus Rust macros, I also want to compare C++ templates to Lisp macros.

Templates in C++ were designed as a way of generating typed versions of classes. The template declares some type variables that can be used as placeholders within a class declaration. When the template is instantiated and provided with actual type names, these are substituted for the type variables and the class is expanded. (It used to literally happen like this, so each use generated a completely new class. Modern compilers are smart enough to avoid the code repetition.) A classic example is a typed singly-linked list:

  template<typename A>
  struct List {
    A value;
    List<A>* next;   // a pointer: a struct can't contain itself by value
  };

However, the template system also allows values to be used in templates instead of (or as well as) type names. When these are encountered they are expanded at compile-time, and may cause further templates to be expanded. A classic example of this is to pre-compute some factorials:

  template<unsigned n>
  struct factorial {
    enum { value = n * factorial<n - 1>::value };
  };

  template <>
  struct factorial<0> {
    enum { value = 1 };
  };

In this code the first clause defines a template that implements the usual recursive factorial calculation. The second clause bottoms-out this recursion by defining a specialised template that directly provides the factorial of zero. This can then be used in code such as:

  #include <iostream>

  // (re-using the factorial template defined above)
  int main() {
    std::cout << factorial<7>::value << std::endl;
  }
5040

which outputs the factorial of 7 as one might expect – but with the factorial having been computed at compile-time and inserted into the code as a literal, so it incurs no run-time computation at all.

There are some stringent limitations on the ways in which templates can be expanded. They can’t have mutable variables, for a start (that’s why we needed to use the recursive factorial algorithm). Weirdly, this makes the template language a functional programming sub-set of C++. Having said that, as with Lisp macros, it allows calculations that can be performed statically to be brought forward to compile-time. This makes it useful for building read-only tables, unrolling loops, and the like.
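
For comparison, the same read-only-table trick is available in Lisp without leaving the ordinary language. Here’s a minimal sketch (the factorial-table macro is my illustration, not from the original): the loop runs when the macro expands, so the table is computed at compile-time and embedded as a literal:

  (defmacro factorial-table (n)
    "Build the list 0!, 1!, ... N! at macroexpansion time."
    `',(loop for i from 0 to n
             for f = 1 then (* f i)
             collect f))

  (factorial-table 5)   ; expands to the literal (1 1 2 6 24 120)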

It’s claimed that templates are now so akin to “normal” C++ that they incur less of a readability penalty. That’s a subjective claim that may be true. But the template language isn’t C++. While one can write programs in it, they’re nothing like the C++ one would normally write. The template language is Turing complete, but that just means one can encode any computation, not that one can encode any particular program unchanged – most template programs require massive re-writing from the code one would write normally for execution at run-time. Template macroprogramming is therefore a non-trivial programming task to undertake.

Again as with Rust versus Lisp, C++ templates are an extension to the language rather than a core part of it, although they’re now used quite extensively in the standard library for generic typing. Also as with Rust, use of templates is semantically and syntactically distinct from “normal” C++ code, and it’s this that causes the programming load.

A Lisp macro for the factorial computation, by contrast, looks almost exactly like a normal factorial function that can access the entire language, both when defined and when used:

  (defmacro factorial (n)
    (labels ((fact (m)
               (if (= m 0)
                   1
                   (* m (fact (1- m))))))
      `,(fact n)))

  (princ (factorial 7))
5040

The choice of macro or function (defmacro or defun) has no further syntactic implications for the rest of the program, and imposes no restrictions on the code that can be used within the definition; we could re-write it to use iteration, mutable variables, or any other code, and it would simply be executed at compile-time. The whole language is there, all the time. We can show this by taking a factorial function written in “normal” Lisp and macro-ifying it to be computed at compile-time:

  (defun fact (m)
    "Compute the factorial of M."
    (if (= m 0)
        1
        (* m (fact (1- m)))))

  (defmacro factorial (n)
    `,(fact n))

  (princ (factorial 7))
5040

More importantly, Lisp (and indeed Rust) macros can abstract over syntax as well as classes and values, and so allow the language to be extended with new first-class-at-compile-time structures. Templates are restricted to instantiating templates written with a fixed syntax; in Lisp the syntax has to be “Lisp-like”, although that’s a very light restriction; and in Rust a macro can use any syntax that Rust can tokenise.
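
As a concrete sketch of such an extension (my example, not from the post): Common Lisp has no built-in while loop, but a three-line macro adds one that’s indistinguishable from a native construct:

  (defmacro while (test &body body)
    "Repeat BODY for as long as TEST evaluates to true."
    `(do ()
         ((not ,test))
       ,@body))

  (let ((i 0))
    (while (< i 3)
      (print i)
      (incf i)))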

While C++ templates are sometimes described as macroprogramming (or metaprogramming), they’re addressing a substantially different use case to that addressed by Lisp or Rust macros, and doing so within a more restricted computational and syntactic envelope.

Lisp macros versus Rust macros

I was talking with one of my colleagues the other day about programming languages, and we ended up comparing macros in Rust and Lisp.

Rust has a couple of different kinds of macros: declarative macros that pattern-match on arguments to emit code; and procedural macros that perform more general code-to-code transformations. Lisp has only one kind, which operates from code to code.

Both approaches are far more powerful than the macros in C and C++, which are basically just string expanders. Indeed, one definition of macroprogramming is that it’s writing code that returns code, and there’s a reasonable argument that C’s “macros” are programs that return strings and therefore aren’t macros at all. But that’s just being pedantic.

The Rust operations seem quite awkward, at least from a Lisp perspective. They’re invoked in a way that’s syntactically different to ordinary code, so it’s always possible to see in the source code where procedural code generation is occurring. Perhaps that’s not an entirely bad thing, as it makes it obvious when compile-time computation occurs – although one might also argue that a true language extension or DSL should be so seamless that you don’t need to see it.

I think a more basic difference is in how Rust needs to handle code-type arguments. A macro is a function from code to code, so it needs to represent its code arguments in a way that the macro (which is itself code) can manipulate. Lisp’s homoiconicity makes this trivial: code is a list, just like non-code, and can be manipulated as such. Rust doesn’t have this, so code needs to be passed to macros as a token stream that’s been parsed from the program text. That’s a reasonable solution to the problem, but it does mean that to write macros you need to understand how Rust is tokenised. You also get a token stream, not an abstract syntax tree (AST), which means that manipulating complex code is more difficult: essentially you need to re-create as much of the AST as you need and traverse it within the macro body. There’s a standard library that does this for Rust’s own syntax, which simplifies matters somewhat but still means that writing macros exposes the programmer to the underlying representations. Hopefully they won’t change, as that would break a lot of macros.

By contrast, Lisp macros only require an understanding of Lisp itself, not of its internals, and can operate on the entire detailed structure of the code arguments. It’s a striking example of the power of homoiconicity.
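
A small illustration of that power (my sketch, not from the original): a macro receives its argument as a plain list, so it can treat the same form both as data to quote and as code to splice back in:

  (defmacro logging (form)
    "Print FORM itself before evaluating it: the code is just a list."
    `(progn
       (format t "Evaluating: ~s~%" ',form)
       ,form))

  (logging (+ 1 2))   ; prints "Evaluating: (+ 1 2)", then returns 3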

An approach closer to that of Rust is also available, in Common Lisp anyway, in the form of reader macros that modify the Lisp reader to allow access to the character stream as the source code is being read. I think I’ve only ever encountered reader macros used for providing new styles of literals, or variants of strings that benefit from being treated slightly differently at read-time: they’re an unusual use case, anyway, and Lisp makes the more usual case of macros manipulating Lisp code a lot simpler, without exposing the programmer to parsing.
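
For the curious, here’s a minimal reader-macro sketch (an assumed example, not from the post) that adds a new literal syntax, making [a b c] read as (list a b c):

  (set-macro-character #\[
    (lambda (stream char)
      (declare (ignore char))
      ;; Consume forms up to the matching ] while the source is being read.
      (cons 'list (read-delimited-list #\] stream t))))

  ;; Make ] terminate tokens the same way ) does.
  (set-macro-character #\] (get-macro-character #\)))

  [1 2 (+ 1 2)]   ; reads as (list 1 2 (+ 1 2)) => (1 2 3)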

I suspect the main difference between the two languages’ approaches is that macros are additional to Rust but inherent to Lisp. None of the core of Rust uses macros: they’re for extensions. By contrast, even common operations like defun in Lisp are actually macros that expand to the simpler core operations. This perhaps explains the Rust designers’ decision to make macros syntactically distinct.
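
This is easy to verify at a REPL: asking for one step of macroexpansion shows defun dissolving into lower-level operations (the exact expansion is implementation-specific, so I won’t reproduce one here):

  ;; defun is itself a macro over simpler, implementation-level operations.
  (macroexpand-1 '(defun add1 (x) (+ x 1)))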

Processing MicroMoth recordings offline

The uMoth generates .wav files, uncompressed waveforms of what it records. These need to be processed to identify any bird calls within them.

This function is integrated into BirdNET-Pi, which does recording and classification, and provides a web GUI. With the uMoths we need to provide classification as part of a data processing pipeline. We can however make direct use of the classifier “brain” within BirdNET-Pi, which is unsurprisingly called BirdNET-Analyzer.

Installation

I’m working on a 16-core Intel Core i7@3.8GHz running Arch Linux.

First we clone the BirdNET-Analyzer repo. This takes a long time as it includes the ML models, some of which are 40MB or more.

    git clone https://github.com/kahst/BirdNET-Analyzer.git
    cd BirdNET-Analyzer

The repo includes a Dockerfile that we can use to build the analyser in a container; tagging the image (with -t) lets us refer to it by name when we run it.

    docker build -t birdnet-analyzer .

The container setup is quite basic and is probably intended for testing rather than production, but it gives a working system that could then be embedded into something more polished. The core of the system is the analyze.py script.

Analysing some data (AKA identifying some birds!)

The container as defined looks into its /example directory for waveforms and analyses them, generating a text file for each sample. The easiest way to get it to analyse captured data is to mount a data directory of files onto this mount point (thereby shadowing the example waveform provided).

There are various parameters that configure the classifier. I copied the defaults I was using with BirdNET-Pi, only accepting classifications at or above 0.7 confidence.

    docker run -v /var/run/media/sd80/DATA:/example birdnet-analyzer analyze.py --rtype=csv --min_conf=0.7 --sensitivity=1.25

This crunches through all the files (982 of them from my first run) and generates a CSV file for each. An example is:

    Start (s)  End (s)  Scientific name   Common name         Confidence
    6.0        9.0      Corvus monedula   Eurasian Jackdaw    0.9360
    9.0        12.0     Corvus monedula   Eurasian Jackdaw    0.8472
    12.0       15.0     Corvus monedula   Eurasian Jackdaw    0.8681
    15.0       18.0     Corvus monedula   Eurasian Jackdaw    0.8677
    24.0       27.0     Columba palumbus  Common Wood-Pigeon  0.9198
    27.0       30.0     Columba palumbus  Common Wood-Pigeon  0.7716
    45.0       48.0     Corvus monedula   Eurasian Jackdaw    0.8023
    48.0       51.0     Corvus monedula   Eurasian Jackdaw    0.7696

Those are entirely credible identifications. The start- and end-point offsets allow rough location within the recording. (BirdNET segments the recordings into 3s chunks for analysis.)

This is clearly not as straightforward as BirdNET-Pi, nor as immediately satisfying. But it does scale to analysing lots of data (and could be made to do so even better, with a better front-end to the container), which is important for any large-scale deployment.

Deploying a MicroMoth

The MicroMoth (or uMoth) from Open Acoustic Devices is the same as their better-known AudioMoth recorder but with a significantly smaller footprint. It’s just a traditional recorder or data-logger, with no on-board analysis and no wireless connectivity. I got hold of some to use in a larger project we’re thinking about running, and they’re not kidding about the “micro” part.

The uMoth uses the same software as the AudioMoth, and therefore the same configuration app available from the apps page – for 64-bit Linux in my case. It downloads as a .appimage file, which is simply a self-contained archive. It needed to be marked as executable, and then ran directly from a double-click. (The page suggests that there may be some extra steps for some Linux distros: there weren’t for Arch.)

I then followed the configuration guide. The time is set automatically from the computer’s clock when you configure the device.

For testing I chose two recording periods, 0400–0800 and 1400–1600.

As shown this will, with the default 48kHz sampling, generate about 2GB of data per day and use about 70mAh of battery charge. For my tests I just hung the device out of the window on a USB tether for power: it works fine drawing power from the USB rather than from the battery connector.

This turned out not to record anything, because the time is lost if the power is disconnected, even though the configuration is retained. (The manual does actually say this, given a suitably close reading. It could be clearer.) There’s a smartphone app that can reset the time once the device is in the field and powered-up, though, by playing an audio chime that encodes the current time and location in a way the board can understand. Flashing the device with the “Always require acoustic chime on switching to CUSTOM” option makes it wait after power is applied until its time has been set.

The red LED flashes when the device is recording. The green LED flashes when the device is waiting for a recording period to start. The red LED stays lit while the time is unset.

Pascal Costanza’s highly opinionated guide to Lisp

Pascal Costanza’s Highly Opinionated Guide to Lisp

Part introduction, part paean to the language’s power, part study guide, while dipping into an eclectically-chosen subset of Lisp features that really illustrate what makes it different.