Our most recent distinguished lecture was the best I've heard so far, and on a fascinating topic I'd like to know more about.
We run two distinguished lectures each academic year, inviting an academic in to teach a four-hour course on a topic in which we often don't have much expertise at St Andrews. It exposes undergraduates, grad students and staff to new topics and ways of thinking about research. The series goes back to 1969 and includes some of the best-known names in computer science.
Last semester's speaker was Larry Yaeger from Indiana University (on sabbatical at the University of Hertfordshire), who talked about artificial life: using computers to study the processes of evolution, speciation and adaptation. It's a topic that sits on the boundary between novel computing and theoretical biology.
Artificial life sounds too much like science fiction to be "real" computer science: do we understand enough about life to be able to study it in the abstract? And, if not, can artificial life have any scientific content? If you confine yourself enough, of course, then the answer to both questions is a qualified "yes", but the question then becomes: does it tell you anything meaningful about anything? Larry's talk showed that even quite abstracted artificial-life scenarios still give some high-level information about the potential for systems design, especially for very dynamic adaptive systems in changing environments.
Larry's work has focused on building multi-agent simulations of processes, seeing how simple rule sets can give rise to complex behaviours. This has culminated in a system called Polyworld, which lets users set up "genetically" based behaviours for agents. (There are some very cool movies of it all working.) The genetic basis -- completely synthetic and higher-level than real genetics -- means that agents can evolve through mutation and cross-over.
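To make the mutation-and-crossover idea concrete, here is a minimal sketch of that style of synthetic genetics. This is my own toy illustration, not Polyworld's actual encoding: each agent's "genome" is just a bit-string, and fitness is simply the number of 1-bits, standing in for how well a behaviour suits the environment.

```python
import random

random.seed(42)

GENOME_LEN = 32  # length of each toy bit-string genome (arbitrary choice)

def fitness(genome):
    # Toy stand-in for "how well does this behaviour suit the world?"
    return sum(genome)

def crossover(a, b):
    # Single-point crossover: the child takes a prefix from one parent
    # and the remaining suffix from the other.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.01):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(generations=100, pop_size=50):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: only the fitter half get to be parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
               for _ in range(pop_size)]
    return max(fitness(g) for g in pop)

best = evolve()
```

Even this crude setup shows the essential loop: variation (mutation, crossover) plus selection is enough to drive a population toward fitter behaviours without any central design.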
The part I found most interesting was the way that these systems -- like the natural systems they're abstractions of -- tend not to do optimisation per se. Instead they find, and stick with, solutions that are "good enough": the population reaches a balance at which the evolutionary pressure isn't strong enough, and the benefits aren't great enough, for further changes to take place.

The difference from traditional engineering is quite profound, both in this satisfaction with the sub-optimal and in the fact that the selection is dynamic: if the chosen approach ceases to be "good enough" as the environmental pressures change, the system will shift to another solution as a matter of course. You see the same dynamism all over chemistry, where chemical equilibrium remains a dynamic process, with lots of reactions going on all the time without changing the macroscopic concentrations of the reagents involved. It's easy to mistake this for a static system, which it most definitely isn't -- a mistake I think a lot of scientists and engineers make, and something we probably need to address when designing adaptive systems or sensor networks that have to operate against or within a complex environment. To do this we'd need to give up many of our intuitions about design, and about the possibility of a single "correct" solution to a problem, and think instead of a design space that the system is (to some extent) free to explore -- and make this design space, and the exploration of it, concepts that are propagated to run-time.
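The two ideas above -- satisficing rather than optimising, and re-adaptation when the environment shifts -- can be sketched in a few lines. This is my own toy model, not anything from the talk: a population of scalar "traits" is selected toward an environmental optimum, but any trait within a "good enough" band of the optimum looks equally fit to selection, so the population settles near the optimum without ever converging on it exactly; when the optimum moves, the same dynamics track it as a matter of course.

```python
import random

random.seed(1)

GOOD_ENOUGH = 1.0  # band within which selection can't tell traits apart

def step(pop, optimum):
    # Inside the GOOD_ENOUGH band, all traits score identically (zero), so
    # selection stops improving them -- the "good enough" plateau. Outside
    # it, traits are ranked by how far they miss the optimum.
    def score(t):
        err = abs(t - optimum)
        return 0.0 if err <= GOOD_ENOUGH else err
    survivors = sorted(pop, key=score)[: len(pop) // 2]  # keep the better half
    children = [t + random.gauss(0, 0.2) for t in survivors]  # noisy copies
    return survivors + children

def mean_error(pop, optimum):
    return sum(abs(t - optimum) for t in pop) / len(pop)

pop = [random.uniform(-10, 10) for _ in range(100)]
for _ in range(200):
    pop = step(pop, optimum=5.0)
err_before = mean_error(pop, 5.0)

# The environment changes: the optimum moves, and with no redesign at all
# the same selection dynamics carry the population to the new optimum.
for _ in range(200):
    pop = step(pop, optimum=-5.0)
err_after = mean_error(pop, -5.0)
```

The point of the sketch is that "good enough" and re-adaptation aren't extra machinery bolted on; they fall out of making selection relative to the current environment rather than to a fixed objective.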
I think this kind of approach makes sense even if you don't embrace the genetic-algorithms view of the world (which in the main I don't). In some ways this is a measure of the success of artificial life research: it's illuminating concepts of general utility outside the context from which they're drawn, concepts that can influence other approaches to systems design without our having to junk everything we already know -- which we're clearly not going to do. These sorts of incremental changes are in many ways far more useful than revolutions, but they come about from thinking that's more-or-less divorced from the mainstream. It's a good illustration of why blue-skies research is important, and of how knowledge really is all of a piece, with interconnections and interactions we can't predict.