What contributions can computer scientists uniquely make to the latest scientific challenges? The answer may require us to step back and look at how instruments affect science, because the computer is the key instrument for the scientific future.
In the late seventeenth century, science was advancing at an extraordinary rate — perhaps the greatest rate until modern times. The scientists of this era were attempting to re-write the conceptual landscape through which they viewed the universe, and so in many ways were attempting something far harder than we typically address today, when we have a more settled set of concepts that are re-visited only periodically. This also took place in a world with far less of a support infrastructure, in which the scientists were also forced to be tool-makers manufacturing the instruments they needed for their research. It’s revealing to look at a list of scientists of this era who were also gifted instrument-makers: Newton, Galileo, Hooke, Huygens and so on.
Antonie van Leeuwenhoek is a classic example. He revolutionised our understanding of the world by discovering single-celled micro-organisms, and by documenting his findings with detailed drawings in letters to the Royal Society. The key instrument in his research was, of course, the microscope, of which he manufactured an enormous number. Whilst microscopes were already known, van Leeuwenhoek developed (and kept secret) new techniques for the manufacture of lenses which allowed him significantly to advance both the practice of optics and the science of what we would now term microbiology.
The important point here is not that early scientists were polymaths, although that’s also a fascinating topic. What’s far more important is the effect that tooling has on science. New instruments not only provide tools with which to conduct science; they also open-up new avenues for science by revealing phenomena that haven’t been glimpsed before, or by providing measurements and observations of details that conflict with the existing norms. The point is that tools and science progress together, and therefore that advances in instrument-making are valuable both in their own right and in the wider science they facilitate.
Not all experimental scientists see things this way. It’s fairly common for those conducting the science to look down on the instrument-makers as mere technicians, whose efforts are of a different order to those involved in “doing” the science. It’s fair to say that the scientists of the seventeenth century wouldn’t share (or indeed understand) this view, since they were in a sense much closer to the instrument as a contributor to their research. Looked at another way, new experiments then typically required new instruments, rather than as now generally being conducted with a standard set of tools the researcher has to hand.
What are the instruments today whose advance will affect the wider scientific world? “Traditional” instrument-making is still vitally important, of course, and we can even regard the LHC as a big instrument used in support of particular experiments. But beyond this we have “non-traditional” instruments, of which computers are by far the most commonplace and potentially influential.
I’ve talked previously about exabyte-scale science and the ways in which new computing techniques will affect it. Some experimenters overlook the significance of computational techniques — or, if they do see them, regard them as making technician-level rather than science-level contributions to knowledge. Even more experimenters overlook the impact that more rarefied computer science concerns such as programming languages, meta-data and search have on the advancement of knowledge. These views are restricted, restricting, and (in the worst case) stifling. They are also short-sighted and incorrect.
At the large scale, computational techniques often offer the only way of “experimenting” with large-scale data. They can be used to confirm hypotheses in the normal sense, but there are also examples where they have served to help derive new hypotheses by illuminating factors and phenomena in the data that were previously unsuspected, and furthermore could not have been discovered by any other means. The science is advanced by the application of large-scale computing to large-scale data, possibly collected for completely different purposes.
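A toy sketch can make the hypothesis-generating role concrete. The code below (entirely hypothetical, with invented variable names and data) scans a small dataset, notionally collected for some other purpose, for unexpectedly strong relationships between variables; a real analysis at scale would use the same idea with far richer statistics.

```python
# Toy sketch (illustrative only): scanning a dataset for unexpectedly
# strong relationships between variables -- a crude stand-in for the
# hypothesis-generating role of large-scale data analysis.
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def surprising_pairs(table, threshold=0.9):
    """Return variable pairs whose correlation exceeds the threshold --
    candidate phenomena worth a proper hypothesis and experiment."""
    return [(a, b) for a, b in combinations(sorted(table), 2)
            if abs(pearson(table[a], table[b])) >= threshold]

# Data notionally collected for a completely different purpose.
readings = {
    "temperature": [10, 12, 14, 16, 18, 20],
    "algae_density": [5, 6, 7, 8, 9, 10],    # tracks temperature closely
    "rainfall": [30, 2, 17, 44, 9, 25],      # unrelated noise
}
print(surprising_pairs(readings))  # [('algae_density', 'temperature')]
```

The output is not a conclusion but a lead: the science proper begins when someone asks why the flagged pair moves together.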
In that sense the computer is behaving as an instrument that opens-up new opportunities in science: as the new microscope, in fact. This is not simply a technical contribution to improving the way in which traditional science is done: coupled with simulation, it changes both what science is done and how it is done, and also opens-up new avenues for both traditional and non-traditional experiments and data collection. A good example is in climate change, where large-scale simulations of the atmosphere can confirm hypotheses, suggest new ones, and direct the search for real-world mechanisms that can confirm or refute them.
At the other end of the scale, we have sensor networks. Sensor networks will allow experimental scientists directly to collect data “in the wild”, at high resolution and over long periods — things that are difficult or impossible with other approaches. This is the computer as the new microscope again: providing a view of things that were previously hidden. This sort of data collection will become much more important as we try to address (for example) climate change, for which high-resolution long-term data collected on land and in water nicely complement larger-scale space-based sensing. Making such networks function correctly and appropriately is a significant challenge that can’t be handled as an after-thought.
At both scales, much of the richness in the data comes from the ways it’s linked and marked-up so as to be searched, traversed and reasoned-with. While some experimental scientists develop strong computational techniques, very few are expert in metadata, the semantic web, machine learning and automated reasoning — although these computer science techniques are all key to the long-term value of large-scale data.
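To illustrate what “linked and marked-up” buys you, here is a minimal sketch of the subject–predicate–object “triple” model that underlies semantic-web metadata. Everything in it — the sample identifiers, predicates and query function — is invented for illustration; real systems use RDF stores and query languages such as SPARQL.

```python
# Toy sketch (hypothetical data, stdlib-only): observations stored as
# subject-predicate-object triples, the basic idea behind semantic-web
# metadata, so the data can be searched and traversed, not merely stored.
triples = [
    ("sample-42", "collected-at", "loch-ness"),
    ("sample-42", "organism", "diatom"),
    ("sample-17", "collected-at", "loch-ness"),
    ("sample-17", "organism", "cyanobacterium"),
]

def query(s=None, p=None, o=None):
    """Return every triple matching the pattern (None acts as a wildcard)."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# "Which samples were collected at Loch Ness?"
hits = [s for s, _, _ in query(p="collected-at", o="loch-ness")]
print(hits)  # ['sample-42', 'sample-17']
```

Because the mark-up is uniform, the same query mechanism works whatever the data describe — which is precisely why metadata expertise adds long-term value to large-scale datasets.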
As with the earliest microscopes, the instrument-maker may also be the scientist, but that causes problems perhaps more severe today than in past centuries. Like it or not, we live in an era of specialisation, and in an era where it’s hard to be truly expert in even one field, let alone the several one might need in order to make proper contributions. But the development of new instruments — computational techniques, sensing, reasoning, metadata cataloguing — is nevertheless key to the development of science. In the years after van Leeuwenhoek, several microbiologists formed close collaborations with opticians who helped refine and develop the tools and techniques available — allowing the microbiologists to focus on their science while the opticians focused on their instruments. (Isn’t it interesting how “focused” really is the appropriate metaphor here?) Looked at broadly, it’s hard to say which group’s contribution was more influential, and in some senses that’s the wrong question: both focused on what they were interested in, solving hard conceptual, experimental and technological problems along the way, and influencing and encouraging each other to perfect their crafts.
It’s good to see this level of co-operation between computer scientists and biologists, engineers, sustainable-development researchers and the rest beginning to flower again, at both ends of the scale (and at all points in between). It’s easy to think of instruments as technical devices devoid of scientific content, but it’s better to think of them both as having a core contribution to make to the concepts and practice of science, and as having a fascination in their own right that gives rise to a collection of challenges that, if met, will feed-back and open-up new scientific possibilities. The microscope is a key example of this co-evolution and increased joint value from the past and present: the computer is the new microscope for the present and future.