
Posts about university (old posts, page 3)

Why do people go to university?

Changes to admissions systems in the UK and Ireland simply tinker with the existing approach. They don't address the more fundamental changes in the relationship between university education and economic and social life.

We're in the middle of a lot of changes in academia, with the fees increase for students in England eliciting a response in Scotland, and a report in Ireland suggesting changes to the "points" system of admissions to introduce more targeting. Despite the upsets these changes cause to academics -- they heighten the arbitrary and distressing discontinuity in costs between Scottish and English students in St Andrews, for example -- they're actually quite superficial in the sense of not massively changing the degrees on offer or their structure. However, these would seem to be the areas in which change could be most fruitful, in response to changing patterns of life, patterns of access to information and the like. It's hard to generate enthusiasm for this amid the problems of managing the financial structures. In other words, the fees débâcle may be masking changes that we would otherwise be well advised to make.

What are these changes? First of all we need to understand what we believe about the lives and futures of the people who might attend university. For me this includes the following:

People don't know what they're passionate about. Passionate enough to want to spend a substantial portion of their lives on it, that is. There's a tendency to follow an "approved" path into a stable career, and this may in some cases lead people to do the "wrong" degree as they worry about their job prospects. But if you're going into a career in something, you have to accept that you'll spend about half your waking life on it. It makes sense to be sure you'll enjoy it. So we need to focus on allowing people to find their passions, which argues against too-early specialisation and for a broader course of study.

My first postdoc supervisor, Chris Wadsworth, told me about 20 years ago that "it takes you 10 years to decide what you're actually interested in." In your mid-20s you tend to disregard statements like this and assume you know what your research interests are, but on reflection he was right: it did take me about 10 years to work out what I wanted to spend my career researching, and it wasn't really what I was doing back then: related, yes, but definitely off to one side. I've also become interested in a whole range of other things that were of no interest to me back then, not least because most of them didn't exist. If that's true of an academic, it's just as true of an 18-year-old undergraduate. You can have an idea what you like and what interests you, but not much more than that.

We can't teach enough. It used to be simpler: go to university, learn all the things you'll need, then go and practise those skills, with marginal upgrading, for the rest of your career. I can't think of many topics like that any more.

This changes the emphasis of education: it's not the stuff we teach that's important, it's the ability to upskill effectively. For that you need foundations, and you need to know the important concepts, and you need to be able to figure out the details for yourself. It's not that these details aren't important -- in computing, for example, they're critical -- but they're also changing so fast that there's no way we could keep up. And in fact, if we did, we'd be doing the students a disservice by suggesting that this isn't a process of constant change.

The jobs that people will want to do in 20 years' time don't exist now. Fancy a career as a web designer? Didn't exist 20 years ago; 10 years ago it was a recognised and growing profession; lately it's become part and parcel of graphic design. The world changes very rapidly. Even if the job you love still exists, there's a good chance you'll want to change to another one mid-career. Again, the ability to learn new skills becomes essential. I suspect a lot of people -- including politicians -- haven't appreciated just how fast the world is changing, and that the pace of change is accelerating. You don't have to believe in the Singularity to believe that this has profound implications for how economies and careers work.

We don't know where the future value comes from. In a time of increased financial stress, governments fall back on supporting courses that "obviously" support economic growth: the STEM subjects of science, technology, engineering and medicine. The only problem with this as an argument is that it's wrong. Most of the value in the digital age hasn't come from these areas. The profits at Apple and Google pale into insignificance behind the aggregate profits of the companies (often much smaller) making content to add value to the devices and services these companies provide. I've argued before that this value chain is best supported by humanities graduates, not scientists and engineers. (If you want a supporting example, consider this as a proposition: the whole argument about network neutrality is essentially an argument about whether ISPs should be allowed to tax the producers and consumers of the content they transmit in terms of its end-user value, rather than in terms of its transmission cost. That's where the value is.)

Does the above have anything to suggest about changes in admissions or the structure of degrees? To me it suggests a number of things. Have broad introductory years in which students are encouraged to explore their wider interests before specialising. (The Scottish broad curriculum attempts to do this, but in sciences we don't do it particularly well.) Focus teaching on core principles and on how these affect the evolution of tools and techniques. Also focus on students learning the tools and techniques themselves, and showing how they relate back to the taught core. Generate a set of smaller degree-lets that people can take over their career with less commitment: at a distance, in the evening, in compressed blocks, over a long period -- whatever, just suitable for people to do alongside working at something else. Above all, don't assume we (either the universities or the State) can pick winners in terms of subjects. We're definitely in a post-industrial world, and that means new intellectual territory, and that what worked before won't always work in the future. I hope computer science will continue to change the world, but I'm also completely confident that a history graduate will come up with one of the Next Big Things.

On not funding the arts and humanities

If we don't adequately fund the arts, where will all the digital content come from?

Recent noises from within the UK's funding structures suggest that the future for arts and humanities education is somewhat threatened. In a time of restricted resources (the argument goes) the available funding needs to be focused on topics that make a clear, traceable contribution to the national economy. This essentially means supporting the STEM subjects -- science, technology, engineering and medicine -- at the expense of the arts and humanities.

As a computer scientist I might be expected to be loosely in favour of such a move: after all, it protects my discipline's funding (at least partially). But this is to misunderstand the interconnected nature of knowledge, of scholarship, and of the modern world as a whole.

We need first to think about how people use their degrees. Contrary to popular belief (even amongst students), degrees don't generally lead to jobs -- and nor should they. It's true that we teach a lot of information and skills in a degree: how to program, how to analyse algorithms and understand different technologies, in the case of computer science. But this isn't the reason to get a degree.

What we try to teach are the critical skills needed to understand the world, contribute to it and change it. Computer science is a great example of this. Three years ago there were no tablet computers and no cloud computing: the field changes radically even on the timescales of a typical degree programme. So there's not really much point in focusing on particular technologies or languages. What we teach instead is how to learn new languages and technologies, and how to assess how they fit into the changing pattern of computer science. Put another way, we have to turn them into people who can learn and assimilate complex technological ideas throughout their lives, and create new ones.

Education is what survives when what has been learnt has been forgotten

B.F. Skinner

This is even more true in humanities. Most people who study geography do not become geographers (or polar explorers, for that matter): they go into fields that require critical minds who can come to grips with complex ideas. But they bring to these jobs an appreciation of a complex and layered subject, an ability to deal with multiple simultaneous constraints and demands upon shared resources, the interaction of people with the natural world, and so forth. This is much more valuable than the specific knowledge they may have acquired: they have the ability to acquire specific knowledge whenever they need it, and to fit it into the wider scheme of their understanding.

But even if we accept in part the narrower view of education as a direct feeder for the economy -- and realistically we have to accept it at least to some extent -- reducing humanities graduates seems shortsighted. If we also accept that the future is of a digital and knowledge economy, then the technologies underlying this economy are only one part of it -- and probably only a small part. The rest, the higher-value services, come from content and applications, not directly from the technology.

Consider how much value has been created from building computers. Now consider how much value is created from selling things that use computers. Computer scientists didn't create much of the latter; nor did physicists, mathematicians, materials scientists or electronic engineers. Humanities people did.

So even aside from the reduction in quality of life that would come from reducing the contributions of people who've studied history and literature, there's a direct economic effect in play. Without such people, there'll be no-one to create the digital content and services on which the knowledge economy depends. (It's called knowledge economy, remember, not science economy.) Increasing the proportion of knowledgeable, educated people is valuable per se for the creativity those people unleash. The fact that we perhaps can't directly trace the route from university student places to value to society doesn't make that contribution unreal: it just means we're not measuring it.

When we went about inventing computers and the internet we had specific goals in mind, to do with scientific analysis and communication. But it turned out that the most socially significant impacts of these technologies didn't come from these areas at all: they came from people who thought differently about the technology and came up with applications that no computer scientist would ever have thought of. It still amazes me that no professional computer scientist -- including me -- ever dreamed of social network sites like Facebook, even though we happily talked about concepts like "social network" and about using computers to examine the ways people interacted at least five years before Facebook debuted. We don't have the mindset to come up with these sorts of applications: we're too close to the technology. Scientists can happily develop services for science: it needs people who are closer to the humanities to develop services for humanity.

Metrics and the Research Excellence Framework

Is the success of the UK and US in global university research rankings any more than a consequence of the choice of particular metrics? And does this explain the design of the funding and evaluation methodologies?

We're currently at the start of the UK's periodic Research Excellence Framework (REF) process, during which every School and Department in every university in the country is evaluated against every other, and against international benchmarks, to rate the quality of our research. As one might imagine, this isn't the most popular process with the academics involved (especially those like me who are directors of research and so have to manage the return of our School's submission).

It certainly absorbs a lot of intellectual capacity. In principle it shouldn't: the REF is intended to generate a fair reflection of a School's behaviour, is supposed to measure those activities that academics working at the cutting edge would do anyway, and to encourage people to adopt best practices by rewarding them in the assessment methodology. In reality of course the stakes are so high that the process can distort behaviour, as people (for example) target their published work at venues they expect to score well in the REF, and not necessarily those that would give the work the best exposure in the appropriate community. In principle this sort of arbitrage is impossible, as the REF should value most highly those venues that are "best" -- a sort of academic version of the efficient markets hypothesis. In reality there's often a difference, or at least a perceived difference. In computer science, for example, we often use conferences in preference to journals for really new results because they get the word out faster into the community. Other sciences don't do this, so we worry that conferences (which in other fields are quite peripheral to the publications process) won't be judged as seriously even though they might be scientifically more valuable.

Managing our engagement with the REF process has made me think more widely about what the REF is trying to achieve, and how it's achieving it. In particular, are the outcomes the REF encourages actually the ones we should be encouraging?

If we look at universities across the world, one thing that stands out is the way that, in science and technology at any rate, the UK comes a close second to the US (and way above all other countries) in terms of publications, citations, patents and other concrete outputs of research -- despite the smaller population and even smaller proportional science budget. Despite the perpetual worries about a brain-drain of the best scientists to better-funded US Schools, the top UK institutions continue to do amazingly well.

The US and the UK also have strikingly similar funding models for universities. A small number of funding agencies, mostly national, disburse funding to Schools in particular fields according to priorities that are set at least notionally by the researchers (known in the UK as the Haldane Principle) but also and increasingly with regard to nationally-defined "strategic priorities" (usually in defence or security in the US). The agencies are constantly pressured about the value they return for taxpayers' money -- disproportionately pressured, given the relatively small sums involved -- and so tend to adopt risk-reduction strategies such as following their money and increasing the funding of individuals and groups who have had previous funding and have not made a mess of it. This has three consequences: it becomes harder for junior staff to make their names on their own; it concentrates funding in a few institutions that are able to exploit their "famous names" to acquire follow-on funding; and, over time, it subtly encourages ambitious and talented people to gravitate to these well-funded institutions. The medium-term result is the creation of larger research-intensive Schools that absorb a relatively large proportion of the available funding (and to a lesser extent talent).

One can argue about whether this is a good outcome or not. Proponents would claim that it generates a "critical mass" of researchers who can work together to do better work. Critics would argue that larger Schools can rapidly become too large for comfortable interaction, and that the focus needed to grow can come at the expense of cross-disciplinary research and can discourage people from undertaking risky projects, since these might damage (or at least not advance) the School's collective reputation.

Do these two processes -- high impact and winner-takes-most funding -- run together? Is one causative of the other? It seems to be that they're actually both a consequence of a more basic decision: the choice of metrics for evaluation. And this in turn tells us where the REF fits in.

The UK and US systems have chosen specific, measurable outcomes of research, in terms of a broadly-defined international "impact". Given this choice, the funding model makes perfect sense: find groups that perform well under these metrics, fund them more (to get more good results) and help them to grow (to create more people who do well under the metrics). The system then feeds back on itself to improve the results the country gets under its chosen framework for evaluation.

This all seems so logical that it's worth pointing out that there's nothing inherently "right" about this choice of metrics, that there are alternatives, and that the current system is in conflict with other parts of universities' missions.

A noticeable feature of the US and UK systems is that they are increasingly two-tier, with research-led institutions and more general teaching institutions doing little or no research -- even in fields that don't actually benefit as much from economies of scale or critical masses of researchers (like mathematics). This means that students at the teaching institutions don't get exposure to the research leaders, which is one of the main benefits of attending a research-led university. This is bound to have an impact on student satisfaction and achievement -- and is explicitly excluded from the REF and similar metrics. It's interesting to note that, in the university rankings that include student-centred metrics such as those performed in the UK by the Sunday Times and the Guardian, the sets of institutions at the top are shuffled compared to the research-only rankings. (St Andrews, for example, does enormously better in surveys that take account of the student experience than in those that focus purely on research. We would argue this is a good thing.)

If we accept the choice of metrics as given, then experience seems to suggest that the UK's evaluation and funding structures are optimised to deliver success against them. This is hardly surprising. However, one could also choose different sets of equally defensible metrics and get radically different results. If one takes the view, for example, that the desirable metric is to expose as many students as possible to a range of world-class researchers, one could set up a system which eschews critical mass and instead distributes researchers evenly around a country's institutions. This is in fact roughly what happens in France and Italy, and while I can't say that this is a direct consequence of a deliberate choice of metrics, it's certainly consistent with one.

There is nothing so useless as doing efficiently that which should not be done at all.

Peter Drucker

Like many things, this is actually about economics: not in the narrow sense of money, but in the broad sense of how people respond to incentives. Rather than ask how well we do under REF metrics, we should perhaps ask what behaviours the REF metrics incentivise in academics and institutions, and whether these are the behaviours that are of best overall social benefit to the country. Certainly scientific excellence and impact are of vital importance: but one can also argue that the broadest possible exposure to excellence, and the motivational effect this can have on a wider class of students, is of equal or greater importance to the success and competitiveness of the country. The narrow, research-only metrics may simply be too narrow, and it'd be a mistake to optimise against them if in doing so -- and achieving "a good result" according to them -- we destroyed something of greater value that simply wasn't being measured.

The changing student computing experience

I'm jealous of my students in many ways, for the things they'll get to build and experience. But they should be jealous of me, too.

It was graduation week last week, which is always a great bash. The students enjoy it, obviously -- but it's mainly an event for the parents, many of whom are seeing their first child, or even the first child in their extended family, succeed at university. It's also the time of year when we conduct post-mortem analyses of what and how we've taught throughout the year, and how we can change it for the better in the next session.

One of the modules I teach is for second-year undergraduates on data structures, algorithms, applied complexity and other really basic topics. It's a subject that's in serious danger of being as dry as ditchwater: it's also extremely important, not only because of its applications across computer science but also because it's one of the first experiences the students have of the subject, so it's important that it conveys the opportunities and excitement of computer science so they don't accidentally nuke their lives by going off to do physics or maths instead.

One of the aspects of making a subject engaging is appealing to the students' backgrounds and future interests -- which of course are rather different to the ones I had when I was in their position 25 years ago. (I initially wrote "quarter of a century ago," which sounds way longer somehow.) So what are the experiences and aspirations of our current students?

Many get their first experience of programming computers with us, but they're all experienced computer users who've been exposed to computers and the internet their entire lives. They're the first generation for whom this is true, and I don't think we've really assimilated what it means. They're completely at home, for example, in looking up background material on Wikipedia, or surfing for alternative sources of lectures and tutoring on YouTube and other, more specialised sites. They can do this while simultaneously writing email, using Facebook and replying to instant messages in a way that most older people can't. They're used to sharing large parts of themselves with their friends and with the world, and it's a world in which popularity can sometimes equate with expertise in unexpected ways. It's hard to argue that this diversity of experience is a bad thing, and I completely disagree with those who have done so: more information on more topics collected from more people can only be positive in terms of exposure to ideas. For an academic, though, this means that we have to change how and what we teach: the facts are readily available, but the interpretation and criticism of those facts, and the balancing of issues in complex systems, are something that still seem to benefit from a lecture or tutorial setting.

Many of the students have also built web sites, of course -- some very complex ones. Put another way, they've built distributed information systems by 17, and in doing so have unknowingly made use of techniques that were at the cutting edge of research less than 17 years ago. They expect sites to be slick, to have decent graphics and navigation, to be linked into social media, and so forth. They've seen the speed at which new ideas can be assimilated and deployed, and the value that information gains when it's linked, tagged and commented upon by a crowd of people. Moreover they expect this to continue: none of them expects the web to fragment into isolated "gated communities" (which is a fear amongst some commentators), or to become anything other than more and more usable and connected with time.

I'm jealous of my students, for the web that they'll have and the web that many of them will help to build. But before I get too envious, it's as well to point out that they should be jealous of me too: of the experience my peers and I had of computers. It's not been without impact.

One of the things that surprises me among some students is that they find it hard to imagine ever building some of the underpinning software that they use. They can't really imagine building an operating system, for example, even though they know intellectually that Linux was built, and is maintained, by a web-based collaboration. They can't imagine building the compilers, web servers and other base technology -- even though they're happy to use and build upon them in ways that really surprise me.

I suspect the reasons for this are actually embedded into their experience. All the computers, web sites and other services they've used have always had a certain degree of completeness about them. That's not to say they were any good necessarily, but they were at least functional and usable to some degree, and targeted in general at a consumer population who expected these degrees of functionality and usability (and more). This is radically different to the experience we had of unpacking a ZX-80, Acorn Atom or some other 1980s-vintage home computer, which didn't really do anything -- unless we made it do it ourselves. These machines were largely blank slates as far as their functions were concerned, and you had to become a programmer to make them worth buying. Current games consoles criminalise these same activities: you need permission to program them.

It's not just a commercial change. A modern system is immensely complex and involves a whole stack of software just to make it function. It's hard to imagine that you can actually take control all the way down. In fact it's worse than that: it's hard to see why you'd want to, given that you'd have to re-invent so much to get back to the level of functionality you expect to have in your devices. As with programming languages, the level of completeness in modern systems is a severe deterrent to envisioning them, and re-building them, in ways other than they are.

Innovation, for our students, is something that happens on top of a large stack of previous innovation that's just accepted and left untouched. And this barrier -- as much mental as technological -- is the key difference between their experience of computing and mine. I grew up with computers that could be -- and indeed had to be -- understood from the bare metal up. One could rebuild all the software in a way that'd be immensely more challenging now, given the level of function we've come to expect.

This is far more of a difference than simply an additional couple of decades of experience with technology and research: it sits at the heart of where the next generation will see the value of their efforts, and of where they can change the world: in services that sit at the top of the value chain, rather than in the plumbing down at its base. Once we understand that, it becomes clearer what and how we should teach the skills they'll need in order best to apply themselves to the challenges they'll select as worth their time. And I look forward to seeing what those efforts produce.

Congratulations to the graduating class of 2011. Have great lives, and build great things.

Why we have code

Coding is an under-rated skill, even for non-programmers.

Computer science undergraduates spend a lot of time learning to program. While one can argue convincingly that computer science is about more than programming, it's without doubt a central pillar of the craft: one can't reasonably claim to be a computer scientist without demonstrating a strong ability to work with code. (What this says about the many senior computer science academics who can no longer program effectively is another story.) The reason is that it helps one to think about process, and some of the best illustrations of that come from teaching.

Firstly, why is code important? One can argue that both programming languages and the discipline of code itself are two of the main contributions computer science has made to knowledge. (To this list I would add the fine structuring of big data and the improved understanding of human modes of interaction -- the former is about programming, the latter an area in which the programming structures are still very weak.) They're so important because they force an understanding of a process at its most basic level.

When you write computer software you're effectively explaining a process to a computer in perfect detail. You often get a choice about the level of abstraction you choose. You can exploit the low-level details of the machine using assembler or C, or use the power of the machine to handle the low-level details and write in Haskell, Perl, or some other high-level language. But this doesn't alter the need to express precisely all that the machine needs to know to complete the task at hand.

But that's not all. Most software is intended to be used by someone other than the programmer, and generally will be written or maintained in part by more than one person -- either directly as part of the programming team or indirectly through the use of third-party compilers and libraries. This implies that, as well as explaining a purpose to the computer, the code also has to explain a purpose to other programmers.

So code, and programming languages more generally, are about communication -- from humans to machines, and to other humans. More importantly, code is the communication of process reduced to its purest form: there is no clearer way to describe the way a process works than to read well-written, properly-abstracted code. I sometimes think (rather tenuously, I admit) this is an unexpected consequence of the halting problem, which essentially says that the simplest (and generally only) way to decide what a program does is to run it. The simplest way to understand a process is to express it as close to executable form as possible.

You think you know when you learn, are more sure when you can write, even more when you can teach, but certain only when you can program.

Alan Perlis

There are caveats here, of course, the most important of which is that the code be well-written and properly abstracted: it needs to separate out the details so that there's a clear process description that calls into -- but is separate from -- the details of exactly what each stage of the process does. Code that doesn't do this, for whatever reason, obfuscates rather than explains. A good programming education will aim to impart this skill of separation of concerns, and moreover will do so in a way that's independent of the language being used.
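A minimal sketch of that separation, in Python with invented names: the top-level function reads as a plain process description, while the details it relies on live in helpers it calls into. The task (cleaning and counting records) is purely illustrative.

```python
def normalise(record):
    # Detail: tidy a single raw record's keys.
    return {k.strip().lower(): v for k, v in record.items()}

def is_valid(record):
    # Detail: a record must at least carry a non-empty name.
    return bool(record.get("name"))

def summarise(records):
    # Detail: reduce the cleaned records to a simple count.
    return {"count": len(records)}

def process(raw_records):
    # The process itself, readable at a glance:
    # normalise, then filter, then summarise.
    cleaned = [normalise(r) for r in raw_records]
    valid = [r for r in cleaned if is_valid(r)]
    return summarise(valid)
```

The point is that `process` explains the process even to a reader who never opens the helpers; code that inlined all three steps into one function would obfuscate the same logic.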

Once you adopt this perspective, certain things that are otherwise slightly confusing become clear. Why do programmers always find documentation so awful? Because the code is a clearer explanation of what's going on: a fundamentally better description of process than natural language.

This comes through clearly when marking student assessments and exams. When faced with a question of the form "explain this algorithm", some students try to explain it in words without reference to code, because they think explanation requires text. As indeed it does, but a better approach is to sketch the algorithm as code or pseudo-code and then explain with reference to that code -- because the code is the clearest description it's possible to have, and any explanation is just clearing up the details.
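As an illustration of that approach (binary search chosen arbitrarily as the example algorithm), here is the kind of Python sketch a good answer would lead with, leaving the prose only to annotate it:

```python
def binary_search(xs, target):
    """Return the index of target in the sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # probe the middle of the live region
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1          # target can only lie in the upper half
        else:
            hi = mid - 1          # ...or the lower half
    return -1
```

The written explanation then reduces to a few observations: the invariant that the target, if present, lies between `lo` and `hi`, and that the live region halves on every iteration, giving logarithmically many comparisons.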

Some of the other consequences of the discipline of programming are slightly more surprising. Every few years some computer science academic will look at the messy, unstructured, ill-defined rules that govern the processes of a university -- especially those around module choices and student assessment -- and decide that they will be immensely improved by being written in Haskell/Prolog/Perl/whatever. Often they'll actually go to the trouble of writing some or all of the rules in their language of choice, discover inconsistencies and ambiguities, and proclaim that the rules need to be re-written. It never works out, not least because the typical university administrator has not the slightest clue what's being proposed or why, but also because the process always highlights grey areas and boundary cases that can't be codified. This could be seen as a failure, but can also be regarded as a success: coding successfully distinguishes between those parts of an organisation that are structured and those parts that require human judgement, and by doing so makes clear the limits of individual intervention and authority in the processes.
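A hypothetical Python sketch of that exercise -- the progression rule and its thresholds are invented for illustration, not any real university's regulations -- shows how codification forces a grey area into the open:

```python
def may_progress(credits_passed, mean_grade):
    """Decide progression under an invented two-clause rule."""
    if credits_passed >= 120:
        return True                  # clear case: enough credits passed
    if credits_passed >= 100 and mean_grade >= 13.5:
        return True                  # compensation clause
    if credits_passed >= 100:
        # The written rules send such cases to an exam board "for
        # consideration" -- exactly the human judgement that resists coding.
        raise ValueError("boundary case: needs exam-board judgement")
    return False
```

Writing even this toy rule down makes the uncodifiable branch explicit: the code can decide the structured cases, and can do no more than flag the ones that genuinely require a committee.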

The important point is that, by thinking about a non-programming problem within a programming idiom, you clarify and simplify the problem and deepen your understanding of it.

So programming has an impact not only on computers, but on everything to which one can bring a description of process; or, put another way, once you can describe processes easily and precisely you're free to spend more time on the motivations and cultural factors that surround those processes without them dominating your thinking. Programmers think differently to other people, and often in a good way that should be encouraged and explored.