
Posts about university (old posts, page 2)

The shifting balance of university power

The shifts in economic power are being mirrored in the university sector, both in education and research. It's happened before.

The global financial crisis has exposed a lot of unfunded holes in different parts of the economy, and the resulting cuts and re-prioritisations are affecting the ways in which a lot of organisations operate. Universities find themselves uncharacteristically in the front line of this process.

In terms of teaching, the sudden enormous increase in fees in England is currently being resisted -- futilely, I think -- in Scotland. The shifting of the burden onto students will have a far-reaching impact because, as is not often realised, the increase in fees is being coupled with a projected decrease, applied differentially across subjects, in core State funding for teaching. This means that the huge influx of money from fees will be largely offset by a decrease in other funding: the universities will be no better off.

In research, there is already a shift in the amounts of money available from funding agencies as well as in the ways that money is distributed. Crudely put, in future we'll see a smaller number of larger grants awarded to larger institutions that already hold significant funding from these same sources: the funding bodies will follow their own money to reduce risk.

We have no idea what impact these changes will have on the quality of education, research, innovation or scholarship, or on the rankings that (very imperfectly) track these features. What we do know is that they're all intertwined, and that major shifts in the global balance of quality in education and research are not just possible, but likely.

People looking at the university rankings tend to think that they reflect a long-standing, established assessment that changes only peripherally as "new" universities improve. This is actually very far from being the case. To see why, we need to consider the history of universities and their evolving quality relative to each other over the past six to seven hundred years. To simplify I'll focus on what we might regard as the modern, Western model of universities and ignore the university-like institutions in the Islamic caliphate, the House of Wisdom and the like -- although I really shouldn't, and it'd be good to see how they fit into the story.

The designation of "best university in the world," whatever that may mean, has shifted several times. Initially it went to the University of Bologna as the first modern, Western university. But it soon shifted, in the twelfth century, to the University of Paris, largely through the controversial fame of Peter Abelard -- an uncharacteristically scandal-prone academic. Over the course of the next centuries the centre of the academic world moved again, to Oxford and Cambridge. So far, so familiar -- except that the dynamism that pushed these institutions forward didn't sustain itself. By the late nineteenth century the centre of research and teaching in physics and mathematics had shifted to Germany -- to the extent that a research career almost required a stint at a German institution. Oxford and Cambridge were largely reduced to teaching the sons of rich men. That's not to say that the Cavendish Laboratory and the like weren't doing excellent work: it's simply to recognise that Germany was "where it's at" for the ambitious and talented academic.

When people think of Einstein, they mostly know that he spent the latter part of his career at the Institute for Advanced Study in Princeton. What isn't often appreciated is that this wasn't the pinnacle of his career -- which came, in fact, when he was awarded a chair at the University of Berlin. In the early twentieth century the US Ivy League was doing what Oxford and Cambridge had been doing fifty years earlier: acting as bastions of privilege. It took the Second World War, the Cold War and the enormous improvements in funding, governance and access to elevate the American institutions to their current levels of excellence.

All this is to simplify enormously, of course, but the location of the pre-eminent universities has shifted far more, and far faster, than is generally appreciated: Italy, France, England, Germany, the US. It isn't in any sense fixed.

Many people would expect China to be next. It's not so long ago that Chinese universities were starved of investment and talent, as the best minds came to the West. This is still pretty much the case, but probably won't be for much longer. There are now some extremely impressive universities in China, both entirely indigenous and joint ventures with foreign institutions. (I'm involved in a project with Xi'an Jiaotong Liverpool University, a joint venture between China and the UK.) It's only a matter of time before some of these institutions are recognised as being world-class.

Whether these institutions become paramount or not depends on a lot of factors: funding, obviously -- of which there is currently a glut -- for facilities, faculty and bursaries. But there's more to it than that. They have to become places where people want to live, can feel valued, and can rise to the top on their merits. You will only attract the best people if those people know their careers are open-ended and can grow as they do.

The pitfalls include appealing solely to a small and privileged demographic, one selected by its ability to pay and to act as patron to otherwise weak and under-funded institutions, and focusing on pre-selected areas to the exclusion of others. Both of these are actually symptoms of the same problem: a desire to "pick winners," avoid risk, and score well against metrics that can never capture the subtleties involved in building world-class institutions of learning.

Hiring academics

Hiring anyone is always a risk, and hiring a new academic especially so given the nature of our contracts. So what's the best way to approach it?

What's brought this to mind is a recent discussion about the need to -- or indeed the wisdom of -- interviewing new staff. The argument against interviewing is actually not all that uncommon. People tend to like and identify with -- and therefore hire -- people like themselves, and this can harm the diversity of a School when everyone shares a similar mindset. Another version of the same argument (one applied at a place I used to work) says that you appoint the person who interviews best on the day, having shortlisted the five or so best CVs submitted regardless of research area.

I can't say I buy either version. In fact I would go the other way: you need a recruitment strategy that decides what strengths you want in your School -- whether that's new skills or building up existing areas -- and then interviews candidates of the right academic quality and fit, with a view to deciding who will mesh with the School's culture and intentions.

My reasons for this are fairly simple. The best person academically isn't necessarily the best appointment. The argument that you employ the best researchers, in line with the need to generate as much world-class research as possible, is belied by the need to also provide great teaching (to attract the best students) and to engage with the impact that research has in the wider world. The idea that one would employ the best person on the day regardless of area strikes me as a non-strategy that would lead to fragmentation of expertise and an inability to collaborate internally. (It actually does the academics themselves no favours if they end up hired into a School with no-one to collaborate with.) Interviewing weeds out the unsociable (and indeed the asocial) and lets one assess people on their personal as well as academic qualities. It's important to remember that academics typically have some form of tenure -- or at the very least are hard to fire -- and so one shouldn't underestimate the damage that hiring a twisted nay-sayer can do.

In case this isn't convincing, let's look at it another way. Suppose we recruit a new full professor. Suppose that they're about 45, and so have around 20 years to retirement. Assume further that they stay for that entire time and don't retire early or leave for other reasons. The average pre-tax salary for a full professor in the UK is around £70,000. So the direct salary cost of the appointment is of the order of £1,500,000. After that, the individual will retire and draw (for example) a third of salary for another 15 years. (Although this is paid for from an externally-administered pension fund, we can assume for our purposes that the costs of this fund come at least partially from university funds.) So the direct cost of that appointment doesn't leave much change out of £1,800,000.
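If you prefer the arithmetic spelled out, here's a minimal back-of-the-envelope sketch of that calculation in Python. The salary, length of service and pension figures are just the illustrative assumptions from the paragraph above, not real data.

```python
# Rough direct cost of a professorial appointment, using the
# illustrative assumptions from the text above (not real figures).

annual_salary = 70_000        # average pre-tax salary of a UK full professor
years_in_post = 20            # appointed at ~45, retiring at ~65
pension_fraction = 1 / 3      # fraction of salary drawn in retirement
years_of_pension = 15         # years of retirement assumed

salary_cost = annual_salary * years_in_post                          # ~£1,400,000
pension_cost = annual_salary * pension_fraction * years_of_pension   # ~£350,000
total_direct_cost = salary_cost + pension_cost                       # ~£1,750,000

print(f"Direct cost of the appointment: about £{total_direct_cost:,.0f}")
```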

(And that's just the direct costs, of course. There are also the opportunity costs of employing the wrong person, in terms of grants not won, students not motivated, reputations damaged and so forth. I have no idea how to calculate these, but I'm willing to believe they're of a similar order to the direct costs.)

So in appointing this individual, the interview panel is making a decision whose value to the university is of the order of £2,000,000, and probably substantially more. How much time and care would you take before you spent that much?

My experience has been mixed. A couple of places I interviewed had candidates in front of the interview committee (of four people, in one case) for a fifteen-minute presentation and a one-hour interview: one hundred minutes of face time to make what was quite literally a million-pound decision. By contrast I was in St Andrews for three days and met what felt like half the university including staff, teaching fellows, students, postdocs, administrators and others.

I think the idea that a CV is all that matters is based on the fallacy that the future will necessarily be like the past. I'm a contrarian in these things: if I interview someone for a job I don't care what they've done in the past, except to the extent that it's a guide to what they're going to do in the future. What you're trying to decide in making a hiring decision is someone's future value. Taken to its logical conclusion, what you ideally want to do is to identify, early, the people who are going to become professors -- and hire them now. What you shouldn't do is consider only people with great pasts, because you get little or no value from that if it isn't carried forward. You want to catch the good people early, and you then get all the value of the work they add to their CVs going forward. You also get to benefit from the motivational power of promotion, which will spur many people to prove themselves.

Clearly there's a degree of unacceptable risk inherent in this, which we basically mitigate by employing people as junior academics. But this only works for the young guns if the institution's internal promotion scheme is efficient and rewards people quickly for their successes. Otherwise the young guns will look elsewhere, for an institution playing the long game and willing to take a chance on them -- and they'll do so with a better CV that you've helped them build by hiring them in the first place. In the current climate institutions can't afford this, so optimising hiring and promotion is becoming increasingly critical to a university's continued success.

Who invented meringue?

What the invention of complicated foods tells us about discovery, innovation, and university research funding.

Over lunch earlier this week we got talking about how different foods get discovered -- or invented, whichever's the most appropriate model. The point of the discussion was how unlikely a lot of foods are to have actually been created in the first place.

The lineage of some quite complicated foods is fairly easy to discern, of course. Bread: leave out some wet flour overnight and watch it rise to form sourdough. Do the same for malt and you get beer (actually the kind of beer that in Flanders is called lambic). Put milk into a barrel, load it onto the back of a donkey and transport it to the next town, and you'll have naturally-churned butter. It's fairly easy to see how someone with an interest in food would refine the technique and diversify it, once they knew that the basic operation worked in some way and to some degree.

But for other foods, it's exactly this initial step that's so problematic.

I think the best example is meringue. Consider the steps you need to go through to discover that meringues exist. First, you have to separate an egg -- which is obvious now, but not so obvious if you don't know that there's a point to it. Then you need to beat the white for a long time, in just the right way to introduce air into it. If you get this wrong, or don't do it for long enough, or do it too enthusiastically (or not enthusiastically enough), you just get slightly whiter egg white: it's only if you do it properly that you get the phase change you need. Of course you're probably doing this with a wholly inappropriate instrument -- like a spoon -- rather than a fork or a balloon whisk (which you don't have, because nobody yet knows there are things that need air beating into them). Then you need to determine, counter-intuitively, that making the egg white heavier (with sugar) will improve the final result when cooked. Then you have to work out that cooking this liquid -- which has to be a process of drying rather than cooking -- is actually quite a good idea despite appearances.

It's hard enough to make a decent meringue now we know they exist: I find it hard to imagine how one would do it if one didn't even know they existed, and furthermore didn't know that beating egg whites in a particular way will generate the phase change from liquid to foam. (Or even know that there are things called "phase changes" at all for that matter.)

Thinking a little harder, I actually can imagine how meringues got invented. In the Middle Ages a lot of very rich aristocrats competed with their peers either by knocking each other off horses at a joust or by exhibiting ever-more-complex dishes at feasts. These dishes -- called subtleties -- were intended to demonstrate the artistry of the chef and hence the wealth and taste of his patron, the aristocrat. Pies filled with birds, exact scale models of castles, working water-wheels made out of pastry, that kind of thing. In order to do this sort of thing you need both a high degree of cooking skill and a lot of unusual food-based materials to work in. You can find these as part of your normal cooking, but it's probably also worth some experimentation to find new and unusual effects that will advance this calorific arms race a little in your favour.

So maybe meringue was invented by some medieval cook just doing random things with foodstuffs to see what happens. The time spent on things that don't work -- leaving pork fat outside to see if it ferments into vodka, perhaps? -- is amortised by the discovery of something genuinely useful in making state-of-the-art food. Contrary to popular belief the Middle Ages were a time of enormous technological advance, and it's easy to imagine this happening in food too.

So food evolves under the combined effects of random chance operations shaped by survival pressures. Which is exactly what happens in biology. A new combination gets tried by chance, without any anticipation of any particular result, and the combinations that happen to lead to decent outcomes get maintained. At that point the biological analogy breaks down somewhat, because the decent outcomes are then subjected to teleological refinement by intelligent beings -- cooks -- with a goal in mind. It's no longer random. But the initial undirected exploration is absolutely essential to the process of discovery.
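For those who like to see such processes written down, here's a tiny sketch of the two phases described above: undirected random exploration that merely flags anything promising, followed by goal-directed refinement once a promising find exists. Everything in it -- the "recipe space", the quality function and the thresholds -- is invented purely for illustration.

```python
import random

# A "recipe" is just a vector of ingredient quantities; quality() is a
# stand-in for "does this happen to turn out well?". Both are invented
# purely to illustrate the two-phase process described in the text.
def quality(recipe):
    ideal = [0.7, 0.2, 0.9]   # unknown to the explorer
    return -sum((r - t) ** 2 for r, t in zip(recipe, ideal))

random.seed(42)

# Phase 1: undirected exploration -- try random combinations with no goal
# in mind, keeping anything that shows a nugget of promise.
promising = [
    recipe
    for recipe in ([random.random() for _ in range(3)] for _ in range(1000))
    if quality(recipe) > -0.2    # arbitrary "promise" threshold
]

# Phase 2: teleological refinement -- once we know there's something there,
# a cook (or researcher) improves it deliberately, step by step.
best = max(promising, key=quality)
for _ in range(1000):
    tweak = [x + random.gauss(0, 0.01) for x in best]
    if quality(tweak) > quality(best):
        best = tweak

print("refined recipe:", [round(x, 2) for x in best])
```

The point of the sketch is simply that the second, goal-directed loop can only begin once the first, goalless phase has stumbled on something worth refining.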

Bizarrely enough, this tells us something more general about the processes of discovery and innovation. They can't be goal-directed: or, more precisely, they can't be goal-directed until we've established that there's a nugget of promise in a particular technique, and that initial discovery will only be performed because of someone's curiosity and desire to solve a larger problem. "Blue-skies" research is the starting point, and you by definition can't know -- or ever expect to know -- what benefits it might confer. You have to kiss an awful lot of frogs to have a reasonable expectation of finding a prince, and blue-skies, curiosity-driven research is the process of identifying these proto-princes amongst the horde of equally unattractive alternatives. But someone's got to do it.

The (new) idea of a (21st century) university

What should the university of the 21st century look like? What are we preparing our students for, and how? And how should we decide what is the appropriate vision for modern universities?

There's a tendency to think of universities as static organisations whose actions and traditions remain fixed -- and looking at some ceremonial events it's easy to see where that idea might come from. Looking in from the outside, one might imagine that some of the teaching of "old material" (such as my teaching various kinds of sorting algorithms) reflects a lack of agility in responding to the real world: who needs to know it any more? Why not just focus on the modern stuff?

This view is largely mistaken. The point of teaching a core of material is to show how subjects evolve, and to get students used to thinking in terms of the core concepts rather than in terms of ephemera that'll soon pass on. Modern stuff doesn't stay modern, and that is a feature of the study of history or geography as much as of computer science or physics. Universities by their nature have a ringside view of the changes that will affect the world in the future, and also contain more than their fair share of maverick "young guns" who want to mix things up. It's natural for academics to be thinking about what future work and social spaces will be like, and to reflect on how best to tweak the student experience with these changes in mind.

What brought this to my mind is Prof Colm Kenny's analysis piece in the Irish Independent this weekend, a response to the recently-published Hunt report ("National strategy for higher education: draft report of the strategy group." 9 August 2010) that tries to set out a vision for Irish 3rd-level (undergraduate degree) education. Although specific to Ireland, the report raises questions for other countries' relationships with their universities too, and so is worth considering broadly.

A casual read of even the executive summary reveals a managerial tone. There's a lot of talk of productivity, broadening access, and governance that ensures that institutions meet performance targets aligned with national priorities. There's very little on encouraging free inquiry, fostering creativity, or equipping students for the 21st century. The report -- and similar noises that have emerged from other quarters, in Ireland and the UK -- feel very ... well ... 20th century.

Life and higher education used to be very easy: you learned your trade, either as an apprentice or at university; you spent forty years practising it, using essentially the techniques you'd been taught plus minor modifications; you retired, had a few years off, and then died. But that's the past: future life, and indeed current life, isn't going to be like that. For a start, it's not clear when, if ever, we'll actually get to retire. Most people won't stay in the same job for their entire careers: indeed, a large percentage of jobs that one could do at the start of a career won't even exist forty years later, just as many of the jobs people will do then haven't been thought of yet. When I did my PhD 20 years ago there was no such thing as a web designer, and music videos were huge projects that no-one without access to a fully-equipped multi-million-pound studio could take on. Many people change career because they want to rather than through the influence of outside forces -- leaving healthcare to take up professional photography, for example.

What implications does this have for higher education? Kenny rightly points out that, while distance education and on-line courses are important, they're examples of mechanism, not of vision. What they have in common, and what drives their attractiveness, is that they lower the barriers to participation in learning. They actually do this in several ways. They allow people to take programmes without re-locating, and potentially concurrently with their existing lives and jobs. They also potentially allow people to "dip in" to programmes rather than take them to their full extent, to mash up elements from different areas, institutions and providers, and to democratise the generation and consumption of learning materials.

Some students, on first coming to university, are culture-shocked by the sudden freedom they encounter. It can take time to work out that universities aren't schools, and academics aren't teachers. In fact they're dual concepts: a school is an institution of teaching, where knowledge is pushed at students in a structured manner; a university is an institution of learning, which exists to help students to find and interact with knowledge. The latter requires one to learn skills that aren't all that important in the former.

The new world of education will require a further set of skills. Lifelong learning is now a reality as people re-train as a matter of course. Even if they stay in the same career, the elements, techniques and technologies applied will change constantly. It's this fact of constant change and constant learning that's core to the skills people will need in the future.

(Ten years or so ago, an eminent though still only middle-aged academic came up to me in the senior common room of the university I taught in at the time and asked me when this "internet thing" was going to finish, so that we could start to understand what it had meant. I tried to explain that the past ten years were only the start of the prologue to what the internet would do to the world, but I remember his acute discomfort at the idea that things would never settle down.)

How does one prepare someone for lifelong learning? Actually many of the skills needed are already being acquired by people who engage with the web intensively. Anyone who reads a wide variety of material needs to be able to sift the wheat from the chaff, to recognise hidden agendas and be conscious of the context in which material is being presented. Similarly, people wanting to learn a new field need to be able to determine what they need to learn, to place it in a sensible order, locate it, and have the self-discipline to be able to stick through the necessary background.

It's probably true, though, that most people can't be successful autodidacts. There's a tendency to skip the hard parts, or the background material that (however essential) might be perceived as old and unnecessary. Universities can provide the road maps to avoid this: the curricula for programmes, the skills training, the support, examination, quality assurance and access to the world's foremost experts in the fields, while being only one possible provider of the material being explored. In other words, they can separate the learning material from the learning process -- two aspects that are currently conflated.

I disagree with Colm Kenny on one point. He believes that only government can provide the necessary vision for the future of higher education. I don't think that's necessary at all. Autonomous universities can set their own visions of the future, and can design processes, execute them, assess them, measure their success and refine their offerings -- all without centralised direction. I would actually go further, and argue that the time spent planning a centralised national strategy would be better spent decentralising control of the university system and fostering a more experimental approach to learning. That's what the world's like now, and academia's no different.

Why I don’t sign NDAs

Non-disclosure agreements (NDAs) are something I try not to sign, for various reasons. For one thing, they give a false sense of security; for another, they interfere with me doing my job.

An increasing amount of academic research is supported in one way or another by companies, either directly or through co-funding agreements. This trend is only likely to increase as State funding becomes less common. As well as this, academics are sometimes approached to do consultancy or development work for companies on a more close-to-market basis. This can be great for all concerned: the companies get access to (hopefully) unbiased expert advice that can perhaps take a longer-term view, while the academics get real-world problems to work on and a reality check on some of their ideas.

Despite all this, there are still some problems. Chief among them, I've found, are non-disclosure agreements (NDAs) whereby one or both sides agree not to disclose proprietary information to third parties. Some NDAs are one-way, so (for example) the university agrees not to disclose company information; many are symmetrical and protect both sides, which is obviously more desirable. (Interestingly it's often university-led NDAs that are asymmetric, implying that the companies have no intellectual input...) Although they sound like they're used for competitive reasons -- and sometimes they are -- it's more likely that they're used to protect information for use in later patent applications, where discussion with a third party might be regarded as "publication" and so endanger the patent. Anyone who works in commercial research has to be sensitive to this, and since I used to run a company myself I'm conscious of protecting others' intellectual property.

So why does this cause a problem? Mainly because of the bluntness of the instrument by which the protection happens.

In my job, I have a lot of discussions that involve half-formed ideas. Many of these seem pretty much to condense out of the ether into several people's heads simultaneously: it's in no way uncommon to have two or three people discuss the same "novel" idea within days of each other. I suppose it's just that there's nothing really new under the sun, and people who share a common technical milieu will often see the same problems and arrive at similar solutions. Often the people involved are students: either undergraduates looking for projects, or PhD students with ideas for theses or papers. These people are my "core constituency" in the sense that my main job is to teach and supervise them in a research-led way.

You can probably see where this is going. Suppose a student comes to me with an idea, and this idea is related in some way to an idea presented by a company with whom I've signed an NDA. What do I do? Refuse to discuss the matter with the student, even though they're fired up about it? Try to re-focus them onto another area, even though it might be a great idea, because I can't discuss it? Send them to someone else, even though I might be the right person to supervise the work?

What I can't do is get involved, because however hard I try, I'll never be able to prove that information covered by the NDA had no effect on what I said or did -- or didn't say or do, for that matter. That leaves both me and the university open to legal action, especially if by some chance the student's work got a high profile and damaged the company, for example by developing an open-source solution to something they were working on.

This is something of a dilemma. I like working with companies; I love working with students; and I don't like the feeling that my freedom to discuss technology and ideas is being legally constrained.

I therefore minimise my exposure to NDAs and confidentiality agreements. It's sometimes unavoidable, for example as part of EU project consortium agreements. But as a general rule I don't think NDAs sit well with academics, and there's too much danger of damaging the general openness of research within a university: too much of a sacrifice just to get a single funded project. I'll happily agree to keep information confidential, but the risks of signing a blunt and broad agreement to that effect are just too great.