

Being a student with open courseware

The move towards massive open on-line courses (MOOCs) has the potential fundamentally to change the practice and experience of university. But what does it feel like to study on a MOOC-delivered course?

No-one in academia can have been indifferent to the influence the internet has had on their teaching and research over the past decade, but MOOCs are a significant development nonetheless. They represent an attempt to "unbundle" education from the traditional providers, reducing barriers to entry and costs to students while facilitating alternative styles of learning and attendance not always supported in existing institutions. It's impossible to generalise about how universities deliver their material, but it's fair to say that the ability to study a bespoke programme at one's own speed and from whatever location is convenient is a significant departure from the usual three-to-four-year resident degree programme. (See, for example, Salman Khan. What college could be like. Communications of the ACM 56(1). January 2013.) Like most other research-led institutions, St Andrews is exploring on-line learning and MOOCs -- in our case as part of the FutureLearn initiative. But planning a radical style-change in teaching doesn't really work if it's purely "from the top", as it's the experience "from the bottom", as a student, that will dictate whether an approach succeeds or fails.

So I signed up to take a MOOC module. We're only in Week 3 (of 12) at present, but it's worth making some early observations.

To avoid embarrassing the providers, I won't identify the lecturer or module. Suffice to say it's being delivered on one of the three major platforms for MOOCs; is delivered by a tenured academic at one of the world's top universities; is a science subject, not a humanity; and is not in computer science or mathematics, to avoid the risk of my subject expertise getting in the way.

The experience of signing up is as simple and straightforward as one would expect from a modern web provider, with each module being given its own home page containing links to all the videoed lecture material, practicals, class tests, and a discussion forum. Clearly a lot of work has gone into preparing all the material: around 30 hours of video (with closed captioning) plus materials for formative and summative assessment. It's a substantial investment of time by all concerned.

Interestingly, this particular module is being run by the originating university as an experiment, to see how best they can deploy on-line learning. The student cohort includes both on-line students and students resident on campus for a "traditional" programme -- with the former outnumbering the latter by two orders of magnitude. Both sets are taking the same classes and assessments, but the resident students can also avail of a lecture each week "on demand" on any subject they feel needs more attention. This could be seen as a safety net in case the on-line experience is less than satisfactory, but also as a feedback loop to refine the module for next time: without this, it'd be harder to work out which parts needed more explanation, as always happens.

The main course material is divided into units with about two hours of video material each, split into convenient sections which can be watched on-line or downloaded. The partitioning is convenient both for students and (I suspect) for the lecturer, who can record them in shorter "takes" to simplify correction and re-recording. The videos are typically either the lecturer as a "talking head" against a plain background, or a slide set onto which the lecturer writes additional notes as he talks. (Sometimes a single video piece includes both styles.) Having handwriting appear works better than might be expected: it keeps the human connection, and is less mechanical and more engaging than purely printed slides would be. Some sections also include in-line multiple-choice questions for students to answer to check understanding during the lecture.

Having a lecture on video has a number of advantages, not least the ability to pause and repeat arbitrarily. I found this useful for note-taking, in that I could listen to a section, pause it to write summary notes, and not miss what the lecturer said next. A more experienced student note-taker might be able to multi-task enough for this not to matter. Note-taking is important for its own sake, of course, as it activates other parts of your brain than listening alone and makes the material much more engaging and likely to stick. Repetition has been less useful, which may just be a reflection of the level of the material so far; a non-native English speaker, though, would probably find it invaluable.

The module assessment includes some labs, which to get the whole class involved had to occur at set times: 9pm UK time was bad enough, but of course Indian and Chinese students had it much worse! However, the lab software didn't work: I think it simply couldn't handle the demand of the student numbers, although it was hard to tell for sure from my outsider's viewpoint. The lecturer ended up dropping the entire lab component and changing the rubric, which seemed substantially easier than the same task would have been in St Andrews, where it would have raised questions about the fairness of assessment. In this case it probably reflects teething troubles, but it does highlight some of the concerns that have been raised about the standard of examination (and consequently the value of certification) from MOOC-delivered courses: does the module actually still make sense without the labs with which it was designed?

The remaining assessment consists of problem sets and an examination. All of these have deadlines, but with a sliding scale of lateness penalty (up to a fortnight) within which students can still submit. This seems reasonable for an on-line module, especially when one has no visibility of, or expectation about, students' personal circumstances. The marking overhead for assessment will be substantial, however, as the problem sets aren't all multiple-choice -- although they do show signs of being designed for large-scale grading. Coming from a university with small classes (fewer than a hundred is the norm, fewer than twenty not exceptional), this would require a significant change in my style of assessment -- although to be fair, no university is set up for class sizes in the tens of thousands that could easily attend a MOOC. This remains a major academic and technical challenge.

The discussion forum was a particularly interesting experience. There are a number of teaching assistants as well as the lecturer monitoring the discussions, answering and guiding questions, and students get credit for their engagement in discussions. Some students were noticeably more active, offering a range of different experiences and levels of expertise. Some clearly know enormously more about the subject matter than has been taught, possibly students of this area trying to supplement their experience with modules from another -- perhaps more prestigious -- university. Others, it rapidly became clear, are actually academics doing exactly what I'm doing: attending the class for interest, but really studying the MOOC and its experience. As I said, MOOCs are not an area any institution is willing to let pass by.

Overall the experience is broadly positive, and I'm not concerned about any of the technological challenges posed: they're simply those of any large, distributed computer system. But I think at least four questions arise about how to do this sort of learning well.

Firstly, it's clear that there's an immense amount of work involved at the back end of any MOOC, in terms of its design and preparation but also in terms of timetabling, support and -- especially -- assessment. Unless one is willing to accept multiple-choice questions or other approaches that can be mechanised, it's hard to see how this can be avoided, and it's at a scale that few if any first-tier institutions have experience with.

Secondly, the feeling of a class community is notably absent, and discussion boards can't make up for face-to-face interaction and live discussion. Especially in a small institution like St Andrews, the student experience -- of a small, closely-knit class in daily close contact with lecturers, demonstrators, and the wider student body -- is a major attraction, one that is highlighted by our teaching fellows and in all our student surveys, and one that would be hard to match in a distance setting. Perhaps this is simply academic snobbery: the cost of such an experience may be prohibitive for a progressively larger fraction of potential students. On the other hand, its pedagogical advantages are unquestionable. If we simply accept the difference, we run the risk of consigning a large group of students to a study experience we know to be sub-standard -- and indeed have designed to be such. Are there ways to provide an equivalent, if not better, experience? This has to be a major topic for future research, and has technological and sociological dimensions.

Thirdly, distance interrupts the feedback loops that usually exist between a lecturer and their material, and between lecturer and class. In a local setting, feedback is generally immediate and clear: you can tell pretty much instantly if a class isn't engaged with or understanding the material, and adjust accordingly. In a distance setting, feedback is indirect and time-delayed, moderated by students' willingness to criticise or complain about their lecturers, which I would expect to complicate the usual process of refining a module. Again, designing a better experience is probably not an insoluble problem, but it is an urgent one.

Finally, there's the issue of coherence in a system in which one can make arbitrary module choices. I'm a great believer in broad curricula and plenty of options, but one often wants to get a degree in something, and this requires structuring, either for professional or academic reasons. Without such structure, certification of what the student has learned, and how well, becomes a lot more difficult. But even beyond this, there's a problem of the embarrassment of choice, and of not knowing what one should study to accomplish whatever learning goal one has. This is a function often provided by advisers, counsellors and other staff within a university, who can help students understand what to study and why they're going to university in the first place. Their function is easily forgotten in distance learning, but seems to me to be vital for successful learning outcomes and eventual student satisfaction. It certainly suggests that the notion of distance learning needs to be broadened significantly beyond the content if it's to offer a real alternative to a "normal" degree: indeed, the content sometimes feels almost irrelevant, given the avalanche of material available on the web outside the MOOC universe. Curation, direction and socialisation feel like far more dominant challenges.

When I decided to conduct this experiment, I thought it would draw me into the technology of MOOCs. It didn't: the technology is well done, but unexceptional. The content is great, but again no more impressive than any number of other sources on the internet. (Admittedly a sample size of one is hardly conclusive….) The sociology, however, is another matter: there's clearly a need to study the ways in which people experience learning through computers, at a distance, and in distributed groups across which one wants to establish a cohort effect, and how the universities can leverage their expertise in this into the on-line world. It'll be interesting to see what happens next.

Hypothetical adventures in your chosen field

A discussion on the different styles of PhD thesis that might be appropriate for different students and research areas.

I was talking to one of my students about the "shape" of their PhD, and how we might present the work they're doing in the final thesis. This particular student is working on the science of complex coupled networks, which is a very new area of network science -- itself a pretty new field. The exact details of her work aren't important: the important point is that it's an area that's so poorly understood that it's hard to ask meaningful detailed questions about it.

A PhD is a training in research: the work done in the course of one doesn't usually change the world, and a fact that's often forgotten is that it's not supposed to. The goal of a PhD is to demonstrate that the candidate can conceive of, plan, conduct, evaluate and communicate a programme of research, sustained over typically three or four years, that adds something to human knowledge. Clearly there are many ways one can demonstrate this, and the question we were asking was: what are the variations? How do we choose which is appropriate?

A typical thesis is a monograph, written specially to describe the research, its process and results. In this it's very different from a scientific paper, which typically focuses on a single result and often elides a lot of the motivation and process. Publications along the way always help, of course: they let a reader (or examiner) know that the work being presented has been reviewed by others in the field, which always gives a certain amount of confidence that it contains worthwhile results. (Few things make a PhD examiner's heart sink more than a thesis with no associated publications. It means you're on your own in terms of assessing every aspect of the work.) The monograph needs a "shape", a skeleton upon which to hang the research, as well as clearly-enumerated contributions to knowledge and an understanding of its own boundaries.

There are however a couple of different styles of skeletal structure.

The hypothesis. This is my favourite, truth be told. The work is encapsulated in a question, and the rest of the thesis seeks to answer this question. The answer might be yes, no, or maybe, all of which can be valid and can result in the award of a degree. The research then proves the hypothesis, disproves it, shows it to be incomplete and narrows it down, or uncovers extra structure that would require substantially more work to explore -- or some combination of these.

There are several reasons to like this style of thesis. Firstly, the hypothesis is usually something concrete within the field of study. (It may seem hopelessly abstract to anyone outside the field, of course.) That makes it easy for an examiner or other reader to relate to: the question makes sense to them, as something one would want to know the answer to. Secondly -- and more importantly -- you have a pretty clear goal and consequently a pretty good notion of when you're finished. A PhD can almost always go on for ever, of course, but having a tight hypothesis means that you have a reference against which to test the work and decide whether it's sufficient to convince the reader of your conclusions.

Against this we may contrast the second type of thesis:

Adventures in your chosen field. In this kind of thesis, there is no hypothesis. Instead, the thesis marks out an area of interest and explores it, with a view to understanding or characterising some or all of it but without any driving question that demands an answer. Unlike the hypothesis, this kind of thesis isn't really "for" anything: it's intended as an exploration, a search for features -- and indeed possibly for hypotheses to be tested in the future. But this style of thesis is perfect for some areas, like mathematics and the more abstract end of computer science, where something new needs to be explored for a while to see if it makes sense and what it has to offer. (It's also found in non-monographical theses, where some universities allow candidates to submit a portfolio of published papers in place of the traditional approach. I must say I always find theses like these quite disjointed and fundamentally dissatisfying: there's a lot to be said for a purpose-written, rounded monograph as a conclusion to a programme of research.)

One would probably have to admit that an "adventures" thesis is a little more challenging to examine, since it doesn't motivate its own intention as clearly as a "hypothetical" thesis. It's going to be harder for the candidate to decide when it's ready, too, since one can always do just one more search to find something else of interest. In that sense, an "adventurer" needs to be more disciplined both in deciding when enough is enough and in deciding what, of the many possible choices, constitute the interesting pathways to explore.

I don't think either style is inherently preferable. It depends entirely on the subject matter, and one could easily fall into the mistake of trying to construct a hypothesis for work that was really an "adventure", or indeed not constructing one for a thesis that really needed one to motivate and structure it. There's a clear difference between an exploratory thesis and a rambling one: the exploration needs structure and coherence.

It's also perhaps worth considering the psychologies and research styles of different students. Some students need concrete goals: to build something, to test something, to prove or disprove a conjecture. These people clearly benefit from doing "hypothetical" theses. Other students have an interest in an area rather than in any specific outcome, and so might prefer an "adventure". There's no point in forcing one into the other, any more than one can get a theoretically-inclined student motivated by coding (and the associated debugging): it's just not what floats their boat, and it's better that they know their own minds, strengths and interests than to spend three or four years engaged in a quest to which they're fundamentally misaligned.

Distinguished Dissertations competition 2013

The British Computer Society's distinguished PhD dissertations competition is now accepting nominations for 2013.

The Conference of Professors and Heads of Computing (CPHC), in conjunction with BCS and the BCS Academy of Computing, annually selects for publication the best British PhD/DPhil dissertations in computer science and publishes the winning dissertation and runner up submission on the BCS website.

Whenever possible, the prize winner receives his/her award at the prestigious BCS Roger Needham Lecture, which takes place annually at the Royal Society in the autumn.

After a rigorous review process involving international experts, the judging panel selects a small number of dissertations that are regarded as exemplary, and one overall winner from the submissions.

Over forty theses have been selected for publication since the scheme began in 1990.

More details, including links to some past winners, can be found on the competition web site. The closing date for nominations is 1 April 2013.

Six 600th anniversary PhD scholarships

As part of St Andrews' 600th anniversary, the University is funding six scholarships in computer science (and all our other Schools, for that matter).

Details are on our web site. There's no special application process, and the studentships are open to all qualified applicants. You'd be well advised to research potential research supervisors and discuss things with them ahead of applying. Most staff -- including me -- will be accepting students under this scheme.

Chief scientific advisor for Ireland

I'm a little perplexed by the direction of the current discussion over Ireland's chief scientific advisor.

The need for scientific advice, as a driver of (and sanity check upon) evidence-based policy formation, has arguably never been greater. Certainly many of the challenges we face living in the 21st century are, directly or indirectly, influenced by a sound understanding of both the science and its limitations. This is why it's attractive for governments to have dedicated, independent scientific advisors.

We need first to be clear what a good chief scientific advisor isn't, and that is an oracle. The chief scientist will presumably be a trained, nationally and internationally respected practitioner who brings to the job an experience in the practice of science but also in its wider analysis and impact. In many respects these are the qualities one looks for in a senior academic -- a full professor -- so it's unsurprising that chief scientists are often current or former (emeritus) professors at leading institutions. They will not be expert on all the areas of science required to address any particular policy question -- indeed, they might never be expert in the details pertaining to any question asked -- but they can act as a gateway to those who are expert, and collate and contrast the possibly conflicting views of different experts who might be consulted in each case. They provide a bridge in this sense between the world of research and the world of policy, and will need to be able to explain clearly the basis and consequences of what the science is saying.

Ireland has recently abandoned having a chief scientist as an independent role, and has instead elected to combine it with the role of Director of Science Foundation Ireland, the main advanced research funding agency. There are several stated reasons for this, most centring on the current resource constraints facing government spending.

I don't think this structure is really consistent with the role of chief scientist, nor with the principles of good governance.

To avoid any misunderstandings, let's be clear that this has nothing to do with the individuals involved: the current or former directors of SFI would be eminently suited to be a chief scientist in their own right. However, having the SFI director fill this additional role ex officio seems not to be best practice. The concern must be that the combined role cannot be independent by its very nature, in that the scientific direction of SFI may wholly or in part be involved in the policy decisions being made as a consequence of advice received. If this occurs, the chief scientist is then recommending actions for which the director must take responsibility, and the perception of a confusion of interest is inevitable if these roles are filled by the same individual. To repeat, the integrity of the office-holders is not the issue, but rather a governance structure that conflates the two roles of execution and oversight.

If resource constraints are really the issue, one might say that the chief scientist does not need to be independently employed to be independent in the appropriate sense. The chief scientific advisory roles in the UK, for example, are typically filled by academics on part-time release from their host institutions. They collate and offer scientific advice, possibly made better by the fact that they remain active researchers and so remain current on both the science and its practice, rather than being entirely re-located into the public service. (The chief scientific advisor for Scotland, for example, remains a computer science researcher at the University of Glasgow in addition to her advisory role.) The risk of confusion is significantly less in this structure, because a single academic in a single institution does not exert executive control or influence over wider funding decisions. Moreover the individual remains employed by (and in the career and pension structure of) their host university and is bought out for part of their time, which reduces the costs significantly. It also means that one can adjust the time commitment as required.

I thought when the post was first created that it was unusual that the Irish chief scientist's post was full-time: requiring the (full-time) SFI director to find time in addition for the (presumably also full-time) duties of chief scientific advisor is expecting a lot.

It is of course vital for the government to be getting good science advice, so it's good that the chief scientist role is being kept in some form. But I think it would be preferable to think about the governance structure a little more, to avoid any possible perception of confusion whether or not such confusion exists in practice.