Over the weekend there was a fascinating exchange of viewpoints in the Wall Street Journal taking opposing sides of the argument as to what effect the internet is having on us: is it making us smarter and better-informed, or more shallow and undisciplined compared to our book-reading days? Perhaps more importantly, is there anything we can do to leverage the internet to promote smartness more effectively?
Clay Shirky’s article takes the optimistic side, arguing that amid the spam and the videos of people tripping over chairs are the beginnings of a new media culture, one attuned to writing for a non-linear, multimedia experience. Against this, Nicholas Carr argues that the internet works against the “stillness” that books encourage, and that the mental discipline that attends reading is of value in itself.
This is a critical (indeed, perhaps the critical) cultural argument over the “content revolution” that we’re currently in the middle of. As a computer scientist with a deep interest in writing and literature, I find it fascinating that computers are at the forefront of a societal change, just as they’re at the forefront of scientific change. I think we can critique the points made by both authors, and also use them to move on from what risks being a sterile discussion to consider what computers have to offer in terms of literature and writing.
It’s perhaps unsurprising that overall I take Shirky’s position: the internet is mind-expanding, and the mind-blowing volume of mediocrity created in the process doesn’t alter that. It’s undoubtedly true that there is an amazing amount of trivial and/or self-serving junk on the web (probably including most of what I write), as well as material that’s offensive and/or dangerous. The same is true for print.
Carr’s argument seems to me to centre on a critique of hyperlinking rather than of the web per se, and it’s hard to argue with the suggestion that rapidly clicking from page to page isn’t conducive to deep study or critical understanding. It’s also clearly true that this sort of frenetic behaviour is at least facilitated, if not encouraged, by pages with lots of links such as those found on Wikipedia and many news sites. There’s a tendency to encounter something one doesn’t know and immediately look it up — because doing so is so easy — only to do the same thing with this second page, and so on. On the other hand, few things are less rewarding than continuing to read material for which one doesn’t have the background, whose arguments will never make sense and whose content will never cohere as a result. Hyperlinking makes such context readily available, alongside a potentially destabilising loss of focus.
It’s important to realise that such distraction isn’t inevitable, though. When reading Carr’s article I was reminded of a comment by Esther Dyson (in another context) to the effect that the internet is simply an amplifier that accentuates what one would do anyway. Deep thinkers can use hyperlinking to find additional information, simplify their learning and generally enrich their thinking; conversely, shallow thinkers can skim more material with less depth. I think there’s an unmistakable whiff of cultural elitism in the notion that book-reading is self-evidently profound and web-page-reading necessarily superficial.
It’s tempting to suggest that books better reflect and support a shared cultural experience, a value system that’s broadly shared and considered, while the internet fosters fragmentation, ill-considered and narrowly-shared sub-cultures. I suspect this suggestion is broadly true, but not in a naive cause-and-effect way: books cost money to print and distribute, which tends to throttle the diversity of expression they represent. In other words, there’s a shared cultural space because that’s all people were offered. Both the British government and the Catholic church maintained a list of censored and banned books that effectively limited the space of public discourse through books. Both systems survived until recently: the Index Librorum Prohibitorum was only abolished in 1966 (and hung around for longer than that in Ireland), and the British government domestically banned Spycatcher in the 1980s.
What may be more significant than hyperlinking, though, is closed hyperlinking and closed platforms in general. This is a danger that several writers have alluded to in analysing the iPad. The notion of curated computing — where users live in a walled garden whose contents are entirely pre-approved (and sometimes post-retracted, too) — seems to me to be more conducive to shallow thinking. Whatever else the open internet provides, it provides an informational and discursive space that’s largely unconstrained, at least in the democratic world. One can only read deeply when there is deep material to read, and when one can find the background, context and dissenting material against which to check one’s reading. To use Dyson’s analogy again, it’d be easy to amplify the tendency of people to look for material that agrees with their pre-existing opinions (confirmation bias) and so shape the public discussion. There might be broad cultural agreement that Mein Kampf and its recent derivatives should be excluded in the interests of public safety, but that’s a powerful decision to give to someone — especially when digital technology gives them the power to enforce it, both into the future and retroactively.
(As an historical aside, in the early days of the web a technology called content selection was developed, intended to label pages with a machine-readable tag of their content to enable parental control amongst other things. There was even a standard developed, PICS, to attach labels to pages. The question then arose as to who should issue the labels. If memory serves me correctly, a consortium of southern-US Christian churches lobbied W3C to be nominated as the sole label-provider. It’s fair to say this would have changed the internet forever….)
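For the curious, a PICS label was just a structured string attached to a page, typically via a meta element in its head. The sketch below is only illustrative: the rating-service URL and the category names and values are invented for the example, since real services defined their own vocabularies.

```html
<!-- A hedged sketch of roughly how a PICS-1.1 label was embedded in a page.
     The service URL and the rating categories here are made up for
     illustration; real rating services published their own schemes. -->
<meta http-equiv="PICS-Label"
      content='(PICS-1.1 "http://ratings.example.org/v1.html"
                labels ratings (violence 0 language 1 nudity 0))'>
```

The point of the design was that the label is machine-readable, so a browser or proxy could filter pages automatically — which is exactly why control over who issues the labels mattered so much.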
But much of this discussion focuses on the relationship between the current internet and books. I suspect it’s much more interesting to consider what post-book media will look like, and then to ask what might make such media more conducive to “smart study”. There are shallow and simple changes one might make. Allowing hyperlinks that bring up definitions of terms in-line or in pop-ups (as allowed by HyTime, incidentally, a far older hypertext model than the web) would reduce de-contextualisation and attention fragmentation. I find tools like Read It Later invaluable, allowing me to quickly mark pages for later reading rather than relying on memory, with its inevitable cognitive load, especially on mobile devices. Annotating pages client-side would be another addition, on the page rather than at a separate site. More broadly, multimedia and linking invite a whole new style of book. The iPad has seen several “concept” projects for radically hyperlinked multimedia works, and projects like Sophie are also looking at the readability of hypermedia. Unsurprisingly a lot of the best work is going on within the Squeak community, which has been looking at these issues for years: it has a rich history in computer science, albeit somewhat outwith the mainstream.
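The in-line definition idea needn’t even wait for a new hypertext model: plain HTML can approximate it today. A minimal sketch (the wording of the definitions is mine), showing a hover tooltip and an expansion that opens in place, so the reader never leaves the page:

```html
<!-- Two cheap approximations of in-line definitions in plain HTML:
     a hover tooltip via the title attribute, and an expansion that opens
     in place via <details>, avoiding a navigation away from the page. -->
<p>
  Pages built as
  <abbr title="text containing links the reader can follow immediately">
    hypertext</abbr>
  needn’t scatter attention:
  <details style="display: inline">
    <summary style="display: inline">what is HyTime?</summary>
    an SGML-based hypermedia standard (ISO/IEC 10744) that modelled
    richer link types than the web’s one-way jump
  </details>.
</p>
```

Neither mechanism is a substitute for a real hypertext model with typed links, but both show that keeping context on the page is a presentation choice, not a limitation of the medium.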
I doubt the internet can ever make someone smarter, any more than it can make someone careful. What it can do is facilitate new ways of thinking about how to collect, present, organise and interact with information in a dynamic and semantically directed fashion. This is definitely an agenda worth following, and it’s great to see discussions on new media taking place in the general wide-circulation press.