‘Although text still dominates at the moment, it may yet be superseded by image, audio, or even ideogram as the medium of choice.’
Anxiety or opportunity? The Transliteracies conference, held in June 2005 at the birthplace of the Voice of the Shuttle Humanities web-resource (1), dared to stare directly into the future of reading in the digital age and, to its credit, did not blink once. In this article I will describe the issues affecting reading today, summarise the conference’s deliberations, and outline some ideas for how we might address these opportunities in the UK. My recommendations are also informed by the AHRC’s seminar on E–Publishing in the Arts and Humanities, held in London in May 2005 (2).
The Transliteracies Conference was convened by Alan Liu (3), Professor of English at the University of California at Santa Barbara (UCSB), who founded the Voice of the Shuttle in late 1994, at a time when his research into the potential of the internet ‘spilled over into the creation of courses, a Web–authoring collective, and a how–to manual titled The Ultrabasic Guide to the Internet for Humanities Users at UCSB (4) – all inspired by the fact that, like the self-aware computer at the end of William Gibson’s Neuromancer, I felt rather lonely online.’
Well, he is certainly not lonely now. According to Nielsen//NetRatings’ July 2005 estimate (5), the digital media universe stands at 457,605,522 people. Since the web is still primarily a textual medium, it is reasonable to assume that most of these users can read their own language, and that many can probably read English as well. Far from reducing the number of readers, the web is giving text its biggest boost since Gutenberg. Unfortunately for some, however, this new literacy is not about reading fixed type, but about reading on fluid and varied platforms – blogs, email, hypertext and, soon, digital paper and all kinds of mobile media in buildings, vehicles, and supermarket aisles. Although text still dominates at the moment, it may yet be superseded by image, audio, or even ideogram as the medium of choice. Hence ‘transliteracy’ – literacy across media. Consider, for example, the challenges faced by web developers working in China:
Chinese uses thousands of ideograms. On a computer, they are usually written by typing words phonetically in Roman letters, then using special software to convert them to characters. Making things even more complex, the mainland’s communist leaders simplified many characters after the 1949 revolution, while Taiwan, Hong Kong, Singapore and other societies use the old system. So a search engine must sift through twice as many characters. And Chinese is written without spaces between words, making it hard for a machine to figure out where one word ends and the next begins. Then there are the quirks of a writing system with a vast literary history, a billion modern users and pressure to keep up with technology and international commerce. Baidu.com’s advertising notes that Chinese has 38 ways to say ‘I’ (6).
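The segmentation problem described above is real enough to sketch in a few lines. What follows is a minimal illustration of one classic approach, greedy ‘maximum matching’: scan left to right and always take the longest dictionary word that fits. The tiny dictionary here is invented for the example; real systems use vocabularies of hundreds of thousands of entries and far subtler statistics.

```python
# Toy sketch of greedy maximum-matching word segmentation for Chinese.
# DICTIONARY is a hypothetical handful of words, not a real lexicon.
DICTIONARY = {"中国", "人", "国人", "喜欢", "读书"}

def segment(text, dictionary, max_len=4):
    """Split unspaced text by always taking the longest dictionary match."""
    words = []
    i = 0
    while i < len(text):
        # Try the longest candidate first, shrinking down to one character.
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in dictionary:
                words.append(candidate)
                i += length
                break
    return words

print(segment("中国人喜欢读书", DICTIONARY))
# → ['中国', '人', '喜欢', '读书']
```

Note that the same characters could also be read ‘中 / 国人’; the greedy strategy simply commits to the longest early match, which is exactly the kind of ambiguity that makes Chinese search engines hard to build.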
Western worries about the impact of txt msgs seem petty compared with that. However it is not just a question of format. Thanks to the widespread adoption of virtual learning environments in academia, many colleagues have mastered the art of creating online content, but the real challenge is what you do with it. Much of the discussion at Transliteracies was about exactly this – the growth of social software and its impact on interactivity and collaboration. This issue has not really hit the UK yet but it is on its way, so just as we get used to writing online modules, along come blogs, wikis, Flickr and del.icio.us, all woven together by RSS. What are these exotica? The following notes are adapted from Wikipedia (7), whose anarchic liberalism itself creates yet another challenge:
Wikipedia: A web-based, multi-language, free-content encyclopedia written collaboratively by volunteers and sponsored by the non-profit Wikimedia Foundation. In recent years its philosophy of openness has attracted insults and devotion in equal parts. http://www.wikipedia.org
Blog: A web–based publication consisting primarily of periodic articles (normally in reverse chronological order). May be individual or collaborative. Example: Romantic Circles http://www.rc.umd.edu/blog/. See also vlogs (video blogs).
Wiki: A web application that allows users to add content, as on an Internet forum, but also allows anyone to edit the content. Example: WikiNews http://en.wikinews.org/
Flickr: A digital photo sharing website widely used by bloggers as a photo repository. http://www.flickr.com
del.icio.us: A social software web service for storing and sharing web bookmarks via a method of tagging them with keywords. The result is an organic and ever-growing lacework of shared and interconnected tags. http://del.icio.us/
aggregator: An easy-to-read webpage or application that subscribes to RSS (8) feeds – from blogs, some websites, mailing lists, and other services – and provides alerts when they are updated. Instead of checking bookmarks and mailing lists individually, the user can collect them all on one page. Especially good for news and blogs. Example: http://www.bloglines.com/
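At its core, what an aggregator does with each feed is simple: fetch an RSS document and pull out the items. The sketch below, using only Python’s standard XML library, parses an invented RSS 2.0 fragment and lists each item’s title and link; a real aggregator would add fetching over HTTP, scheduling, and change detection.

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed, hard-coded so the sketch needs no network.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.org/1</link></item>
    <item><title>Second post</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def list_items(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in list_items(SAMPLE_FEED):
    print(title, "->", link)
```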
All of the above combine to create an evolving knowledge network of unprecedented reach. As Dan Gillmor wrote recently in The Financial Times, the networks supported by wikis and blogs are now ‘a key part of a growing, complex global conversation’ (9). They are also increasingly part of the academic conversation because they fit very well into pedagogical traditions of information-sharing. The Transliteracies project has set out to understand this evolving process from the point of view of the reader.
Transliteracies (10) is a collaborative research project involving several campuses of the University of California which was recently awarded a five-year grant to conduct research into the technological, social, and cultural practices of online reading. It arose out of UCSB’s earlier Digital Cultures project, which sponsored the June event (11), the first in a series. The conference set out to examine how people today are reading in digital, networked environments. Speakers included a number of researchers who have worked both in literature and in new media – Alan Liu, Jerome McGann, J. Hillis Miller, Leah Price, Bill Warner and others – plus key West Coast computing innovators including Curtis Wong, Group Manager of Microsoft Next Media Research Group, John Seely Brown, former Chief Scientist of the Xerox Corporation, and Anne Balsamo, Director of the Institute of Multimedia Literacy at the University of Southern California.
The structure of the conference was experimental and reflected the familiar tension of the digital – a tight skeleton of pre-designed elements encircled by the less predictable wrapper of human interaction. A blog was set up before the event where participants could read the documentation and begin the discussion, and the conference itself comprised a series of keynote speeches interspersed with three roundtables. These were presented with a series of pre-designed questions, but all returned to the same one: ‘How can reading online be improved? And what do we have to do to get there?’ (12). There is no space here to list all the topics discussed, but as an indication, Roundtable 1 looked at ‘Reading, Past and Present’, including ‘What is the difference between reading and searching, or browsing?’ and, the burning question, ‘How to take a good online text to bed?’ Roundtable 2 considered ‘Reading and Media’, asking ‘In what terms can we discuss the cultural significance, value, and function of reading in the age of new media and multimedia, a moment when multi–sensory immersive experience seems to be privileged?’ Roundtable 3, ‘Reading as a Social Practice’, asked ‘How do reading practices create or define community? Does this work better or worse online than off?’ and ‘Is reading becoming more (or less) social, collective, or collaborative than in the past?’
The final panel featured an experiment whereby members of the audience with wireless laptops were encouraged to blog together parallel to the live discussion. I participated in this and we made a brave attempt but the multitasking proved rather too much for even this tech–literate audience to cope with! The transcript is online along with a dynamic visualisation of the conversation (13).
‘Do academics have to accept that they can no longer control knowledge?’
Despite the focus on new technologies, the keynote speech which discussants constantly referred back to focussed not on digital media but upon Galileo, Newton, and the Cambridge Maths Tripos examination. The first speaker, Adrian Johns, of the University of Chicago, used such examples to illustrate that reading does not happen in isolation. He emphasised that whatever the era or technology, texts are presented within a context, to and by individuals who influence their reception for good or bad, and they very often need intermediaries to make them comprehensible. We took these notions as the foundation for our deliberations, and they helped us over the first hurdle of ‘traditional reading’ by making it very clear that reading in any medium has never been simple or transparent. Now a Research Professor at UC Irvine after retiring there from full–time teaching and research, J. Hillis Miller confirmed this, remarking that after a lifetime of teaching he still could never be sure that his students’ experience of the text matched his own, no matter which media were used.
In the medieval period ‘good reading’ was collective and public, and silent reading often provoked suspicion, but as reading became more professionalized certain practices which once were common came to be frowned upon – pointing at the page as one reads, reading aloud, annotating margins, or permitting one’s lips to move during reading. Nevertheless, as Leah Price noted, reading has always disrupted the linear via ‘mining’ practices of tables of contents, indexes, and concordances. And in relation to search engines, Alan Liu reminded us that Walter Ong had asked what would happen if we could never look anything up, but what happens now, he asked, when you can look everything up? Anne Balsamo pointed out that reading has traditionally been part of community life, whereby certain texts are read by all, religious texts being the most obvious example, and Bill Warner offered the notion of the commonplace book as an illustration of ways in which families collected and passed on practical knowledge and techniques. These examples draw clear parallels with contemporary social softwares which also aggregate information without the mediation of a formal editing process. According to Jerome McGann, the traditional university model of semi–private reading within restricted peer groups is bound to be affected by new public models which depend on openness, and this issue is already causing a crisis in the academy and beyond. The Los Angeles Times, for example, recently experimented with a wiki page and quickly closed it down due to ‘abuse’ (14).
Unsurprisingly, we kept returning to the problem of editing. As Walter Bender, Director of MIT Media Lab, emphasised, the more texts we produce the more vital is the role of the editor, and to that end, the Media Lab has recently appointed its first Editor-in-Residence. There was also a great deal of interest in annotations – people wanted to be able to annotate documents but they also wanted access to others’ annotations. UCLA Professor of English and Design/Media Arts, N. Katherine Hayles, expressed optimism about ways in which the Humanities tradition of close reading and paying deep attention to texts could be fruitfully applied to media–specific texts where the materiality of the reading environment is an essential element.
Editing and annotation lead naturally to peer–reviewing, which takes on new dimensions in a world where texts become available in multiple and ever-changing drafts. Although the prospect of so many versions is daunting, there was also a fascination with such complexity and much enthusiasm for developing ways to record and make visible the evolving histories of documents.
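One simple way to make such evolving document histories visible already exists in every programmer’s toolkit: the textual diff. The sketch below uses Python’s standard difflib module on two invented draft fragments to show how changes between versions can be recorded and displayed.

```python
import difflib

# Two hypothetical drafts of the same passage, split into lines.
draft_1 = ["Reading online is hard.", "Annotation helps."]
draft_2 = ["Reading online is changing.", "Annotation helps.", "So does tagging."]

# unified_diff marks removed lines with '-', added lines with '+',
# and leaves unchanged lines prefixed with a space.
diff = list(difflib.unified_diff(draft_1, draft_2,
                                 fromfile="draft-1", tofile="draft-2",
                                 lineterm=""))
print("\n".join(diff))
```

Version-control systems and wiki history pages are, in essence, this operation applied systematically to every saved revision.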
The way forward
So, do these developments provoke anxiety or opportunity? How can academics approach the issue of students using collaborative and contributory resources like wikis and blogs? Do they have to accept that they can no longer control knowledge? That soon their books may no longer be read? That semi–private reading in small peer groups will give way to more public community/ open source reading, subject to constant revision and annotation? How can we make use of interactive reading in the seminar room and lecture theatre? Some academics now allow their students to blog live during lectures, with the resulting conversation displayed simultaneously on screen. Will such practice contribute to learning or dilute it? How can we manage online documentation which is added to and edited by many, often unaccredited, authors? How can we track the provenance of such materials? Add our own annotations and read those contributed by others? Develop new processes of editing and peer review?
So many questions and as yet few answers. In May 2005 I attended an AHRC Research Strategy Seminar on E–Publishing in the Arts and Humanities where I was delighted to find a strong commitment from the AHRC to support and encourage experimentation in this area. The following recommendations are drawn from both the Transliteracies conference and the AHRC seminar:
The Transliteracies participants identified a gap between what is actually happening with online reading, and the ability of academics to comprehend it. Rather than trying to control and dictate the direction reading is taking, scholars should study what is actually happening right now. This echoed the opinion of AHRC seminar speaker Martin Richardson, Managing Director, Journals Division, OUP, who stressed that ‘we should learn from the ways in which scholars are already using e–resources, rather than try to push them in other directions’. This includes a recognition of the fact that digital culture is now rubbing against the boundaries between academia and the wider community, and the tools and approaches of transliteracy can both help the public engage with academic research and also assist external scholars to contribute to it.
We need transdisciplinary laboratories where we can study new social practices of reading, synthesise established histories with newer developments, and produce appropriate taxonomies for these new kinds of reading. The Humanities must invest in this process whilst also learning from the expertise of the science community. And, crucially, implementation of transdisciplinarity means working within our own universities to make boundaries more porous not just metaphysically but practically, in financial and management terms too, because transliteracy operates best in a flexible environment.
Academic blogging by both staff and students should be encouraged because it reflects the reality of how knowledge constantly grows, shifts and develops. Fixed type has never been able to keep up with the naturally fluid and organic ways in which we learn, teach and research. At the AHRC seminar, Michael Jubb, Director of the Research Libraries Network, asked ‘why shouldn’t we encourage academic blogging – especially in areas currently covered by academic journals?’ and enquired whether short–run print monographs ‘are really a sensible way to communicate with a relatively small group of peers?’
Experiments in e-publishing are vital and must be undertaken as widely as possible. Failures should be studied as closely as successes. This is an embryonic field and every faltering step adds to our knowledge. At the AHRC seminar, Paul Ayris, Director of Library Services at UCL and chair of SHERPA (15), said that monographs are not sustainable and that e-books are unpopular in every discipline. He advocated a combination of web format with Print on Demand ‘for those needing paper’. We certainly must continue to insist that academics who become fixated on PDFs are missing the point. There are important opportunities for combining e-publishing with the kinds of interactivity described above, and in such situations PoD will quickly become obsolete. (Although it might be replaced by ‘Burn on Demand’ for DVD formats, or similar.)
David Robey, Director of the AHRC’s ICT in Arts and Humanities Research Programme, insisted that ‘there is no contradiction between peer–reviewing and open access’. There are, however, many questions about how peer reviewing might operate in a transliterate environment, especially with regard to quality. The level of anxiety around this issue means that there is little motivation to experiment with peer review, especially in relation to creating a scholarly structure for the creative and performing arts and new media. But like Adrian Johns we must transit time as well as media, exploring historical as well as future perspectives, and experiment with practices from scientific disciplines, such as the circulation of pre-prints for feedback, and perhaps the revival of early types of perusal.
Connection between e–learning and e–publishing
This issue arose at both events, but in each case as something of an afterthought. And yet there are so many overlaps between the two. I would guess that the people who engage with e-learning are the same people who have an enthusiasm for e-publishing, and of course anyone who creates e-learning materials is ‘e-publishing’ in some sense. Any divide comes not only from the teaching/research split, of whose iniquities we are all aware, but also from the apartheid which occurs in many institutions between the technicians and programmers who build and support the systems and increasingly find themselves labelled as ‘e-learning support’, and the academic researchers who may be perceived to operate on a somewhat more exalted level. It is our responsibility to change this destructive anomaly.
At De Montfort University, transdisciplinarity is at the core of a number of new ventures. The Centre for Creative Technologies, founded by Professor of Music Technology, Andrew Hugill, is due to open in 2006 and will form the heart of collaborative digital research across the university. In the Faculty of Humanities I am working with Kate Pullinger, Simon Mills, and other colleagues to develop courses which are deeply informed by transliteracy, including a PG Dip in Publishing and New Media and an online MA in Creative Writing and Technology (http://writing.typepad.com/cwt/). My own research is moving forward with a focus on narratives in digital contexts, most recently with the new Writing and the Digital Life collaborative blog and email list (http://writing.typepad.com) plus other projects in the pipeline. Transliteracy is about openness – across disciplines, communities, cultures and countries. There are great opportunities for those with the courage to descend from the safety of the (offline and somewhat crumbling) ivory tower.
3. Liu is also the author of The Laws of Cool: Knowledge Work and the Culture of Information (University of Chicago Press, 2004), an insightful study of the role of new technologies in connecting the humanities with the business world.
4. Excerpt from Alan Liu, ‘Globalizing the Humanities – Voice of the Shuttle: Web Page for Humanities Research’, Humanities Collections 1, no. 1 (1998): 41–56 (http://vos.ucsb.edu/excerpt.asp).
6. ‘China Search Engine Baidu.com Set For IPO’, 10e20 (http://www.10e20webdesign.com/news/news_center_latest_technology_internet_news_0_august_05_China_Search_Engine_Baidu_com_Set_for_IPO.htm).
13. http://hybrid.ucsc.edu/Agonistics/Transliteracies/Interface/agon1.html (needs to be left to run for a while to see how the conversation builds up).
14. ‘LA Times closes wiki’, US Politics About.com, 19 June 2005 (http://uspolitics.about.com/b/a/178945.htm).