For Ted Nelson
I: Centrifugal Force
We live in the age of networks. Wherever we are, five billion of us are continuously and ubiquitously connected. That’s everyone over the age of twelve who earns more than about two dollars a day. The network has us all plugged into it. Yet this is only the more recent, and more explicit network. Networks are far older than this most modern incarnation; they are the foundation of how we think. That’s true at the most concrete level: our nervous system is a vast neural network. It’s also true at a more abstract level: our thinking is a network of connections and associations. This is necessarily reflected in the way we write.
I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982. Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts. Like it or not, we implicitly reference other texts with every word we write. It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts. It’s the secret to our success. Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth. He never built it. But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.
As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed hitherto unsuspected qualities as a medium. While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure. Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone. Do we answer? Do we click and follow? A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost. The linear text is constantly tugged at by a secondary, ‘centrifugal’ force, one that tries to tear the reader away from the inertia of the text and carry them on into another space. The more heavily linked a particular hypertext document is, the greater this pressure.
Consider two different documents that might be served up in a Web browser. One of them is an article from the New York Times Magazine. It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links. Many of these links point back to other New York Times articles. This article stands alone. It is a hyperdocument, but it has not embraced the capabilities of the medium. It has not been seduced. It is a spinster, of sorts, confident in its purity and haughty in its isolation. This article is hardly alone. Nearly all articles I could point to from any professional news source display the same characteristics: separateness, and a resistance to connecting with the medium they employ. We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized. Every link presents an escape route, and a potential loss of income. Hence, links are kept to a minimum, the losses staunched. Disappointingly, this has become a model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium. The tone has been set.
On the other hand, consider an average article in Wikipedia. It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links. Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they’re unfamiliar with, or explore some tangential bit of knowledge, but there will also be plenty of links that face out, into the rest of the Web. This is a hyperdocument which has embraced the nature of the medium, which is not afraid of luring readers away under the pressure of linkage. Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention. Without this competition of intentions, Wikipedia is itself an example of another variety of purity: the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.
Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug. Landing somewhere with a paucity of links doesn’t constrain your ability to move non-linearly. If nothing else, the browser’s ‘Back’ button continually offers that opportunity, as do all of your bookmarks, the links that lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site. In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads. In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it. This is the fundamental tension of all hypertext, here heightened and amplified because it is resisted and forbidden. It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.
This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents. It is most obvious in the way we now absorb news. Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper. Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on. We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way. The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole. Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper ‘vehicle’.
The newspaper as we have known it has been shredded. This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext. We are the ones who feel the lure of the link; no machine can do that. Newspapers made the brave decision to situate themselves as islands within a sea of hypertext. Though they might believe themselves singular, they are not the only islands in the sea. And we all have boats. That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.
The lure of the link has a two-fold effect on our behavior. With its centrifugal force, it is constantly pulling us away from wherever we are. It also presents us with an opportunity cost. When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment. If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it. Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this? Does my need and interest outweigh all of the other demands upon my attention? Can I focus?
In most circumstances, we will decline the challenge. Whatever it is, it is not salient enough, not alluring enough. It is not so much that we fear commitment as we feel the pressing weight of our other commitments. We have other places to spend our limited attention. This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”. It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.
The emergence of the ‘tl;dr’ phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span. Attention spans are not shortening: ten-year-olds will still drop everything to read a nine-hundred-page fantasy novel for eight days. Instead, attention has entered an era of hypercompetitive development. Twenty years ago only a few media clamored for our attention. Now everything from video games to Chatroulette to real-time Twitter feeds to text messages demands our attention. Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, figuring into the calculation we make when we decide to go all in or hold back.
The most obvious effect of this hypercompetitive development of attention is the shortening of the text. Under the tyranny of ‘tl;dr’, three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment. More and more, our diet of text comes in these ‘bite-sized’ chunks. Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything. The truth is more complex. Our diet will continue to consist of a mixture of short and long-form texts. In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form. It is digestible. But it need not be vacuous. Countless specialty blogs deliver highly concentrated texts to audiences who need no introduction to the subject material. They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit. Here, the phenomenon of ‘tl;dr’ reveals its Achilles’ heel: the shorter the text, the less invested you are. You yield more easily to centrifugal force. You are more likely to navigate away.
There is a cost incurred both for substance and the lack thereof. Such are the dilemmas of hypertext.
II: Schwarzschild Radius
It appears inarguable that 2010 is the Year of the Electronic Book. The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market. Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package. Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world. The electronic book is an inevitability.
At this point a question needs to be asked: what’s so electronic about an electronic book? If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar. Too familiar. This is not an electronic book. This is ‘publishing in light’. I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning. An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters. It doesn’t even necessarily begin with that translation. Instead, first consider the text qua text. What is it? Who is it speaking to? What is it speaking about?
These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light. That act of murder would give us less than we had before, because texts published in light essentially disavow the medium within which they are situated. They are less useful than typeset texts, purposely stripped of their utility to be shoehorned into a new medium. This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader. Nor does it make the electronic book an intrinsically alluring object. That’s an interesting point to consider, because hypertext is intrinsically alluring. The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who had any relationship to the text. If an electronic book does not offer a new relationship to the text, then what precisely is the point? Portability? Ubiquity? These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring. This is the essential difference between a book published in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is irresistibly alluring. At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.
Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium. But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe. Today’s publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium. For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.
What does the electronic book look like? Does it differ at all from the hyperdocuments we are familiar with today? In fifteen years of design experimentation, we’ve learned a lot of ways to present, abstract and play with text. All of these are immediately applicable to the electronic book. The electronic book should represent the best of what 2010 has to offer and move forward from that point into regions unexplored. The printed volume took nearly fifty years to evolve into its familiar hand-sized editions. Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book. We shouldn’t try to constrain our idea of what an electronic book can be based upon what the book has been. Over the next few years, our innovations will surprise us. We won’t really know what the electronic book looks like until we’ve had plenty of time to play with it.
The electronic book will not be immune from the centrifugal force which is inherent to the medium. Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument. Yet we come to books with a sense of commitment. We want to finish them. But what, exactly, do we want to finish? The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does. So does an electronic book have a beginning and an end? Or is it simply a densely clustered set of texts with a well-defined path traversing them? From the vantage point of 2010 this may seem like a faintly ridiculous question. I doubt that will be the case in 2020, when perhaps half of our new books are electronic books. The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book. There is no way that the electronic book can remain apart, indifferent and pure. It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader. More of a gradient than a boundary.
It remains unclear how any such construction can constitute an economically successful entity. Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe. The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen. This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear. Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.
We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books. (For those not familiar with astrophysics, the Schwarzschild Radius is the boundary to a black hole. Once you’re on the wrong side you’re doomed to fall all the way in.) On one side – our side – things look much as they do today. Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical. On the other side, electronic books rapidly become almost completely unrecognizable. It’s not just the financial model which disintegrates. As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform. The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words. Each of these fragments points directly at its antecedent and descendant, or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each. The primacy of the single narrative can not withstand the centrifugal force of the medium, any more than the newspaper or the magazine could. Texts will present themselves as intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both. This will completely confound our expectations of linearity in the text.
We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward, a fault more of our nervous systems than our browsers. We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on. This is the greatest threat to the book, that it simply expands beyond our ability to focus upon it. Our consciousness can entertain a universe of thought, but it can not entertain the entire universe at once. Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it. With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason, significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.
What ever happened to the book? It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever. This is not an ending, any more than birth is an ending. But it is a transition, at least as profound and comprehensive as the invention of moveable type. It’s our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book. Transitions are chaotic, but they are also fecund. The seeds of the new grow in the humus of the old. (And if it all seems sudden and sinister, I’ll simply note that Nietzsche said a new era nearly always looks demonic to the age it obsolesces.)
III: Finnegans Wiki
So what of Aristotle? What does this mean for the narrative? It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts. But what about stories? From time out of mind we have listened to stories told by the campfire. The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale. For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.
Will we lose all of this? Can narratives stand up against the centrifugal forces of hypertext? Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form. The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind. There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot. A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity. Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force. We occasionally stay up all night reading a book that we ‘couldn’t put down’, precisely because of this momentum. It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.
This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books. Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet. How can any text hope to stand against that?
And yet, some do. Children unplugged to read each of the increasingly lengthy Harry Potter novels, as teenagers did for the Twilight series. Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination. None of this is high literature, but it is literature capable of resisting all our alluring distractions. This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc. We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad. That is one direction, a direction literary publishers will pursue, because that’s where the money lies.
There are two other paths open for literature, nearly diametrically opposed. The first was taken by JRR Tolkien in The Lord of the Rings. Although hugely popular, the three-book series has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating. Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it. And although readers do finish the book, in a very real sense they do not leave that universe. The fantasy genre, which Tolkien single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations. Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books. Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy. Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers. This is another direction for the book. While every author will not be a Tolkien, a few authors will work hard to create a universe so potent and broad that readers will be tempted to inhabit it. (Some argue that this is the secret of JK Rowling’s success.)
Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext. There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures. But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas. The book was written and published before the digital computer had been invented, yet even features an innovation which is reminiscent of hypertext. That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power. It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite. The text is overloaded with meaning, so much so that the mind can’t take it all in. Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant. But there is another possibility. In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed. In this, Finnegans Wake could be seen as a type of science fiction, not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works) but rather a text that pointed the way to what all texts would become, performance by example. As texts become electronic, as they melt and dissolve and link together densely, meaning multiplies exponentially. 
Every sentence, and every word in every sentence, can send you flying in almost any direction. The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.
It has been said that all of human culture could be reconstituted from Finnegans Wake. As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture. Everything will be there, all strung together. And that’s what happened to the book.
I: The Golden Age
In October of 1993 I bought myself a used SPARCstation. I’d just come off of a consulting gig at Apple, and, flush with cash, wanted to learn UNIX systems administration. I also had some ideas about coding networking protocols for shared virtual worlds. Soon after I got the SPARCstation installed in my lounge room – complete with its thirty-kilo monster of a monitor – I grabbed a modem, connected it to the RS-232 port, configured SLIP, and dialed out onto the Internet. Once online I used FTP, logged into SUNSITE and downloaded the newly released NCSA Mosaic, a graphical browser for the World Wide Web.
I’d first seen Mosaic running on an SGI workstation at the 1993 SIGGRAPH conference. I knew what hypertext was – I’d built a MacOS-based hypertext system back in 1986 – so I could see what Mosaic was doing, but there wasn’t much there. Not enough content to make it really interesting. The same problem had bedeviled all hypertext systems since Douglas Engelbart’s first demo, back in 1968. Without sufficient content, hypertext systems are fundamentally uninteresting. Even HyperCard, Apple’s early experiment in hypertext, never really moved beyond the toy stage. To make hypertext interesting, it must be broadly connected – beyond a document, beyond a hard drive. Either everything is connected, or everything is useless.
In the three months between my first click on NCSA Mosaic and when I fired it up in my lounge room, a lot of people had come to the Web party. The master list of Websites – maintained by CERN, the birthplace of the Web – kept growing. Over the course of the last week of October 1993, I visited every single one of those Websites. Then I was done. I had surfed the entire World Wide Web. I was even able to keep up, as new sites were added.
This gives you a sense of the size of the Web universe in those very early days. Before the explosive ‘inflation’ of 1994 and 1995, the Web was a tiny, tidy place filled mostly with academic websites. Yet even so, the Web had the capacity to suck you in. I’d find something that interested me – astronomy, perhaps, or philosophy – and with a click-click-click find myself deep within something that spoke to me directly. This, I believe, is the core of the Web experience, an experience we’re now so many years away from that we tend to overlook it. At its essence, the Web is personally seductive.
I realized the universal truth of this statement on a cold night in early 1994, when I dragged my SPARCstation and boat-anchor monitor across town to a house party. This party, a monthly event known as Anon Salon, was notorious for attracting the more intellectual and artistic crowd in San Francisco. People would come to perform, create, demonstrate, and spectate. I decided I would show these people this new-fangled thing I’d become obsessed with. So, that evening, as the front door opened and another person entered, I’d sidle up alongside them and ask, “So, what are you interested in?” They’d mention their current hobby – gardening or vaudeville or whatever it might be – and I’d use the brand-new Yahoo! category index to look up a web page on the subject. They’d be delighted, and begin to explore. At no point did I say, “This is the World Wide Web.” Nor did I use the word ‘hypertext’. I let the intrinsic seductiveness of the Web snare them, one by one.
Of course, a few years later, San Francisco became the epicenter of the Web revolution. Was I responsible for that? I’d like to think so, but I reckon San Francisco was a bit of a nexus. I wasn’t the only one exploring the Web. That night at Anon Salon I met Jonathan Steuer, who walked on up and said, “Mosaic, hmm? How about you type in ‘www.hotwired.com’?” Steuer was part of the crew at work just a few blocks away, bringing WIRED magazine online. Everyone working on the Web shared the same fervor – an almost evangelical belief that the Web changes everything. I didn’t have to tell Steuer, and he didn’t have to tell me. We knew. And we knew that if we simply shared the Web – not the technology, not its potential, but its real, seductive human face – we’d be done.
That’s pretty much how it worked out: the Web exploded from the second half of 1994, because it appeared to every single person who encountered it as the object of their desire. It was, and is, all things to all people. This makes it the perfect love machine – nothing can confirm your prejudices better than the Web. It also makes the Web a very pretty hate machine. It is the reflector and amplifier of all things human. We were completely unprepared, and for that reason the Web has utterly overwhelmed us. There is no going back. If every website suddenly crashed, we would find another way to recreate the universal infinite hypertextual connection.
In the process of overwhelming us – in fact, part of the process itself – the Web has hoovered up the entire space of human culture; anything that can be digitized has been sucked into the Web. Of course, this presents all sorts of thorny problems for individuals who claim copyright over cultural products, but they are, in essence, swimming against the tide. The rest – everything that marks us as definably human, everything that is artifice – has, over the last fifteen years, been neatly and completely sucked into the space of infinite connection. The project is not complete – it will never be complete – but it is substantially underway, and more will simply be more: it will not represent a qualitative difference. We have already arrived at a new space, where human culture is now instantaneously and pervasively accessible to any of the five billion network-connected individuals on the planet.
This, then, is the Golden Age, a time of rosy dawns and bright beginnings, when everything seems possible. But this age is drawing to a close. Two recent developments will, in retrospect, be seen as the beginning of the end. The first of these is the transformation of the oldest medium into the newest. The book is coextensive with history, with the largest part of what we regard as human culture. Until five hundred and fifty years ago, books were handwritten, rare and precious. Moveable type made books a mass medium, and lit the spark of modernity. But the book, unlike nearly every other medium, has resisted its own digitization. This year the defenses of the book have been breached, and ones and zeroes are rushing in. Over the next decade perhaps half or more of all books will ephemeralize, disappearing into the ether, never to return to physical form. That will seal the transformation of the human cultural project.
On the other hand, the arrival of the Web-as-appliance means it is now leaving the rarefied space of computers and mobiles-as-computers, and will now be seen as something as mundane as a book or a dinner plate. Apple’s iPad is the first device of an entirely new class which treats the Web as an appliance, as something that is pervasively just there when needed, and put down when not. The genius of Apple’s design is its extreme simplicity – too simple, I might add, for most of us. It presents the Web as a surface, nothing more. iPad is a portal into the human universe, stripped of everything that is a computer. It is emphatically not a computer. Now, we can discuss the relative merits of Apple’s design decisions – and we will, for some years to come. But the basic strength of the iPad’s simple design will influence what the Web is about to become.
eBooks and the iPad bookend the Golden Age; together they represent the complete translation of the human universe into a universally and ubiquitously accessible form. But the human universe is not the whole universe. We tend to forget this as we stare into the alluring and seductive navel of our ever-more-present culture. But the real world remains, and loses none of its importance even as the flashing lights of culture grow brighter and more hypnotic.
II: The Silver Age
Human beings have the peculiar capability to endow material objects with inner meaning. We know this as one of the basic characteristics of humanness. From the time a child anthropomorphizes a favorite doll or wooden train, we imbue the material world with the attributes of our own consciousness. Soon enough we learn to discriminate between the animate and the inanimate, but we never surrender our continual attribution of meaning to the material world. Things are never purely what they appear to be, instead we overlay our own meanings and associations onto every object in the world. This process actually provides the mechanism by which the world comes to make sense to us. If we could not overload the material world with meaning, we could not come to know it or manipulate it.
This layer of meaning is most often implicit; only in works of ‘art’ does the meaning crowd into the definition of the material itself. But none of us can look at a thing and be completely innocent about its hidden meanings. They constantly nip at the edges of our consciousness, unless, Zen-like, we practice an ‘emptiness of mind’, and attempt to encounter the material in an immediate, moment-to-moment awareness. For those of us not in such a blessed state, the material world has a subconscious component. Everything means something. Everything is surrounded by a penumbra of meaning, associations that may be universal (an apple can invoke the Fall of Man, or Newton’s Laws of Gravity), or something entirely specific. Through all of human history the interiority of the material world has remained hidden except in such moments as when we choose to allude to it. It is always there, but rarely spoken of. That is about to change.
One of the most significant, yet least understood implications of a planet where everyone is ubiquitously connected to the network via the mobile is that it brings the depth of the network ubiquitously to the individual. You are – amazingly – connected to the other five billion individuals who carry mobiles, and you are also connected to everything that’s been hoovered into cyberspace over the past fifteen years. That connection did not become entirely apparent until last year, as the first mobiles appeared with both GPS and compass capabilities. Suddenly, it became possible to point through the camera on a mobile, and – using the location and orientation of the device – search through the network.
This technique has become known as ‘Augmented Reality’, or AR, and it promises to be one of the great growth areas in technology over the next decade – but perhaps not for the reasons the leaders of the field currently envision. The strength of AR is not what it brings to the big things – the buildings and monuments – but what it brings to the smallest and most common objects in the material world. At present, AR is flashy, but not at all useful. It’s about to make a transition. It will no longer be spectacular, but we’ll wonder how we lived without it.
Let me illustrate the nature of this transition with three examples drawn from my own experience. These three ‘thought experiments’ represent the different axes of a world in transition: from a world of implicit meaning to a world where the implicit has become explicit. Once meaning is exposed, it can be manipulated: this is something unexpected, and unexpectedly powerful.
Example One: The Book
Last year I read a wonderful book. The Rest is Noise: Listening to the Twentieth Century, by Alex Ross, is a thorough and thoroughly enjoyable history of music in the 20th century. By music, Ross means what we would commonly call ‘classical’ music, even though the Classical period ended some two hundred years ago. That’s not as stuffy as it sounds: George Gershwin and Aaron Copland are both major figures in 20th century music, though their works have always been classed as ‘popular’.
Ross’ book has a companion website, therestisnoise.com, which offers up chapter-by-chapter samples of the composers whose lives and exploits he explores in the text. When I wrote The Playful World, back in 2000, and built a companion website to augment the text, it was considered quite revolutionary, but this is all pretty much standard for better books these days.
As I said earlier, the book is on the edge of ephemeralization. It wants to be digitized, because it has always been a message, encoded. When I dreamed up this example, I thought it would be very straightforward: you’d walk into your bookstore, point your smartphone at a book that caught your fancy, and instantly you’d find out what your friends thought of it, what their friends thought of it, what the reviewers thought of it, and so on. You’d be able to make a well-briefed decision on whether this book is the right book for you. Simple. In fact, Google Labs has already shown a basic example of this kind of technology in a demo running on Android.
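The ‘point your device at a book’ interaction can be made concrete with a small sketch. Everything here is invented for illustration – the ISBN, the reviewers, the ratings, and the weighting scheme – but it shows the shape of the idea: opinions from your own social graph count for more than those of strangers.

```python
# A toy sketch of the 'point the device at a book' lookup: resolve a
# scanned barcode, then weight opinions by social distance, so that
# friends' views count for more than strangers'. All data is invented.

REVIEWS = {
    "9780000000001": [                                       # hypothetical ISBN
        {"reviewer": "alice",  "rating": 5, "distance": 1},  # a friend
        {"reviewer": "bob",    "rating": 4, "distance": 2},  # a friend of a friend
        {"reviewer": "critic", "rating": 3, "distance": 3},  # a published reviewer
    ],
}

def briefing(isbn):
    """Summarize what my social graph thinks of a scanned book,
    weighting nearer connections more heavily."""
    reviews = REVIEWS.get(isbn, [])
    if not reviews:
        return None
    # Closer connections (smaller distance) get larger weights.
    weighted = sum(r["rating"] / r["distance"] for r in reviews)
    total_weight = sum(1 / r["distance"] for r in reviews)
    return round(weighted / total_weight, 2)

print(briefing("9780000000001"))
```

The real version would, of course, draw on live social and retail databases rather than a hard-coded table; the point is only that the decision ‘is this the right book for me?’ becomes a simple computation once the connections are explicit.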
But that’s not what a book is anymore. Yes, it’s good to know whether you should buy this or that book, but a book represents an investment of time, and an opportunity to open a window into an experience of knowledge in depth. It’s this intention that the device has to support. As the book slowly dissolves into the sea of fragmentary but infinitely threaded nodes of hypertext which are the human database, the device becomes the focal point, the lens through which the whole book appears, and appears to assemble itself.
This means that the book will vary, person to person. My fragments will be sewn together with my threads, yours with your threads. The idea of unitary authorship – persistent over the last five hundred years – won’t be overwhelmed by the collective efforts of crowdsourcing, but rather by the corrosive effects of hyperconnection. The more connected everything becomes, the less prone we are to linearity. We already see this in the ‘tl;dr’ phenomenon, where any text over 300 words becomes too onerous to read.
Somehow, whatever the book is becoming must balance the need for clarity and linearity against the centrifugal and connective forces of hypertext. The book is about to be subsumed within the network; the device is the place where it will reassemble into meaning. The implicit meaning of the book – that it has a linear story to tell, from first page to last – must be made explicit if the idea and function of the book is to survive.
The book stands on the threshold, between the worlds of the physical and the immaterial. As such it is pulled in both directions at once. It wants to be liberated, but will be utterly destroyed in that liberation. The next example is something far more physical, and, consequently, far more important.
Example Two: Beef Mince
I go into the supermarket to buy myself the makings for a nice Spaghetti Bolognese. Among the ingredients I’ll need some beef mince (ground beef for those of you in the United States) to put into the sauce. Today I’d walk up to the meat case and throw a random package into my shopping trolley. If I were being thoughtful, I’d probably read the label carefully, to make sure the expiration date wasn’t too close. I might also check to see how much fat is in the mince. Or perhaps it’s grass-fed beef. Or organically grown. All of this information is offered up on the label placed on the package. And all of it is so carefully filtered that it means nearly nothing at all.
What I want to do is hold my device up to the package, and have it do the hard work. Go through the supermarket to the distributor, through the distributor to the abattoir, through the abattoir to the farmer, through the farmer to the animal itself. Was it healthy? Where was it slaughtered? Is that abattoir healthy? (This isn’t much of an issue in Australia or New Zealand, but in America things are quite a bit different.) Was it fed lots of antibiotics in a feedlot? Which ones?
And – perhaps most importantly – what about the carbon footprint of this little package of mince? How much CO2 was created? How much methane? How much water was consumed? These questions, at the very core of 21st century life, need to be answered on demand if we are expected to adjust our lifestyles so as to minimize our footprint on the planet. Without a system like this, it is essentially impossible. With such a system it can potentially become easy. As I walk through the market, popping items into my trolley, my device can record and keep me informed of a careful balance between my carbon budget and my financial budget, helping me to optimize both – all while referencing my purchases against sales on offer in other supermarkets.
Finally, what about the caloric count of that packet of mince? And its nutritional value? I should be tracking those as well – or rather, my device should – so that I can maintain optimal health. I should know whether I’m getting too much fat, or insufficient fiber, or – as I’ll discuss in a moment – too much sodium. Something should be keeping track of this. Something that can watch and record and use that recording to build a model. Something that can connect the real world of objects with the intangible set of goals that I have for myself. Something that could do that would be exceptionally desirable. It would be as seductive as the Web.
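The bookkeeping such a device would do is not exotic. Here is a minimal sketch, with all figures invented for illustration (the real carbon, price, and kilojoule numbers would come from the producers and databases behind each package):

```python
# A hypothetical running tally a shopping device might keep, balancing
# a financial budget against a carbon budget as items go in the trolley.

class ShoppingTracker:
    def __init__(self, cash_budget, carbon_budget_kg):
        self.cash_budget = cash_budget            # dollars available
        self.carbon_budget_kg = carbon_budget_kg  # kg CO2-equivalent allowed
        self.items = []

    def add(self, name, price, carbon_kg, kilojoules):
        # Record an item as it goes into the trolley.
        self.items.append((name, price, carbon_kg, kilojoules))

    def totals(self):
        price = sum(item[1] for item in self.items)
        carbon = sum(item[2] for item in self.items)
        energy = sum(item[3] for item in self.items)
        return price, carbon, energy

    def warnings(self):
        # Flag whichever budget the trolley has blown.
        price, carbon, _ = self.totals()
        notes = []
        if price > self.cash_budget:
            notes.append("over financial budget")
        if carbon > self.carbon_budget_kg:
            notes.append("over carbon budget")
        return notes

trolley = ShoppingTracker(cash_budget=50.00, carbon_budget_kg=10.0)
trolley.add("beef mince 500g", 8.50, 13.5, 5300)  # illustrative figures only
trolley.add("tinned tomatoes", 1.20, 0.4, 340)
print(trolley.totals())
print(trolley.warnings())
```

The model-building I describe above is simply this tally kept over weeks and months, correlated against my goals – which is exactly the kind of work a device is better at than I am.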
The more information we have at hand, the better the decisions we can make for ourselves. It’s an idea so simple it is completely self-evident. We won’t need to convince anyone of this, to sell them on the truth of it. They will simply ask, ‘When can I have it?’ But there’s more. My final example touches on something so personal and so vital that it may become the center of the drive to make the implicit explicit.
Example Three: Medicine
Four months ago, I contracted adult-onset chickenpox. Which was just about as much fun as that sounds. (And yes, since you’ve asked, I did have it as a child. Go figure.) Every few days I had doctors come by to make sure that I was surviving the viral infection. While the first doctor didn’t touch me at all – understandably – the second doctor took my blood pressure, and showed me the reading – 160/120, uncomfortably high. He suggested that I go on Micardis, a common medication for hypertension. I was too sick to argue, so I dutifully filled the prescription and began taking it that evening.
Whenever I begin taking a new medication – and I’m getting to an age where that happens with annoying regularity – I am always somewhat worried. Medicines are never perfect; they work for a certain large cohort of people. For others they do nothing at all. For a far smaller number, they might be toxic. So, when I popped that pill in my mouth I did wonder whether that medicine might turn out to be poison.
The doctor who came to see me was not my regular GP. He did not know my medical history. He did not know the history of the other medications I had been taking. All he knew was what he saw when he walked into my flat. That could be a recipe for disaster. Not in this situation – I was fine, and have continued to take Micardis – but there are numerous other situations where medications can interact within the patient to cause all sorts of problems. This is well known. It is one of the drawbacks of modern pharmaceutical medicine.
This situation is only going to grow more intense as the population ages and pharmaceutical management of the chronic diseases of aging becomes ever-more-pervasive. Right now we rely on doctors and pharmacists to keep their own models of our pharmaceutical consumption. But that’s a model which is precisely backward. While it is very important for them to know what drugs we’re on, it is even more important for us to be able to manage that knowledge for ourselves. I need to be able to point my device at any medicine, and know, more or less immediately, whether that medicine will cure me or kill me.
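The check the device needs to run is, at its core, a simple lookup: the scanned medicine against the list of what I already take. Here is a toy sketch; the interaction table is invented for illustration and is emphatically not medical data, though the general caution about combining blood-pressure medicines with anti-inflammatories is a well-known one.

```python
# A toy sketch of the check a personal device might run when a new
# medicine is scanned: compare it against the patient's current
# medications using a (hypothetical, illustrative) interaction table.

KNOWN_INTERACTIONS = {
    # unordered drug pairs -> warning text; invented for illustration
    frozenset({"telmisartan", "ibuprofen"}):
        "caution: may blunt blood-pressure control and strain the kidneys",
    frozenset({"warfarin", "aspirin"}):
        "danger: increased bleeding risk",
}

def check_new_medicine(new_drug, current_medications):
    """Return any flagged interactions between the scanned drug
    and the medicines the patient already takes."""
    flags = []
    for drug in current_medications:
        note = KNOWN_INTERACTIONS.get(frozenset({new_drug, drug}))
        if note:
            flags.append((drug, note))
    return flags

my_medications = ["telmisartan"]  # Micardis is a brand of telmisartan
print(check_new_medicine("ibuprofen", my_medications))
```

The hard part is not this lookup – it is getting my medication history out of the doctors’ and pharmacists’ silos and into my own hands, which is precisely the reversal I am arguing for.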
Over the next decade the cost of sequencing an entire human genome will fall from the roughly $5000 it costs today to less than $500. Well within the range of your typical medical test. Once that happens, it will be possible to compile epidemiological data comparing various genomes to the effectiveness of drugs. Initial research in this area has already shown that some drugs are more effective among certain ethnic groups than others. Our genome holds the clue to why drugs work, why they occasionally don’t, and why they sometimes kill.
The device is the connection point between our genome – which lives, most likely, somewhere out on a medical cloud – and the medicines we take, and the diagnoses we receive. It is our interface to ourselves, and in that becomes an object of almost unimaginable importance. In twenty years’ time, when I am ‘officially’ a senior, I will have a handheld device – an augmented reality – whose sole intent is to keep me as healthy as possible for as long as possible. It will encompass everything known about me medically, and will integrate with everything I capture about my own life – my activities, my diet, my relationships. It will work with me to optimize everything we know about health (which is bound to be quite a bit by 2030) so that I can live a long, rich, healthy life.
These three examples represent the promise bound up in the collision between the handheld device and the ubiquitous, knowledge-filled network. There are already bits and pieces of much of this in place. It is a revolution waiting to happen. That revolution will change everything about the Web, and why we use it, how, and who profits from it.
III: The Bronze Age
By now, some of you sitting here listening to me this afternoon are probably thinking, “That’s the Semantic Web. He’s talking about the Semantic Web.” And you’re right, I am talking about the Semantic Web. But the Semantic Web as proposed and endlessly promoted by Sir Tim Berners-Lee was always about pushing, pushing, pushing to get the machines talking to one another. What I have demonstrated in these three thought experiments is a world that is intrinsically so alluring and so seductive that it will pull us all into it. That’s the vital difference which made the Web such a success in 1994 and 1995. And it’s about to happen once again.
But we are starting from near zero. Right now, I should be able to hold up my device, wave it around my flat, and have an interaction with the device about what’s in my flat. I cannot. I cannot Google for the contents of my home. There is no place to put that information, even if I had it, nor systems to put that information to work. It is exactly like the Web in 1993: the lights are on, but nobody’s home. We have the capability to conceive of the world-as-a-database. We have the capability to create that database. We have systems which can put that database to work. And we have the need to overlay the real world with that rich set of data.
We have the capability, we have the systems, we have the need. But we have precious little connecting these three; the businesses that will do the connecting do not exist yet. We have not brought the real world into our conception of the Web. That will have to change. As it changes, the door opens to a crescendo of innovations that will make the Web revolution look puny in comparison. There is an opportunity here to create industries bigger than Google, bigger than Microsoft, bigger than Apple. As individuals and organizations figure out how to inject data into the real world, entirely new industry segments will be born.
I cannot tell you exactly what will fire off this next revolution. I doubt it will be the integration of Wikipedia with a mobile camera. It will be something much more immediate. Much more concrete. Much more useful. Perhaps something concerned with health. Or with managing your carbon footprint. Those two seem the most obvious to me. But the real revolution will probably come from a direction no one expects. It’s nearly always that way.
There’s no reason to think that Wellington couldn’t be the epicenter of that revolution. There was nothing special about San Francisco back in 1993 and 1994. But, once things got started, they created a ‘virtuous cycle’ of feedback that brought the best and brightest to San Francisco to build out the Web. Wellington is doing that to the film industry; why shouldn’t it stretch out a bit, and invent this next-generation ‘web of things’?
This is where the future is entirely in your hands. You can leave here today promising yourself to invent the future, to write meaning explicitly onto the real world, to transform our relationship to the universe of objects. Or, you can wait for someone else to come along and do it. Because someone inevitably will. Every day, the pressure grows. The real world is clamoring to crawl into cyberspace. You can open the door.
This is the era of sharing. When the histories of our time are written a hundred years from now, sharing is the salient feature which historians will focus upon. The entirety of culture, from 1999 forward, looks like a gigantic orgy of sharing.
This morning I want to take a look at this phenomenon in some detail, and tie it into some Australian educational ‘megatrends’ – forces which are altering the landscape throughout the nation. Sharing can be used as an engine to power these forces, but that will only happen if we understand how sharing works.
At some level, sharing is totally familiar to us – we’ve been sharing since we’ve been very small. But sharing, at least in the English language, has two slightly different meanings: we can share things, or we can share thoughts. We adults spend a lot of time teaching children the importance of sharing their things; we never need to teach them to share their thoughts. The sharing of things is a cultural behavior, valued by our civilization, whereas the sharing of thoughts is an innate behavior – probably located somewhere deep in our genes.
Fifteen years ago, Nicholas Negroponte characterized this as the divide between bits and atoms. We have to teach children to share their atoms – their toys and games – but they freely share their bits. In fact, they’re so promiscuous with their bits that this has produced its own range of problems.
It was only a decade ago that Shawn Fanning released a program which he’d written for his mates at Boston’s Northeastern University. Napster allowed anyone with a computer and a broadband internet connection to share their MP3 music files freely. Within a few months, millions of broadband-connected college students were freely trading their music collections with one another – without any thought of copyright or ownership. Let me reiterate: thoughts of copyright or piracy simply didn’t enter into their thinking. To them, this was all about sharing.
This act of sharing was a natural consequence of the ‘hyperconnectivity’ these kids had achieved via their broadband connections. When you connect people together, they will begin to share the things they care about. If you build a system that allows them to share the music they care about, they’ll share that. If you build a system that allows them to share the videos they care about, they’ll share that. If you build a system that allows them to share the links they care about, they’ll share them.
Clever web developers and entrepreneurs have built all of these systems, and many, many more. For the first time we can use technology to accelerate and amplify the innate human desire to share bits, and so, in a case of history repeating itself, we have amplified our social and sharing systems the way the steam engine amplified our physical power two hundred years ago.
In the earliest years of this sharing revolution, people shared the objects of culture: music, videos, jokes, links, photos, writing, and so on. Just this alone has had an enormous impact on business and culture: the recording industries, which were flying high a decade ago, have been humbled. Television networks have gotten in front of the Internet distribution of their own shows, to take the sting out of piracy. Newspapers, caught in the crossfire between a controlled system of distribution and a world where everyone distributes everything, have begun to disappear. And this is just the beginning.
In 2001, another experiment in sharing started in earnest: Wikipedia encouraged a small community of contributors to add their own entries to an ever-expanding encyclopedia. In this case contributors were asked to share their knowledge – however specific or particular – with a greater whole. Although it grew slowly in its earliest days, after about two years Wikipedia hit an inflection point and began to grow explosively.
Knowledge seems to have a gravitational quality; when enough of it is gathered together in one place, it attracts more knowledge. That’s certainly the story of Wikipedia, which has grown to encompass more than three million articles in English, on nearly every topic under the sun. Wikipedia is only the most successful of many efforts to produce a ‘collective intelligence’ out of the ‘wisdom of crowds’. There are many others – including one I’ll come to shortly.
One of the singular features of Wikipedia – one that we never think about even though it’s the reason we use Wikipedia – is simply this: Wikipedia makes us smarter. We can approach Wikipedia full of ignorance and leave it knowing a lot of facts. Facts need to be put into practice before they can be transformed into knowledge, but at least with Wikipedia we now have the opportunity to load up on the facts. And this is true globally: because of Wikipedia every single one of us now has the opportunity to work with the best possible facts. We can use these facts to make better decisions, decisions which will improve our lives. Wikipedia may seem innocuous, but it’s really quite profound.
How profound? If we peel away all of the technology behind Wikipedia, all of the servers and databases and broadband connections of the world’s sixth most popular website, what are we left with? Only this: an agreement to share what we know. It’s that agreement, and not the servers or databases or bandwidth which makes Wikipedia special, and it’s that agreement historians will be writing about in a hundred years. That agreement will endure – even if, for some bizarre reason, Wikipedia should cease to exist – because that agreement is one of the engines driving our culture forward.
Another example of sharing, just as relevant to educators, comes from a site which launched back in 1999 as TeacherRatings.com. Like Wikipedia, it grew slowly, and went through ownership changes, emerging finally as RateMyProfessors.com, which is owned by MTV, and which now boasts ten million ratings of one million professors, lecturers and instructors. This huge wealth of ratings came about because RateMyProfessors.com attached itself to the innate desire to share. Students want to share their experiences with their instructors, and RateMyProfessors.com gives them a forum to do just that.
Just as is the case with Wikipedia, anyone can become smarter by using RateMyProfessors.com. You can learn which instructors are good teachers, which grade easily, which will bore you to tears, and so forth. You can then put that information to work to make your life better – avoiding the professors (or schools) which have the worst teachers, taking courses from the instructors who get the highest scores.
That shared knowledge, put to work, changes the power balance within the university. For the last six hundred years, universities have been able to saddle students with lousy instructors – who might happen to be fantastic researchers – and there wasn’t much that students could do about it except grumble. Now, with RateMyProfessors.com, students can pass their hard-won knowledge down to subsequent generations of students. The university proposes, the student disposes. Worse still, the instructors receiving the highest ratings on RateMyProfessors.com have been the subjects of bidding wars, as various universities try to woo them, and add them to their faculties. All of this has given students a power they’ve never had, a power they never could have until they began to share their experiences, and translate that shared knowledge into action.
Sharing is wonderful, but sharing has consequences. We can now amplify and accelerate our sharing so that it can cross the world in a matter of moments, copied and replicated all the way. The power of the network has driven us into a new era. Sharing culture, knowledge, and power has destabilized all of our institutions. Businesses totter and collapse; universities change their practices; governments create task forces to get in front of what everyone calls ‘something-2.0’. It could be web2.0, education2.0, or government2.0. It doesn’t matter. What does matter is that something big is happening, and it’s all driven by our ability to share.
OK, so we can share. But why? How does it matter to us?
Before we can look at why sharing matters so much in this particular moment, we need to spend some time examining the three big events which will revolutionize education in Australia over the next decade. Each of them is revolutionary in itself; their confluence will result in a compressed wave of change – a concrescence – that will radically transform all educational practice.
The first of these events will affect all Australians equally. At this moment in time, Australia lives with medium-to-low-end broadband speeds, and most families have broadband connections which, because of metering, fundamentally limit their use. This is how it’s been since the widespread adoption of the Internet in the mid-1990s, and it’s nearly impossible to imagine that things could be different. The hidden lesson of the last fifteen years is that the Internet is something that needs to be rationed carefully, because there’s not enough to go around.
The Government wants us to adopt a different point of view. With the National Broadband Network (NBN), they intend to build a fibre-optic infrastructure which will deliver at least 100 megabit-per-second connections to every home, every school, and every business in Australia. Although no one has come out and said it explicitly, it’s clear that the Government wants this connection to be unmetered – the Internet will finally be freely available in Australia, as it is in most other countries.
How this will change our usage of the Internet is anyone’s guess. And this is the important point – we don’t know what will happen. We have critics of the NBN claiming that there’s no good reason for it, that Australians are already adequately served by the broadband we have, but I regularly hear stories of schools which block YouTube – not because of its potentially distracting qualities, but because they can’t handle the demand for bandwidth.
That, writ large, describes Australia in 2009. Broadband is the oxygen of the 21st century. Australia has been subjected to a slow strangulation. Once we can breathe freely, new horizons will open to us. We know this is true from history: no one really knew what we’d do with broadband once we got it. No one predicted Napster or YouTube or Skype, no one could have predicted any of them – or any of a thousand other innovations – before we had widespread access to broadband. Critics who argue there’s no need for high-speed broadband have simply failed to learn the lessons of history.
Now, before you think that I’m carrying the Government’s water, let me find fault with a few things. I believe that the Government isn’t thinking big enough – by the time the NBN is fully deployed, around 2017, a hundred megabit-per-second connection will simply be mid-range among our OECD peers. The Government should have accepted the technical challenge and gone for a gigabit network. Eventually, they will. Further, I believe the NBN will come with ‘strings attached’, specifically the filtering and regulatory regime currently being proposed by Senator Conroy’s ministry. The Government wants to provide the nation a ‘clean feed’, sanitized according to its interpretation of the law; when everyone in Australia gets their Internet service from the Commonwealth, we may have no choice in the matter.
The next event – and perhaps the most salient, in the context of this conference – is the Government’s commitment to provide a computer to every student in years 9 through 12. During the 2007 election, the Prime Minister talked about using computers for ‘math drills’ and ‘foreign language training’. The line about providing computers in the classroom was a popular one, although it is now clear that the Government’s ministers didn’t think through the profound effect of pervasive computing in the classroom.
First, it radically alters the power balance in the classroom. Most students have more facility with their computers than their teachers do. Some teachers are prepared to work from humility and accept instruction from their students. For other teachers, such an idea is anathema. The power balance could be righted somewhat with extensive professional development for the teachers – and time for that professional development – but schools have neither the budget nor the time to allow for this. Instead, the computers are being dumped into the classroom without any thought as to how they will affect pedagogy.
Second, these computers are being handed to students who may not be wholly aware of the potency of these devices. We’ve seen how a single text message, forwarded endlessly, can spark a riot on a Sydney beach, or how a party invitation, posted to Facebook, can lead to a crowd of five hundred and a battle with the police. Do teenagers really understand how to use the network to their advantage, how to reinforce their own privacy and protect themselves? Do they know how easy it is to ruin their own lives – or someone else’s – if they abuse the power of the network, that amplifier and accelerator of sharing?
Teachers aren’t the only ones who need some professional development. We need to provide a strong curriculum in ‘digital citizenship’; just as teenagers get instruction before they get a driver’s license, so they need instruction before they get to ‘spin the wheels’ of these ubiquitous educational computers.
This isn’t a problem that can be solved by filtering the networks at the schools. Students are surrounded by too many devices – mobiles as well as computers – which connect to the network and which require a degree of caution and education. This isn’t a job that the schools should be handling alone; this is an opportunity for all of the adult voices of culture – parents, caretakers, mentors, educators and administrators – to speak as one about the potentials and pitfalls of network culture.
Finally, what is the goal here? Right now the students and teachers are getting their computers. Next year the deployment will be nearly complete. What, in the end, is the point? Is it simply to give Kevin Rudd a tick on his ‘promises fulfilled’ list when he goes up for re-election? Or is this an opening to something greater? Is this simply more of the same or something new? I haven’t seen any educator anywhere present anything that looks at all like an integrated vision of what these laptops mean to students, teachers or the classroom. Without such a vision, they’re bling: pretty, but an entirely useless accessory. I’m not saying that this is a bad initiative – indeed, I believe the Government should be lauded for its efforts. But everything, thus far, feels only like a beginning, the first meter around a very long course.
Now we come to the most profound of the three events on the educational horizon: the National Curriculum. Although the idea of a national curriculum has been mooted by several successive governments, it looks as though we’ll finally achieve a deliverable curriculum sometime in the early years of the Rudd Government. There’s a long way to go, of course – and a lot of tussling between the states and the various educational stakeholders – but the process is well underway. It’s expected that curricula in ‘English, Mathematics, the Sciences and History’ will be ready for implementation at the start of 2011, not very far away. As these are the core elements in any school curriculum, they will affect every school, every teacher, and every student in Australia.
A few weeks ago I got the opportunity to share the stage with Dr. Evan Arthur, the Group Manager of the Digital Education Group at the Commonwealth Department of Education, Employment and Workplace Relations. During a ‘fireside chat’, when I asked him a series of questions, the topic turned to the National Curriculum. At this point Dr. Arthur became rather thoughtful, and described the National Curriculum as a “greenfields”. He went on to describe the curriculum documents, when completed, as a set of ‘strings’ which could be handled almost as if they were a Christmas tree, ready to have content hung all over them. The National Curriculum means that every educator in Australia is, for the first time, working to the same set of ‘strings’.
That’s when I became aware that Dr. Arthur saw the National Curriculum as an enormous opportunity to redraw the possibilities for education. We are all being given an opportunity to start again – to throw out the old rule book and start over with another one. But in order to do this we’ll have to take everything we’ve covered already – about sharing, the National Broadband Network, the Digital Education Revolution and the National Curriculum, then blend them together. Together they produce a very potent mix, a nexus of possibilities which could fundamentally transform education in Australia.
III: At The Nexus
Our future is a future of sharing; we’ll be improving constantly, finding better and better ways to share with one another. To this I want to add something more subtle; not a change in technology – we have a lot of technology – but rather, a change of direction and intent. We could choose to see the National Curriculum as simply another mandate from the Federal government, something that will make the educational process even more formal, rigorous, and lifeless. That option is open to us – and, to many of us, that’s the only option visible. I want to suggest that there is another, wildly different path open before us, right next to this well-trodden and much more prosaic laneway. Rather than viewing the National Curriculum as a done deal, wouldn’t it be wiser if we consider it as an open invitation to participation and sharing?
After all, the National Curriculum mandates what must be taught, but says little to nothing about how it gets taught. Teachers remain free to pursue their own pedagogical ends. That said, teachers across Australia will, for the first time, be pursuing the same ends. This opens up a space and a rationale for sharing that never existed before. Everyone is pulling in the same direction; wouldn’t it make sense for teachers, students, administrators and parents to share the experience?
Let’s be realistic: whether or not we seek to formalize this sharing of experience, it will happen anyway, on BoredOfStudies.org, RateMyTeachers.com, a hundred other websites, a thousand blogs, a hundred thousand Facebook profiles, and a million tweets. But if it all happens out there, informally, we miss an enormous opportunity to let sharing power our transition into the National Curriculum. We’d be letting our greatest and most powerful asset slip through our fingers.
So let me turn this around and project us into a future where we have decided to formalize our shared experience of the National Curriculum. What might that look like? A teacher might normally prepare their curriculum and pedagogical materials at the beginning of the school term; during that preparation process they would check into a shared space, organized around the National Curriculum (this should be done formally, through an organization such as Education.AU, but could – and would – happen informally, via Google) to find out what other educators have created and shared as curriculum materials. Educators would find extensive notes, lesson plans, probably numerous recorded podcasts, links to materials on Wikipedia and other online resources, and so forth – everything that an educator might need to create an effective learning experience. Furthermore, educators would be encouraged to share and connect around any particular ‘string’ in the National Curriculum. The curriculum thus becomes a focal point for organization and coordination rather than a brute mandate of performance.
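The mechanics of such a shared space are simple enough that they fit in a few lines of code. Here is a minimal sketch, in Python, of a registry that indexes shared materials by curriculum ‘string’; all the names here (the class, the string identifiers like “EN9-3”) are hypothetical illustrations, not part of any real Education.AU system:

```python
from collections import defaultdict

class CurriculumRegistry:
    """A toy index of shared teaching materials, keyed by National
    Curriculum 'string' identifiers, so that educators working toward
    the same outcome can find one another's resources."""

    def __init__(self):
        # Each curriculum 'string' maps to a list of shared materials.
        self.resources = defaultdict(list)

    def share(self, string_id, author, title, url):
        """Attach a resource to a curriculum 'string'."""
        self.resources[string_id].append(
            {"author": author, "title": title, "url": url})

    def lookup(self, string_id):
        """Everything shared against this curriculum 'string'."""
        return list(self.resources[string_id])
```

The point of the sketch is the organizing principle: because everyone works to the same set of ‘strings’, a single shared key is enough to bring a teacher in Townsville and a teacher in Albany to the same pile of lesson plans and podcasts.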
Students, already well-connected, will continue to use informal channels to communicate about their lessons; the National Curriculum gives the educational sector (and perhaps some enterprising entrepreneur) an opportunity to create a space where those curriculum ‘strings’ translate into points of contact. Students working through a particular point in the curriculum would know where they are, and would know where to gather together for help and advice. The same wealth of materials available to educators would be available to students. None of this constitutes ‘peeking at the answers’, but rather is part of an integrated effort to give students every advantage while working their way through the National Curriculum. A student in Townsville might be able to gain some advantage from a podcast of a teacher in Albany, might want to collaborate on research with students from Ballarat, might ask some questions of an educator in Lismore. The student sits in the middle of a nexus of resources designed to offer them every opportunity to succeed; if the methodology of their own classroom is a poor fit to their learning style, chances are high that they’ll find someone else, somewhere else, who makes a better match.
All of this sounds a lot like an educational utopia, but all of it is within our immediate grasp. That’s because we live at the confluence of a broadly sharing culture, a nation getting ubiquitous high-speed broadband, students and educators with pervasive access to computers, and a National Curriculum to act as an organizing principle. It is precisely because the stars are aligned so auspiciously that we can dream big dreams. This is the moment when anything is possible.
This transition could simply reinforce the last hundred years of industrial era education, where one-size-fits-all, where the student enters ‘airplane mode’ when they walk into the classroom – all devices disconnected, eyes up and straight ahead for the boredom of a fifty-minute excursion through some meaningless and disconnected body of knowledge. Where the computer simply becomes an electronic textbook for the distribution of media, rather than a portal for the exploration of the knowledge shared by others. Where the educator finds themselves increasingly bound to a curriculum which limits their freedom to find expression and meaning in their work. And all of this will happen, unless we recognize the other path that has opened before us. Unless we change direction, and set our feet on that path. Because if we keep on as we have been, we’ll simply end up with what we have today. And that would be a big mistake.
It needn’t be this way. We can take advantage of our situation, of the concrescence of opportunities opening to us. It will take some work, some time and some money. But more than anything else it requires a change of heart. We must stop thinking of the classroom as a solitary island of peace and quiet in the midst of a stormy sea, and rather think of it as a node within a network, connected and receptive. We must stop thinking of educators as valiant but solitary warriors, and transform them into a connected and receptive army. And we must recognize that this generation of students are so well connected on every front that they outpace us in every advance. They will be teaching us how to make this transition seem effortless.
Can we do this? Can we screw our courage up and take a leap into a great unknown, into an educational future which draws from our past, but is not bound to it? With parents and politicians crying out for metrics and endless assessments, we are losing the space to experiment, to play, to explore. Next year, the National Curriculum will land like a ton of bricks, even as it presents the opportunity for a Great Escape. The next twelve months will be crucial. If we can only change the way we think about what is possible, we will change what is possible. It’s a big ask. It’s the challenge of our times. Will we rise to meet it? Can we make an agreement to share what we know and what we do? That’s all it takes. So simple and so profound.
Few terms convey less meaning than “futurist.” What exactly is a futurist? What does he do? The definition, so far as I choose to apply it, is simple: a futurist looks at the present, at human behavior and human tendencies, to imagine how those trends might develop. This is less science than storytelling; the development of any human endeavor is fraught with non-linear events, which yank the arrow of progress this way and that. One can never know the future with any precision, and the farther the future recedes down the light-cone, the less distinct it becomes. We might know with high accuracy what will happen tomorrow. But five years from now, or twenty? That’s more alchemy than anthropology.
Yet, in order to play the game, futurists must make predictions. It’s what we do. So, for those few futurists who are willing to take the big risks of making short-term predictions – ranging from twelve to thirty-six months in the future – the game is particularly dangerous. Any futurist can predict what will come to pass in twenty years’ time, because no one will remember how wrong they were. But to make a prediction for the near term risks being revealed as a charlatan. Such predictions must be considered carefully, revealed hesitantly, and pronounced provisionally. Doing that will give you an out later on. Yet I have never been one to be either hesitant or provisional; I leap in where braver (and, arguably, wiser) souls fear to tread. My particular brand of futurism is expansive, encompassing, and uncompromisingly revolutionary. I say this not to tout my strengths, but rather, to reveal the dangers.
In the early 1990s I predicted that VR would become the standard interface metaphor for computers by the 21st century. Did I get that right? It seems not; after all, we still use windows and mice as the standard interaction paradigm, just as we did back in 1990. Yet, if we can draw anything from the recent and somewhat surprisingly successful introduction of the Nintendo Wii, it’s that VR did arrive, is pervasive, and has become a dominant interface metaphor. Just not on the computer desktop. VR isn’t about head-mounted displays, although it might have seemed so, fifteen years ago. VR is about bringing the body into contact with the simulated world. Nintendo, with its clever, cheap, attractive and highly functional Wiimote, has done just that. They’ve done what decades of other researchers and engineers failed to do: they’ve brought us into the game. So predictions might come to pass, but rarely do they come in the form imagined. But every so often, when you step up to the plate, you connect completely, and knock one out of the park.
In early December 2005 I was invited to give a plenary presentation to the Australian conference on Interaction and Entertainment Design. This was one of those rare opportunities to talk on any subject I desired. The assembled academics wanted to talk about the latest trends in gaming and online communities; having been through that, and more, a decade ago, I decided to take the conversation in an entirely different direction, by focusing on that most common of our electronic peripherals, the mobile phone.
So common as to be nearly invisible, the mobile phone has become the focal point of our social existence. Yet, despite its constant presence, the mobile seemed poorly fit to the task of being our perpetual servant. It seemed stuck in a liminal position, between the wired world and the pervasive networked environment which is the global reality of the 21st century. The mobile was broken, and needed to be fixed. Hence, working with Angus Fraser, my graduate student – who, on his own, has had years of experience developing interfaces and applications for mobile phones – I wrote “The Telephone Repair Handbook”. I started off by challenging the audience to answer three questions:
- Q: Why does a mobile phone have a keypad? We never use it.
A: Because wired phones have keypads. And so we can enter text. Badly.
- Q: How many networks are our mobile phones really connected to?
A: The answer is generally at least three: GPRS/GSM, Bluetooth and IrDA.
- Q: What are our phones doing all the time they’re idle?
A: Nothing. They’re just waiting for a phone call or a text message to make their day.
These basic failures in the design of the mobile phone, I argued, arose from our fundamental misunderstanding of the function of the device. Mobile phones are not simply passive terminals, waiting to be activated. They are (or rather, should be) active communications processors, managing the minutiae of our social relationships.
Once I’d set up the straw men, and knocked them down, I described a new kind of mobile phone, designed from the outset to be a communications servant, a nexus which tracked, facilitated and recorded all of the social interactions happening through it, or, via Bluetooth, proximal to it. And, because I can code, I demonstrated the very first version of Blue States, a small Java J2ME application which allowed mobile phones to note and record the presence of other Bluetooth devices in their immediate proximity. This information, I insisted, could become the foundation of an emergent social network. The mobile, at all times with you, or nearby, knows your social life better than you do. When exposed, and analyzed, this data becomes a powerful tool. Angus and I worked up a few user scenarios to demonstrate our point: the mobile can be so much more. All it needs is the right software. I finished by encouraging this room of researchers to re-invent the mobile phone, to make it the digital social secretary, the majordomo, and grand vizier.
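The core idea behind Blue States – periodic Bluetooth scans turned into an emergent social graph – can be sketched in a few lines. What follows is not the original J2ME code, just a minimal Python illustration of the logic under stated assumptions: each scan returns the set of device addresses currently in range, and repeated co-presence implies a social tie. All names here are hypothetical:

```python
from collections import Counter
from itertools import combinations

class ProximityLog:
    """Record which device addresses are seen together during periodic
    scans, and infer an emergent social graph from repeated co-presence."""

    def __init__(self):
        # (addr_a, addr_b) -> number of scans in which both appeared
        self.copresence = Counter()

    def record_scan(self, addresses):
        """Log one scan; every pair of addresses seen together in the
        same scan counts as one co-presence event."""
        for a, b in combinations(sorted(set(addresses)), 2):
            self.copresence[(a, b)] += 1

    def frequent_contacts(self, addr, threshold=2):
        """Devices seen alongside `addr` at least `threshold` times -
        the candidates for an inferred social tie."""
        contacts = {}
        for (a, b), n in self.copresence.items():
            if n >= threshold:
                if a == addr:
                    contacts[b] = n
                elif b == addr:
                    contacts[a] = n
        return contacts
```

Feed this a scan every few minutes and the strongest edges in the graph soon correspond to the people you actually spend your days with – which is exactly the “knows your social life better than you do” claim, reduced to a counter.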
A month after I gave that presentation, I left my teaching position, and began coding, full-time, on Blue States, readying it for its first deployment, at ISEA San Jose. As an art project at an art festival, it might influence the creative minds of electronic artists. Perhaps they would begin to pervert their own mobile phones, transforming them into something entirely more useful.
As it turns out, I didn’t have to wait for the artists to catch up with me. For it seems that even as I was beginning my research work, more than two years ago, and formulating my theories on the future of the mobile telephone, another group of researchers set to the same task, and came to many of the same conclusions.
Yesterday, on a stage in San Francisco, Steve Jobs, CEO of the now-renamed Apple, Inc., introduced the iPhone, Apple’s much-rumored and long-awaited convergence device. Three things must be noted as essential to the design:
- It has no keyboard.
- It is connected to wireless internet, Bluetooth and GPRS/EDGE networks simultaneously, and moves between each seamlessly.
- It has a sophisticated operating system, and is constantly executing several tasks at once. It is never truly idle.
The iPhone is a combination of an iPod and a mobile telephone, and these elements have been fused together with a fingertip-based user interface to make the device nearly as tactile and natural as any familiar object. It is a mobile phone, but it has – finally and rationally – lost its vestigial connections to the wired phone. It is not simply a wireless phone; it is a network terminal, with all that implies. That it has a true operating system – instead of the “toy” operating systems of earlier mobile phones, which are cranky, and which crash all too often – means that programmers can harness the capabilities of the device wholly, taking it in directions that its creators at Apple never intended. This is not simply an iPod, or a mobile phone, but a complete redefinition of the device. This, quite simply, is the future, as I predicted it, thirteen months ago.
Will the iPhone succeed? No one yet knows. The device is both new enough and different enough that significant changes in user behavior must follow in its wake. As with the Macintosh and its Graphical User Interface, this transformation might take a decade to become the dominant interaction paradigm. Or – given the level of hype and excitement seen in the media in the last twenty-four hours – it might be the right device, at the right time. It may be that Apple has not only told the world why the telephone must be reinvented, but has shown it how it should be done. If they have, the iPhone will make the iPod look like a weak overture. Copies and clones will proliferate, skirting to the edge of every one of Apple’s two hundred iPhone patents. And people will begin to have very different expectations for their mobile phones.
While the iPhone both excites and dazzles me with its ingenuity, design and inventiveness, I am not completely satisfied with it. It is still a phone, an iPod, and an “internet communicator” rolled into one. It is not, in any true sense, wholly integrated. There is no way for my friends in San Francisco, with their iPhones, to know what my favorite songs are, or what I’m listening to at the moment, or what I’m reading on the web, or who I’m texting. It is halfway to the social device which I see as the inevitable end point. But the rest is just software. The hardware platform is there, ready and waiting, and will be disrupted by a dozen innovations that no one can yet predict. But I do predict they will happen, in the next twelve to thirty-six months.
And so it begins.
Last week, YouTube began the laborious process of removing all clips of The Daily Show with Jon Stewart at the request of VIACOM, parent to Paramount Television, which runs Comedy Central, home to The Daily Show. This is no easy task; there are probably tens of thousands of clips of The Daily Show posted to YouTube. Not all of them are tagged well, so – despite its every effort – YouTube is going to miss some of them, opening themselves up to continuing legal action from VIACOM.
It is as all of YouTube’s users feared: now that billions of dollars are at stake, YouTube is playing by the rules. The free-for-all of video clip sharing which brought YouTube to greatness is now being threatened by that very success. Because YouTube is big enough to sue – part of Google, which has a market capitalization of over 160 billion dollars – it is now subject to the same legal restrictions on distribution as all of the other major players in media distribution. In other words, YouTube’s ability to hyperdistribute content has been entirely handicapped by its new economic vulnerability. Since this hyperdistribution capability is the quintessence of YouTube, one wonders what will happen. Can YouTube survive as its assets are slowly stripped away?
Mark Cuban’s warnings have come back to haunt us; Cuban claimed that only a moron would buy YouTube, built as it is on the purloined copyrights of others. Cuban’s critique overlooked the enormous value of YouTube’s peer-produced content, something I have noted elsewhere. Thus, this stripping of assets will not diminish the value of YouTube. Instead, it will reveal the true wealth of peer-production.
In the past week I’ve used YouTube at least five times daily – but not to watch The Daily Show. I’ve been watching a growing set of political advertisements, commentary and mashups, all leading up to the US midterm elections. YouTube has become the forum for the sharing of political videos, and, while some of them are brazenly lifted from CNN or FOX NEWS, most are produced by the campaigns, and are intended to be hyperdistributed as widely as possible. Political advertising and YouTube are a match made in heaven. When political activism crosses the line into citizen journalism (such as in the disturbing clips of people being roughed up by partisan thugs) that too is hyperdistributed via YouTube. Anything that’s captured on a video camera, or television tuner, or mobile telephone can (and frequently does) end up on YouTube in a matter of minutes.
Even as VIACOM executed their draconian copyrights, the folly of their old-school thinking became ever more apparent. Oprah featured a segment on Juan Mann, Sick Puppies and their now-entirely-overexposed video. It’s been up on YouTube for five weeks, has now topped five million views, and four major record labels are battling for the chance to sign Sick Puppies to a recording contract. It reveals the fundamental paradox of hyperdistribution: the more something is shared, the more valuable it becomes. Take The Daily Show off of YouTube, and fewer people will see it. Fewer people will want to catch the broadcast. Ratings will drop off. And you run the risk of someone else – Ze Frank, perhaps, or another talented upstart – filling the gap.
Yes, Comedy Central is offering The Daily Show on their website, for those who can remember to go there, can navigate through the pages to find the show they want, can hope they have the right video software installed, etc. But Comedy Central isn’t YouTube. It isn’t delivering half of the video seen on the internet. YouTube has become synonymous with video the way Google has become synonymous with search. Comedy Central ignores this fact at its peril, because it’s relying on a change in audience behavior.
Television producers are about to learn the same lessons that film studios and the recording industry learned before them: what the audience wants, it gets. Take your clips off of YouTube, and watch as someone else – quite illegally – creates another hyperdistribution system for them. Attack that system, and watch as it fades into invisibility. Those attacks will force it to evolve into ever-more-undetectable forms. That’s the lesson of music-sharing site Napster, and the lesson of torrent-sharing site Supernova. When you attack the hyperdistribution system, you always make the problem worse.
In its rude, thuggish way, VIACOM is asserting the primacy of broadcasting over hypercasting. VIACOM built an empire from television broadcasting, and makes enormous revenues from it. They’re unlikely to do anything that would encourage the audience toward a new form of distribution. At the same time, they’re powerless to stop that audience from embracing hyperdistribution. So now we get to see the great, unspoken truth of television broadcasting – it’s nothing special. Buy a chunk of radio spectrum, or a satellite transponder, or a cable provider: none of it gives you any inherent advantage in reaching the audience. Ten years ago, they were a lock; today, they’re only an opportunity. There are too many alternate paths to the audience – and the audience has too many paths to one another.
This doesn’t mean that broadcasting will collapse – at least not immediately. It does mean that – finally – there’s real competition. The five media megacorporations in the United States now have several hundred thousand motivated competitors. Only a few of these will reach the “gold standard” of high-quality production technique which characterizes broadcast media. The audience doesn’t care. The audience prizes immediacy, relevancy, accessibility, and above all, salience. There’s no way that five companies, however rich and productive, can satisfy the needs of an audience which has come to expect that it can get exactly what it wants, when it wants, wherever it wants. Furthermore, there’s no way to stop anything that gets broadcast by those companies from being hyperdistributed and added to the millions of available choices. You’d need to lock down every PC, every broadband connection, and every television in the world to maintain a level of control which, just a few years ago, came effortlessly.
VIACOM may sense the truth of this, even as they act against this knowledge. Rumors have been swirling around the net, indicating that YouTube and VIACOM have come to a deal, and that the clips will not be removed – this, while they’re still being deleted. VIACOM, caught in the inflection point between broadcasting and hypercasting, doesn’t fully understand where its future interests lie. In the meantime, it thrashes about as its lizard-brained lawyers revert to the reflexive habits of cease-and-desist.
This week, after two years of frustration and failure, I managed to install and configure MythTV. MythTV is a Linux-based digital video recorder (DVR) which has been in development for over four years. It has matured enormously in that time, but it still took every last one of my technical skills – plus a whole lot of newly-acquired ones – to get it properly set up. Even now, after some four days of configuration, I’m not quite finished. That puts MythTV miles out of the range of the average viewer, who just wants a box they can drop into their system, turn on, and play with. Those folks purchase a TiVo. But TiVo doesn’t work in Australia – at least, not without the same level of technical gymnastics required to install MythTV. If I had digital cable – spectacularly uncommon in Australia – I could use Foxtel iQ, a very polished DVR with multiple tuners, full program guide, etc. But I have all of that, right now, running on my PC, with MythTV.
I’ve never owned a DVR, though I have written about them extensively. The essential fact of the DVR is that it coaxes you away from television as a live medium. That’s an important point in Australia, where most of us have just five broadcast channels to pick from: frequently, there’s nothing worth watching. But, once you’ve set up the appropriate recording schedule on your DVR, the device is always filled with programming you want to watch. People with DVRs tend to watch 30% more television than those without, and they tend to enjoy it more, because they’re getting just the programmes they find most salient.
Last night – the first night of a relatively complete MythTV configuration – I went to attend a friend’s lecture, but left MythTV to record the evening’s news programmes. I came back in, and played the recorded programmes, but took full advantage of the DVR’s ability to jump through the content. I skipped news stories I’d seen earlier in the day (plus all of the sport reportage), and reviewed the segments I found most interesting. I watched two hours of television in about 45 minutes, and felt immensely satisfied at the end, because, for the first time, I could completely command the television broadcast, shaping it to the demands of salience. This is the way TV should be watched, I realized, and I knew there’d be no going back.
My DVR has a lot in common with YouTube. Both systems skirt the law; in my case the programming schedules which I download from a community-hosted site are arguably illegal under Australian copyright law, and recording a program at all – at least in Australia – is arguably illegal as well. (You don’t sue your audience, and you don’t waste your money suing a not-for-profit community site.) Both systems give me immediate access to content with enormous salience; I see just what I want, just when I want to. YouTube is home to peer-produced content, while the DVR houses professional productions, works that meet the “gold standard”. I have already begun to conceive of them as two halves of the same video experience.
It won’t be long before some enterprising hacker integrates the two meaningfully: perhaps a YouTube plugin for MythTV? (MythTV is a free and open source application, available for anyone to modify or improve.) Perhaps it will be some deal struck between the broadcasters and YouTube. Or perhaps both will occur. This would represent the kind of “convergence” much talked about in the late 1990s, and all but abandoned. Convergence has come; from my point of view it doesn’t matter whether I use MythTV or YouTube or their hybrid offspring. All I care about is watching the programmes that interest me. How they get delivered is nothing special.