What Ever Happened to the Book?

For Ted Nelson

I: Centrifugal Force

We live in the age of networks.  Wherever we are, five billion of us are continuously and ubiquitously connected.  That’s everyone over the age of twelve who earns more than about two dollars a day.  The network has us all plugged into it.  Yet this is only the more recent, and more explicit network.  Networks are far older than this most modern incarnation; they are the foundation of how we think.  That’s true at the most concrete level: our nervous system is a vast neural network.  It’s also true at a more abstract level: our thinking is a network of connections and associations.  This is necessarily reflected in the way we write.

I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982.  Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts.  Like it or not, we implicitly reference other texts with every word we write.  It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts.  It’s the secret to our success.  Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth.  He never got it built.  But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.

As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed qualities as a medium which had hitherto been unsuspected.  While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure.  Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone.  Do we answer?  Do we click and follow?  A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost.  The linear text is constantly tugged at by a secondary, ‘centrifugal’ force, trying to tear the reader away from the inertia of the text, and on into another space.  The more heavily linked a particular hypertext document is, the greater this pressure.

Consider two different documents that might be served up in a Web browser.  One of them is an article from the New York Times Magazine.  It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links.  Many of these links point back to other New York Times articles.  This article stands alone.  It is a hyperdocument, but it has not embraced the capabilities of the medium.  It has not been seduced.  It is a spinster, of sorts, confident in its purity and haughty in its isolation.  This article is hardly unique.  Nearly all articles I could point to from any professional news source display the same characteristics of separateness and resistance to connecting with the medium they employ.  We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized.  Every link presents an escape route, and a potential loss of income.  Hence, links are kept to a minimum, the losses staunched.  Disappointingly, this has become a model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium.  The tone has been set.

On the other hand, consider an average article in Wikipedia.  It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links.  Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they’re unfamiliar with, or explore some tangential bit of knowledge, but there also will be plenty of links that face out, into the rest of the Web.  This is a hyperdocument which has embraced the nature of the medium, which is not afraid of luring readers away under the pressure of linkage.  Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention.  Without this competition of intentions, Wikipedia is itself an example of another variety of purity, the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.

Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug.  Landing somewhere with a paucity of links doesn’t constrain your ability to move non-linearly.  If nothing else, the browser’s ‘Back’ button continually offers that opportunity, as do all of your bookmarks, the links that lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site.  In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads.  In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it.  This is the fundamental tension of all hypertext, but here heightened and amplified because it is resisted and forbidden.  It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.

This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents.  It is most obvious in the way we now absorb news.  Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper.  Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on.  We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way.  The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole.  Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper ‘vehicle’.

The newspaper as we have known it has been shredded.  This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext.  We are the ones who feel the lure of the link; no machine can do that.  Newspapers made the brave decision to situate themselves as islands within a sea of hypertext.  Though they might believe themselves singular, they are not the only islands in the sea.  And we all have boats.  That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.

The lure of the link has a two-fold effect on our behavior.  With its centrifugal force, it is constantly pulling us away from wherever we are.  It also presents us with an opportunity cost.  When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment.  If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it.  Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this?  Does my need and interest outweigh all of the other demands upon my attention?  Can I focus?

In most circumstances, we will decline the challenge.  Whatever it is, it is not salient enough, not alluring enough.  It is not so much that we fear commitment as we feel the pressing weight of our other commitments.  We have other places to spend our limited attention.  This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”.  It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.

The emergence of the ‘tl;dr’ phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span.  Attention spans are not shortening: ten-year-olds will still drop everything to read a nine-hundred-page fantasy novel for eight days.  Instead, attention has entered an era of hypercompetitive development.  Twenty years ago only a few media clamored for our attention.  Now, everything from video games to chatroulette to real-time Twitter feeds to text messages demands our attention.  Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, all figuring into the calculation we make when we decide to go all in or hold back.

The most obvious effect of this hypercompetitive development of attention is the shortening of the text.  Under the tyranny of ‘tl;dr’ three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment.  More and more, our diet of text comes in these ‘bite-sized’ chunks.  Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything.  The truth is more complex.  Our diet will continue to consist of a mixture of short and long-form texts.  In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form.  It is digestible.  But it need not be vacuous.  Countless specialty blogs deliver highly-concentrated texts to audiences who need no introduction to the subject material.  They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit.  Here, the phenomenon of ‘tl;dr’ reveals its Achilles’ heel: the shorter the text, the less invested you are.  You give way more easily to centrifugal force.  You are more likely to navigate away.

There is a cost incurred both for substance and the lack thereof.  Such are the dilemmas of hypertext.

II:  Schwarzschild Radius

It appears inarguable that 2010 is the Year of the Electronic Book.  The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market.  Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package.  Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world.  The electronic book is an inevitability.

At this point a question needs to be asked: what’s so electronic about an electronic book?  If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar.  Too familiar.  This is not an electronic book.  This is ‘publishing in light’.  I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning.  An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters.  It doesn’t even necessarily begin with that translation.  Instead, first consider the text qua text.  What is it?  Who is it speaking to?  What is it speaking about?

These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light.  That act of murder would give us less than we had before, because texts published in light essentially disavow the medium within which they are situated.  They are less useful than typeset texts, purposely stripped of their utility to be shoehorned into a new medium.  This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader.  Nor does it make the electronic book an intrinsically alluring object.  That’s an interesting point to consider, because hypertext is intrinsically alluring.  The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who had any relationship to the text.  If an electronic book does not offer a new relationship to the text, then what precisely is the point?  Portability?  Ubiquity?  These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring.  This is the visible difference between a book that has been printed in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is impossibly alluring.  At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.

Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium.  But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe.  Today’s publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium.  For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.

What does the electronic book look like?  Does it differ at all from the hyperdocuments we are familiar with today?  In fifteen years of design experimentation, we’ve learned a lot of ways to present, abstract and play with text.  All of these are immediately applicable to the electronic book.  The electronic book should represent the best of what 2010 has to offer and move forward from that point into regions unexplored.  The printed volume took nearly fifty years to evolve into its familiar hand-sized editions.  Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book.  We shouldn’t try to constrain our idea of what an electronic book can be based upon what the book has been.  Over the next few years, our innovations will surprise us.  We won’t really know what electronic books look like until we’ve had plenty of time to play with them.

The electronic book will not be immune from the centrifugal force which is inherent to the medium.  Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument.  Yet we come to books with a sense of commitment.  We want to finish them.  But what, exactly, do we want to finish?  The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does.  So does an electronic book have a beginning and an end?  Or is it simply a densely clustered set of texts with a well-defined path traversing them?  From the vantage point of 2010 this may seem like a faintly ridiculous question.  I doubt that will be the case in 2020, when perhaps half of our new books are electronic books.  The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book.  There is no way that the electronic book can remain apart, indifferent and pure.  It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader.  More of a gradient than a boundary.

It remains unclear how any such construction can constitute an economically successful entity.  Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe.  The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen.  This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear.  Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.

We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books.  (For those not familiar with astrophysics, the Schwarzschild Radius is the boundary to a black hole.  Once you’re on the wrong side you’re doomed to fall all the way in.)  On one side – our side – things look much as they do today.  Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical.  On the other side, electronic books rapidly become almost completely unrecognizable.  It’s not just the financial model which disintegrates.  As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform.  The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words.  Each of these fragments points directly at its antecedent and descendant, or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each.  The primacy of the single narrative can not withstand the centrifugal force of the medium, any more than the newspaper or the magazine could.  Texts will present themselves as intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both.  This will completely confound our expectations of linearity in the text.

We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward, a fault more of our nervous systems than our browsers.  We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on.  This is the greatest threat to the book, that it simply expands beyond our ability to focus upon it.  Our consciousness can entertain a universe of thought, but it can not entertain the entire universe at once.  Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it.  With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason, significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.

What ever happened to the book?  It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever.  This is not an ending, any more than birth is an ending.  But it is a transition, at least as profound and comprehensive as the invention of moveable type.  It’s our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book.  Transitions are chaotic, but they are also fecund.  The seeds of the new grow in the humus of the old.  (And if it all seems sudden and sinister, I’ll simply note that Nietzsche said that a new era nearly always looks demonic to the age it obsolesces.)

III:  Finnegans Wiki

So what of Aristotle?  What does this mean for the narrative?  It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts.  But what about stories?  From time out of mind we have listened to stories told by the campfire.  The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale.  For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.

Will we lose all of this?  Can narratives stand up against the centrifugal forces of hypertext?  Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form.  The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind.  There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot.  A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity. Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force.  We occasionally stay up all night reading a book that we ‘couldn’t put down’, precisely because of this momentum.  It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.

This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books.  Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet.  How can any text hope to stand against that?

And yet, some do.  Children unplugged to read each of the increasingly-lengthy Harry Potter novels, as teenagers did for the Twilight series.  Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination.  None of this is high literature, but it is literature capable of resisting all our alluring distractions.  This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc.  We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad.  That is one direction, a direction literary publishers will pursue, because that’s where the money lies.

There are two other paths open for literature, nearly diametrically opposed.  The first was taken by JRR Tolkien in The Lord of the Rings.  Although hugely popular, the three-book series has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating.  Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it.  And although readers do finish the book, in a very real sense they do not leave that universe.  The fantasy genre, which Tolkien single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations.  Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books.  Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy.  Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers.  This is another direction for the book.  While every author will not be a Tolkien, a few authors will work hard to create a universe so potent and broad that readers will be tempted to inhabit it.  (Some argue that this is the secret of JK Rowling’s success.)

Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext.  There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures.  But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas.  The book was written and published before the digital computer had been invented, yet even features an innovation which is reminiscent of hypertext.  That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power.  It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite.  The text is overloaded with meaning, so much so that the mind can’t take it all in.  Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant.  But there is another possibility.  In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed.  In this, Finnegans Wake could be seen as a type of science fiction, not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works) but rather a text that pointed the way to what all texts would become, performance by example.  As texts become electronic, as they melt and dissolve and  link together densely, meaning multiplies exponentially.  Every sentence, and every word in every sentence, can send you flying in almost any direction.  The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.

It has been said that all of human culture could be reconstituted from Finnegans Wake.  As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture.  Everything will be there, all strung together.  And that’s what happened to the book.

Dense and Thick

I: The Golden Age

In October of 1993 I bought myself a used SPARCstation.  I’d just come off of a consulting gig at Apple, and, flush with cash, wanted to learn UNIX systems administration.  I also had some ideas about coding networking protocols for shared virtual worlds.  Soon after I got the SPARCstation installed in my lounge room – complete with its thirty-kilo monster of a monitor – I grabbed a modem, connected it to the RS-232 port, configured SLIP, and dialed out onto the Internet.  Once online I used FTP, logged into SUNSITE and downloaded the newly released NCSA Mosaic, a graphical browser for the World Wide Web.

I’d first seen Mosaic running on an SGI workstation at the 1993 SIGGRAPH conference.  I knew what hypertext was – I’d built a MacOS-based hypertext system back in 1986 – so I could see what Mosaic was doing, but there wasn’t much there.  Not enough content to make it really interesting.  The same problem that had bedeviled all hypertext systems since Douglas Engelbart’s first demo, back in 1968.  Without sufficient content, hypertext systems are fundamentally uninteresting.  Even HyperCard, Apple’s early experiment in hypertext, never really moved beyond the toy stage.  To make hypertext interesting, it must be broadly connected – beyond a document, beyond a hard drive.  Either everything is connected, or everything is useless.

In the three months between my first click on NCSA Mosaic and when I fired it up in my lounge room, a lot of people had come to the Web party.  The master list of Websites – maintained by CERN, the birthplace of the Web – kept growing.  Over the course of the last week of October 1993, I visited every single one of those Websites.  Then I was done.  I had surfed the entire World Wide Web.  I was even able to keep up, as new sites were added.

This gives you a sense of the size of the Web universe in those very early days.  Before the explosive ‘inflation’ of 1994 and 1995, the Web was a tiny, tidy place filled mostly with academic websites.  Yet even so, the Web had the capacity to suck you in.  I’d find something that interested me – astronomy, perhaps, or philosophy – and with a click-click-click find myself deep within something that spoke to me directly.  This, I believe, is the core of the Web experience, an experience that we’re so many years away from we tend to overlook it.  At its essence, the Web is personally seductive.

I realized the universal truth of this statement on a cold night in early 1994, when I dragged my SPARCstation and boat-anchor monitor across town to a house party.  This party, a monthly event known as Anon Salon, was notorious for attracting the more intellectual and artistic crowd in San Francisco.  People would come to perform, create, demonstrate, and spectate.  I decided I would show these people this new-fangled thing I’d become obsessed with.  So, that evening, as the front door opened and another person entered, I’d sidle alongside them, and ask them, “So, what are you interested in?”  They’d mention their current hobby – gardening or vaudeville or whatever it might be – and I’d use the brand-new Yahoo! category index to look up a web page on the subject.  They’d be delighted, and begin to explore.  At no point did I say, “This is the World Wide Web.”  Nor did I use the word ‘hypertext’.  I let the intrinsic seductiveness of the Web snare them, one by one.

Of course, a few years later, San Francisco became the epicenter of the Web revolution.  Was I responsible for that?  I’d like to think so, but I reckon San Francisco was a bit of a nexus.  I wasn’t the only one exploring the Web.  That night at Anon Salon I met Jonathan Steuer, who walked on up and said, “Mosaic, hmm?  How about you type in ‘www.hotwired.com’?”  Steuer was part of the crew at work, just a few blocks away, bringing WIRED magazine online.  Everyone working on the Web shared the same fervor – an almost evangelical belief that the Web changes everything.  I didn’t have to tell Steuer, and he didn’t have to tell me.  We knew.  And we knew if we simply shared the Web – not the technology, not its potential, but its real, seductive human face – we’d be done.

That’s pretty much how it worked out: the Web exploded from the second half of 1994, because it appeared to every single person who encountered it as the object of their desire.  It was, and is, all things to all people.  This makes it the perfect love machine – nothing can confirm your prejudices better than the Web.  It also makes the Web a very pretty hate machine.  It is the reflector and amplifier of all things human.  We were completely unprepared, and for that reason the Web has utterly overwhelmed us.  There is no going back.  If every website suddenly crashed, we would find another way to recreate the universal infinite hypertextual connection.

In the process of overwhelming us – in fact, part of the process itself – the Web has hoovered up the entire space of human culture; anything that can be digitized has been sucked into the Web.  Of course, this presents all sorts of thorny problems for individuals who claim copyright over cultural products, but they are, in essence, swimming against the tide.  The rest, everything that marks us as definably human, everything that is artifice, has, over the last fifteen years, been neatly and completely sucked into the space of infinite connection.  The project is not complete – it will never be complete – but it is substantially underway, and more will simply be more: it will not represent a qualitative difference.  We have already arrived at a new space, where human culture is now instantaneously and pervasively accessible to any of the four and a half billion network-connected individuals on the planet.

This, then, is the Golden Age, a time of rosy dawns and bright beginnings, when everything seems possible.  But this age is drawing to a close.  Two recent developments will, in retrospect, be seen as the beginning of the end.  The first of these is the transformation of the oldest medium into the newest.  The book is coextensive with history, with the largest part of what we regard as human culture.  Until five hundred and fifty years ago, books were handwritten, rare and precious.  Moveable type made books a mass medium, and lit the spark of modernity.  But the book, unlike nearly every other medium, has resisted its own digitization.  This year the defenses of the book have been breached, and ones and zeroes are rushing in.  Over the next decade perhaps half or more of all books will ephemeralize,  disappearing into the ether, never to return to physical form.  That will seal the transformation of the human cultural project.

The second development is the arrival of the Web-as-appliance, which means the Web is now leaving the rarefied space of computers and mobiles-as-computers, and will now be seen as something as mundane as a book or a dinner plate.  Apple’s iPad is the first device of an entirely new class which treats the Web as an appliance, as something that is pervasively just there when needed, and put down when not.  The genius of Apple’s design is its extreme simplicity – too simple, I might add, for most of us.  It presents the Web as a surface, nothing more.  iPad is a portal into the human universe, stripped of everything that is a computer.  It is emphatically not a computer.  Now, we can discuss the relative merits of Apple’s design decisions – and we will, for some years to come.  But the basic strength of the iPad’s simplistic design will influence what the Web is about to become.

eBooks and the iPad bookend the Golden Age; together they represent the complete translation of the human universe into a universally and ubiquitously accessible form.  But the human universe is not the whole universe.  We tend to forget this as we stare into the alluring and seductive navel of our ever-more-present culture.  But the real world remains, and loses none of its importance even as the flashing lights of culture grow brighter and more hypnotic.

II: The Silver Age

Human beings have the peculiar capability to endow material objects with inner meaning.  We know this as one of the basic characteristics of humanness.  From the time a child anthropomorphizes a favorite doll or wooden train, we imbue the material world with the attributes of our own consciousness.  Soon enough we learn to discriminate between the animate and the inanimate, but we never surrender our continual attribution of meaning to the material world.  Things are never purely what they appear to be, instead we overlay our own meanings and associations onto every object in the world.  This process actually provides the mechanism by which the world comes to make sense to us.  If we could not overload the material world with meaning, we could not come to know it or manipulate it.

This layer of meaning is most often implicit; only in works of ‘art’ does the meaning crowd into the definition of the material itself.  But none of us can look at a thing and be completely innocent about its hidden meanings.  They constantly nip at the edges of our consciousness, unless, Zen-like, we practice an ‘emptiness of mind’, and attempt to encounter the material in an immediate, moment-to-moment awareness.  For those of us not in such a blessed state, the material world has a subconscious component.  Everything means something.  Everything is surrounded by a penumbra of meaning, associations that may be universal (an apple can invoke the Fall of Man, or Newton’s Laws of Gravity), or something entirely specific.  Through all of human history the interiority of the material world has remained hidden except in such moments as when we choose to allude to it.  It is always there, but rarely spoken of.  That is about to change.

One of the most significant, yet least understood implications of a planet where everyone is ubiquitously connected to the network via the mobile is that it brings the depth of the network ubiquitously to the individual.  You are – amazingly – connected to the other five billion individuals who carry mobiles, and you are also connected to everything that’s been hoovered into cyberspace over the past fifteen years.  That connection did not become entirely apparent until last year, as the first mobiles appeared with both GPS and compass capabilities.  Suddenly, it became possible to point through the camera on a mobile, and – using the location and orientation of the device – search through the network.
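To make the mechanics concrete, here is a minimal sketch, in Python, of how such a query might be formed.  The geo-search endpoint named below is hypothetical, and a real service would return far richer results, but the principle is the one just described: project a point ahead of the device from its latitude, longitude and compass bearing, then ask the network what sits there.

    # A minimal sketch of an augmented-reality lookup: the device knows where it
    # is (GPS) and which way it faces (compass), and uses both to ask the network
    # "what is in front of me?"  The endpoint and field names are hypothetical.
    import math
    import json
    import urllib.request

    def point_of_interest(lat, lon, bearing_deg, distance_m=50):
        """Project a point distance_m metres ahead of the device along its bearing."""
        earth_radius = 6371000.0
        d = distance_m / earth_radius
        lat1, lon1, brng = map(math.radians, (lat, lon, bearing_deg))
        lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                         math.cos(lat1) * math.sin(d) * math.cos(brng))
        lon2 = lon1 + math.atan2(math.sin(brng) * math.sin(d) * math.cos(lat1),
                                 math.cos(d) - math.sin(lat1) * math.sin(lat2))
        return math.degrees(lat2), math.degrees(lon2)

    def search_network(lat, lon, bearing_deg):
        """Ask a (hypothetical) geo-search service what lies in the camera's view."""
        target_lat, target_lon = point_of_interest(lat, lon, bearing_deg)
        url = ("https://example.org/geo-search?lat=%.6f&lon=%.6f&radius=25"
               % (target_lat, target_lon))
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read())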

This technique has become known as ‘Augmented Reality’, or AR, and it promises to be one of the great growth areas in technology over the next decade – but perhaps not for the reasons the leaders of the field currently envision.  The strength of AR is not what it brings to the big things – the buildings and monuments – but what it brings to the smallest and most common objects in the material world.  At present, AR is flashy, but not at all useful.  It’s about to make a transition.  It will no longer be spectacular, but we’ll wonder how we lived without it.

Let me illustrate the nature of this transition with examples drawn from my own experience.  These three ‘thought experiments’ represent the different axes of a world making the transition from implicit meaning to a world where the implicit has become explicit.  Once meaning is exposed, it can be manipulated: this is something unexpected, and unexpectedly powerful.

Example One:  The Book

Last year I read a wonderful book.  The Rest is Noise: Listening to the Twentieth Century, by Alex Ross, is a thorough and thoroughly enjoyable history of music in the 20th century.  By music, Ross means what we would commonly call ‘classical’ music, even though the Classical period ended some two hundred years ago.  That’s not as stuffy as it sounds: George Gershwin and Aaron Copland are both major figures in 20th century music, though their works have always been classed as ‘popular’.

Ross’ book has a companion website, therestisnoise.com, which offers up chapter-by-chapter samples of the composers whose lives and exploits he explores in the text.  When I wrote The Playful World, back in 2000, and built a companion website to augment the text, it was considered quite revolutionary, but this is all pretty much standard for better books these days.

As I said earlier, the book is on the edge of ephemeralization.  It wants to be digitized, because it has always been a message, encoded.  When I dreamed up this example, I thought it would be very straightforward: you’d walk into your bookstore, point your smartphone at a book that caught your fancy, and instantly you’d find out what your friends thought of it, what their friends thought of it, what the reviewers thought of it, and so on.  You’d be able to make a well-briefed decision on whether this book is the right book for you.  Simple.  In fact, Google Labs has already shown a basic example of this kind of technology in a demo running on Android.
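That kind of lookup is easy to sketch.  The fragment below is a minimal sketch in Python: the social graph, the reviews and the ISBN are all invented, and no particular service is assumed.  It simply walks outward from you through your friends, and their friends, collecting whatever opinions it finds for the scanned book.

    # A sketch of the book-in-the-shop scenario: scan an ISBN, then gather
    # opinions outward through a social graph.  The graph, reviews and ISBN are
    # invented placeholders; a real version would query whatever services your
    # friends actually use.
    friends = {
        "me": ["alice", "bob"],
        "alice": ["carol"],
        "bob": ["dave"],
    }
    reviews = {
        "alice": {"9780000000001": "Loved it - read it twice."},
        "carol": {"9780000000001": "Slow start, worth persisting."},
    }

    def opinions(isbn, person="me", depth=2):
        """Collect reviews of isbn from a person's network, out to depth hops."""
        seen, frontier, found = {person}, [person], []
        for _ in range(depth):
            next_frontier = []
            for p in frontier:
                for friend in friends.get(p, []):
                    if friend in seen:
                        continue
                    seen.add(friend)
                    next_frontier.append(friend)
                    if isbn in reviews.get(friend, {}):
                        found.append((friend, reviews[friend][isbn]))
            frontier = next_frontier
        return found

    print(opinions("9780000000001"))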

But that’s not what a book is anymore.  Yes, it’s good to know whether you should buy this or that book, but a book represents an investment of time, and an opportunity to open a window into an experience of knowledge in depth.  It’s this intention that the device has to support.  As the book slowly dissolves into the sea of fragmentary but infinitely threaded nodes of hypertext which are the human database, the device becomes the focal point, the lens through which the whole book appears, and appears to assemble itself.

This means that the book will vary, person to person.  My fragments will be sewn together with my threads, yours with your threads.  The idea of unitary authorship – persistent over the last five hundred years – won’t be overwhelmed by the collective efforts of crowdsourcing, but rather by the corrosive effects of hyperconnection.  The more connected everything becomes, the less prone we are to linearity.  We already see this in the ‘tl;dr’ phenomenon, where any text over 300 words becomes too onerous to read.

Somehow, whatever the book is becoming must balance the need for clarity and linearity against the centrifugal and connective forces of hypertext.  The book is about to be subsumed within the network; the device is the place where it will reassemble into meaning.  The implicit meaning of the book – that it has a linear story to tell, from first page to last – must be made explicit if the idea and function of the book is to survive.

The book stands on the threshold, between the worlds of the physical and the immaterial.  As such it is pulled in both directions at once.  It wants to be liberated, but will be utterly destroyed in that liberation.  The next example is something far more physical, and, consequentially, far more important.

Example Two: Beef Mince

I go into the supermarket to buy myself the makings for a nice Spaghetti Bolognese.  Among the ingredients I’ll need some beef mince (ground beef for those of you in the United States) to put into the sauce.  Today I’d walk up to the meat case and throw a random package into my shopping trolley.  If I were being thoughtful, I’d probably read the label carefully, to make sure the expiration date wasn’t too close.  I might also check to see how much fat is in the mince.  Or perhaps it’s grass-fed beef.  Or organically grown.  All of this information is offered up on the label placed on the package.  And all of it is so carefully filtered that it means nearly nothing at all.

What I want to do is hold my device up to the package, and have it do the hard work.  Go through the supermarket to the distributor, through the distributor to the abattoir, through the abattoir to the farmer, through the farmer to the animal itself.  Was it healthy?  Where was it slaughtered?  Is that abattoir healthy?  (This isn’t much of an issue in Australia or New Zealand, but in America things are quite a bit different.)  Was it fed lots of antibiotics in a feedlot?  Which ones?

And – perhaps most importantly – what about the carbon footprint of this little package of mince?  How much CO2 was created?  How much methane?  How much water was consumed?  These questions, at the very core of 21st century life, need to be answered on demand if we are to be expected to adjust our lifestyles so as to minimize our footprint on the planet.  Without a system like this, it is essentially impossible.  With such a system it can potentially become easy.  As I walk through the market, popping items into my trolley, my device can record and keep me informed of a careful balance between my carbon budget and my financial budget, helping me to optimize both – all while referencing my purchases against sales on offer in other supermarkets.
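The bookkeeping such a device would have to do is simple enough to sketch.  In the fragment below – a minimal sketch in Python, where the prices and carbon figures are invented for illustration, and a real device would pull them from the kind of provenance data described above – each scanned item updates a running balance of both budgets.

    # A sketch of the running tally a shopping assistant might keep.  The
    # figures attached to each item are invented for illustration only.
    class ShoppingAssistant:
        def __init__(self, carbon_budget_kg, money_budget):
            self.carbon_budget_kg = carbon_budget_kg
            self.money_budget = money_budget
            self.trolley = []

        def scan(self, name, price, carbon_kg):
            """Record a scanned item and report how both budgets are tracking."""
            self.trolley.append((name, price, carbon_kg))
            spent = sum(p for _, p, _ in self.trolley)
            emitted = sum(c for _, _, c in self.trolley)
            return {
                "item": name,
                "money_remaining": self.money_budget - spent,
                "carbon_remaining_kg": self.carbon_budget_kg - emitted,
            }

    # Hypothetical usage; both budgets and both items are illustrative.
    assistant = ShoppingAssistant(carbon_budget_kg=20.0, money_budget=100.0)
    print(assistant.scan("beef mince 500g", price=8.50, carbon_kg=13.5))
    print(assistant.scan("pasta 500g", price=1.80, carbon_kg=0.8))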

Finally, what about the caloric count of that packet of mince?  And its nutritional value?  I should be tracking those as well – or rather, my device should – so that I can maintain optimal health.  I should know whether I’m getting too much fat, or insufficient fiber, or – as I’ll discuss in a moment – too much sodium.  Something should be keeping track of this.  Something that can watch and record and use that recording to build a model.  Something that can connect the real world of objects with the intangible set of goals that I have for myself.  Something that could do that would be exceptionally desirable.  It would be as seductive as the Web.

The more information we have at hand, the better the decisions we can make for ourselves.  It’s an idea so simple it is completely self-evident.  We won’t need to convince anyone of this, to sell them on the truth of it.  They will simply ask, ‘When can I have it?’  But there’s more.  My final example touches on something so personal and so vital that it may become the center of the drive to make the implicit explicit.

Example Three:  Medicine

Four months ago, I contracted adult-onset chickenpox.  Which was just about as much fun as that sounds.  (And yes, since you’ve asked, I did have it as a child.  Go figure.)  Every few days I had doctors come by to make sure that I was surviving the viral infection.  While the first doctor didn’t touch me at all – understandably – the second doctor took my blood pressure, and showed me the reading – 160/120, a bit too uncomfortably high.  He suggested that I go on Micardis, a common medication for hypertension.  I was too sick to argue, so I dutifully filled the prescription and began taking it that evening.

Whenever I begin taking a new medication – and I’m getting to an age where that happens with annoying regularity – I am always somewhat worried.  Medicines are never perfect; they work for a certain large cohort of people.  For others they do nothing at all.  For a far smaller number, they might be toxic.  So, when I popped that pill in my mouth I did wonder whether that medicine might turn out to be poison.

The doctor who came to see me was not my regular GP.  He did not know my medical history.  He did not know the history of the other medications I had been taking.  All he knew was what he saw when he walked into my flat.  That could be a recipe for disaster.  Not in this situation – I was fine, and have continued to take Micardis – but there are numerous other situations where medications can interact within the patient to cause all sorts of problems.  This is well known.  It is one of the drawbacks of modern pharmaceutical medicine.

This situation is only going to grow more intense as the population ages and pharmaceutical management of the chronic diseases of aging becomes ever-more-pervasive.  Right now we rely on doctors and pharmacists to keep their own models of our pharmaceutical consumption.  But that’s a model which is precisely backward.  While it is very important for them to know what drugs we’re on, it is even more important for us to be able to manage that knowledge for ourselves.  I need to be able to point my device at any medicine, and know, more or less immediately, whether that medicine will cure me or kill me.

Over the next decade the cost of sequencing an entire human genome will fall from the roughly $5000 it costs today to less than $500.  Well within the range of your typical medical test.  Once that happens, it will be possible to compile epidemiological data which compares various genomes to the effectiveness of drugs.  Initial research in this area has already shown that some drugs are more effective among certain ethnic groups than others.  Our genome holds the clue to why drugs work, why they occasionally don’t, and why they sometimes kill.

The device is the connection point between our genome – which lives, most likely, somewhere out on a medical cloud – and the medicines we take, and the diagnoses we receive.  It is our interface to ourselves, and in that becomes an object of almost unimaginable importance.  In twenty years’ time, when I am ‘officially’ a senior, I will have a handheld device – an augmented reality – whose sole intent is to keep me as healthy as possible for as long as possible.  It will encompass everything known about me medically, and will integrate with everything I capture about my own life – my activities, my diet, my relationships.  It will work with me to optimize everything we know about health (which is bound to be quite a bit by 2030) so that I can live a long, rich, healthy life.

These three examples represent the promise bound up in the collision between the handheld device and the ubiquitous, knowledge-filled network.  There are already bits and pieces of much of this in place.  It is a revolution waiting to happen.  That revolution will change everything about the Web, and why we use it, how, and who profits from it.

III:  The Bronze Age

By now, some of you sitting here listening to me this afternoon are probably thinking, “That’s the Semantic Web.  He’s talking about the Semantic Web.”  And you’re right, I am talking about the Semantic Web.  But the Semantic Web as proposed and endlessly promoted by Sir Tim Berners-Lee was always about pushing, pushing, pushing to get the machines talking to one another.  What I have demonstrated in these three thought experiments is a world that is intrinsically so alluring and so seductive that it will pull us all into it.  That’s the vital difference which made the Web such a success in 1994 and 1995.  And it’s about to happen once again.

But we are starting from near zero.  Right now, I should be able to hold up my device, wave it around my flat, and have an interaction with the device about what’s in my flat.  I can not.  I can not Google for the contents of my home.  There is no place to put that information, even if I had it, nor systems to put that information to work.  It is exactly like the Web in 1993: the lights on, but nobody home.  We have the capability to conceive of the world-as-a-database.  We have the capability to create that database.  We have systems which can put that database to work.  And we have the need to overlay the real world with that rich set of data.

We have the capability, we have the systems, we have the need.  But we have precious little connecting these three.  These are not businesses that exist yet.  We have not brought the real world into our conception of the Web.  That will have to change.  As it changes, the door opens to a crescendo of innovations that will make the Web revolution look puny in comparison.  There is an opportunity here to create industries bigger than Google, bigger than Microsoft, bigger than Apple.  As individuals and organizations figure out how to inject data into the real world, entirely new industry segments will be born.

I can not tell you exactly what will fire off this next revolution.  I doubt it will be the integration of Wikipedia with a mobile camera.  It will be something much more immediate.  Much more concrete.  Much more useful.  Perhaps something concerned with health.  Or with managing your carbon footprint.  Those two seem the most obvious to me.  But the real revolution will probably come from a direction no one expects.  It’s nearly always that way.

There’s no reason to think that Wellington couldn’t be the epicenter of that revolution.  There was nothing special about San Francisco back in 1993 and 1994.  But, once things got started, they created a ‘virtuous cycle’ of feedbacks that brought the best-and-brightest to San Francisco to build out the Web.  Wellington is doing that for the film industry; why shouldn’t it stretch out a bit, and invent this next-generation ‘web of things’?

This is where the future is entirely in your hands.  You can leave here today promising yourself to invent the future, to write meaning explicitly onto the real world, to transform our relationship to the universe of objects.  Or, you can wait for someone else to come along and do it.  Because someone inevitably will.  Every day, the pressure grows.  The real world is clamoring to crawl into cyberspace.  You can open the door.

The Power of Sharing


Inaugural address for the “What’s the Big Idea?” lecture series, at the Bundeena Bowls Club in Bundeena, a small community (pop. 3500) just south of Sydney in Royal National Park.

Synopsis: Sharing :: Hyperconnectivity

The Day TV Died

On the 18th of October in 2004, a UK cable channel, SkyOne, broadcast the premiere episode of Battlestar Galactica, writer-producer Ron Moore’s inspired revisioning of the decidedly campy 70s television series. SkyOne broadcast the episode as soon as it came off the production line, but its US production partner, the SciFi Channel, decided to hold off until January – a slow month for television – before airing the episodes. The audience for Battlestar Galactica, young and technically adept, made digital recordings of the broadcasts as they went to air, cut out the commercial breaks, then posted them to the Internet.

For an hour-long television programme, a lot of data needs to be dragged across the Internet, enough to clog up even the fastest connection. But these young science fiction fans used a new tool, BitTorrent, to speed the bits on their way. BitTorrent allows a large number of computers (in this case, over 10,000 computers were involved) to share the heavy lifting. Each of the computers downloaded pieces of Battlestar Galactica, and as each got a piece, it offered that piece up to any other computer which wanted a copy. Like a forest of hands each trading puzzle pieces, each computer quickly assembled a complete copy of the show.
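The essential mechanism is easy to sketch. The toy simulation below, in Python, leaves out everything that makes real BitTorrent practical – trackers, piece hashing, tit-for-tat scheduling – but it shows the core idea: each computer grabs pieces it lacks from whichever peer already has them, and offers up everything it holds in return.

    # A toy simulation of the piece-swapping idea behind BitTorrent.  One
    # "seeder" starts with the whole file; every other peer starts empty and
    # trades pieces with the rest of the swarm until everyone has a full copy.
    import random

    NUM_PIECES = 100
    NUM_PEERS = 20

    seeder = set(range(NUM_PIECES))            # one peer holds the complete file
    peers = [set() for _ in range(NUM_PEERS)]  # everyone else starts with nothing

    rounds = 0
    while not all(len(p) == NUM_PIECES for p in peers):
        rounds += 1
        for p in peers:
            # Pick a random counterpart (another peer or the seeder) to trade with.
            other = random.choice(peers + [seeder])
            missing = other - p
            if missing:
                p.add(random.choice(sorted(missing)))  # download one piece we lack

    print("Every peer completed the file after", rounds, "rounds of swapping.")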

All of this happened within a few hours of Battlestar Galactica going to air. That same evening, on the other side of the Atlantic, American fans watched the very same episode that their fellow fans in the UK had just viewed. They liked what they saw, and told their friends, who also downloaded the episode, using BitTorrent. Within just a few days, perhaps a hundred thousand Americans had watched the show.

US cable networks regularly count their audience in hundreds of thousands. A million would be considered incredibly good. Executives for SciFi Channel ran the numbers and assumed that the audience for this new and very expensive TV series had been seriously undercut by this international trafficking in television. They couldn’t have been more wrong. When Battlestar Galactica finally aired, it garnered the biggest audiences SciFi Channel had ever seen – well over 3 million viewers.

How did this happen? Word of mouth. The people who had the chops to download Battlestar Galactica liked what they saw, and told their friends, most of whom were content to wait for SciFi Channel to broadcast the series. The boost given the series by its core constituency of fans helped it over the threshold from cult classic into a genuine cultural phenomenon. Battlestar Galactica has become one of the most widely-viewed cable TV series in history; critics regularly lavish praise on it, and yes, fans still download it, all over the world.

Although it might seem counterintuitive, the widespread “piracy” of Battlestar Galactica was instrumental to its ratings success. This isn’t the only example. BBC’s Doctor Who, leaked to BitTorrent by a (quickly fired) Canadian editor, drummed up another huge audience. It seems, in fact, that “piracy” is good. Why? We live in an age of fantastic media oversupply: there are always too many choices of things to watch, or listen to, or play with. But, if one of our friends recommends something, something they loved enough to spend the time and effort downloading, that carries a lot of weight.

All of this sharing of media means that the media titans – the corporations which produce and broadcast most of the television we watch – have lost control over their own content. Anything broadcast anywhere, even just once, becomes available everywhere, almost instantaneously. While that’s a revolutionary development, it’s merely the tip of the iceberg. The audience now has the ability to share anything they like – whether produced by a media behemoth, or made by themselves. YouTube has allowed individuals (some talented, some less so) to reach audiences numbering in the hundreds of millions. The attention of the audience, increasingly focused on what the audience makes for itself, has been draining ratings away from broadcasters, a drain which accelerates every time someone posts something funny, or poignant, or instructive to YouTube.

The mass media hasn’t collapsed, but it has been hollowed out. The audience occasionally tunes in – especially to watch something newsworthy, in real-time – but they’ve moved on. It’s all about what we’re saying directly to one another. The individual – every individual – has become a broadcaster in his or her own right. The mechanics of this person-to-person sharing, and the architecture of these “New Networks”, are driven by the oldest instincts of humankind.

The New Networks

Human beings are social animals. Long before we became human – or even recognizably close – we became social. For at least 11 million years – since before our ancestors broke off from the gorillas and chimpanzees – we have been cultivating social characteristics. In social groups, these distant forebears could share the tasks of survival: finding food, raising young, and self-defense. Human babies, in particular, take many years to mature, requiring constantly attentive parenting – time stolen away from other vital activities. Living in social groups helped ensure that these defenseless members of the group grew to adulthood. The adults who best expressed social qualities bore more and healthier children. The day-to-day pressures of survival on the African savannahs drove us to become ever more adept in our social skills.

We learned to communicate with gestures, then (no one knows just how long ago) we learned to speak. Each step forward in communication reinforced our social relationships; each moment of conversation reaffirms our commitment to one another, every spoken word an unspoken promise to support, defend and extend the group. As we communicate, whether in gestures or in words, we build models of one another’s behavior. (This is why we can judge a friend’s reaction to some bit of news, or a joke, long before it comes out of our mouths.) We have always walked around with our heads full of other people, a tidy little “social network,” the first and original human network. We can hold about 150 other people in our heads (chimpanzees can manage about 30, gorillas about 15, but we have extra brainpower they lack to help us with that), so, for 90% of human history, we lived in tribes of no more than about 150 individuals, each of us in constant contact, that constant communication building and reinforcing bonds which would make us the most successful animals on Earth. We learned from one another, and shared whatever we learned; a continuity of knowledge passed down seamlessly, generation upon generation, a chain of transmission that still survives within the world’s indigenous communities. Social networks are the gentle strings which connect us to our origins.

This is the old network. But it’s also the new network. A few years ago, researcher Mizuko Ito studied teenagers in Japan and found that these kids – all of whom owned mobile telephones – sent as many as a few hundred text messages, every single day, to the same small circle of friends. These messages could be intensely meaningful (the trials and tribulations of adolescent relationships), or just pure silliness; the content mattered much less than that constant reminder and reinforcement of the relationship. This “co-presence,” as she named it, is the modern version of an incredibly ancient human behavior, a behavior unshackled by technology to span vast distances. These teens could send a message next door, or halfway across the country. Distance mattered not: the connection was all.

In 2001, when Ito published her work, many dismissed her findings as a by-product of those “wacky Japanese” and their technophile lust for new toys. But now, teenagers everywhere in the developed world do the same thing, sending tens to hundreds of text messages a day. When they run out of money to send texts (which they do, unless they have very wealthy parents), they simply move online, using instant messaging and MySpace and other techniques to continue the never-ending conversation.

We adults do it too, though we don’t recognize it. Most of us who live some of our lives online receive a daily dose of email: we flush the spam, answer the requests and queries of our co-workers, deal with any family complaints. What’s left over, from our friends, consists more and more of nothing but a link to something – a video, a website, a joke – somewhere on the Internet. This behavior feels new, but it is actually as old as we are, dating from the time when sharing information ensured our survival. Each time we find something that piques our interest, we immediately think, “hmm, I bet so-and-so would really like this.” That’s the social network in our heads, grinding away, filtering our experience against our sense of our friends’ interests. We then hit the “forward” button, sending the tidbit along, reinforcing that relationship, reminding them that we’re still here – and still care. These “Three Fs” – find, filter and forward – have become the cornerstone of our new networks, information flowing freely from person to person, in weird and unpredictable ways, unbounded by geography or simultaneity (a friend can read an email weeks after you send it), but always according to long-established human behaviors.

One thing is different about the new networks: we are no longer bounded by the number of individuals we can hold in our heads. Although we’ll never know more than 150 people well enough for them to take up some space between our ears (unless we grow huge, Spock-like minds), our new tools allow us to reach out and connect with casual acquaintances, or even people we don’t know. Our connectivity has grown into “hyperconnectivity”, and a single individual, with the right message, at the right time, can reach millions, almost instantaneously.

This simple, sudden, subtle change in culture has changed everything.

The Nuclear Option

On the 12th of May in 2008, a severe earthquake shook a vast area of southwestern China, centered in Sichuan province. Once the shaking stopped – in some places, it lasted as long as three minutes – people got up (when they could, as many lay under collapsed buildings), dusted themselves off, and surveyed the damage. Those who still had power turned to their computers to find out what had happened, and to share what had happened to them. Some of these people used so-called “social messaging services”, which allowed them to share a short message – similar to a text message – with hundreds or thousands of acquaintances in their hyperconnected social networks.

Within a few minutes, people in every corner of the planet knew about the earthquake – well in advance of any reports from the Associated Press, the BBC, or CNN. This network of individuals, sharing information with each other through their densely hyperconnected networks, spread the news faster, more effectively, and more comprehensively than any global broadcaster.

This had happened before. On 7 July 2005, the first pictures of the wreckage caused by bombs detonated within London’s subway system found their way onto Flickr, an Internet photo-sharing service, long before they were broadcast by the BBC. A survivor, walking past one of the destroyed subway cars, took snaps on her mobile and sent them directly on to Flickr, where everyone on the planet could have a peek. One person can reach everyone else, if what they have to say (or show) merits such attention, because that message, even if seen by only one other person, will be forwarded on and on, through our hyperconnected networks, until it has been received by everyone for whom that message has salience. Just a few years ago, it might have taken hours (or even days) for a message to traverse the Human Network. Now it happens in a few seconds.

Most messages don’t have a global reach, nor do they need one. It is enough that messages reach interested parties, transmitted via the Human Network, because just that alone has rewritten the rules of culture. An intemperate CEO screams at a consultant, who shares the story through his network: suddenly, no one wants to work for the CEO’s firm. A well-connected blogger gripes about problems with his cable TV provider, a story forwarded along until – just a half-hour later – he receives a call from a vice-president of that company, contrite with apologies and promises of an immediate repair. An American college student, arrested in Egypt for snapping some photos in the wrong place at the wrong time, text messages a single word – “ARRESTED” – to his social network, and 24 hours later, finds himself free, escorted from jail by a lawyer and the American consul, because his network forwarded this news along to those who could do something about his imprisonment.

Each of us, thoroughly hyperconnected, brings the eyes and ears of all of humanity with us, wherever we go. Nothing is hidden anymore, no secret safe. We each possess a ‘nuclear option’ – the capability to go wide, instantaneously, bringing the hyperconnected attention of the Human Network to a single point. This dramatically empowers each of us, a situation we are not at all prepared for. A single text message, forwarded perhaps a million times, organized the population of Xiamen, a coastal city in southern China, against a proposed chemical plant – despite the best efforts of the Chinese government to censor the message as it passed through the state-run mobile telephone network. Another message, forwarded around a community of white supremacists in Sydney’s southern suburbs, led directly to the Cronulla Riots, two days of rampage and attacks against Sydney’s Lebanese community, in December 2005.

When we watch or read stories about the technologies of sharing, they almost always center on recording companies and film studios crying poverty, of billions of dollars lost to ‘piracy’. That’s a sideshow, a distraction. The media companies have been hurt by the Human Network, but that’s only a minor side-effect of the huge cultural transformation underway. As we plug into the Human Network, and begin to share that which is important to us with others who will deem it significant, as we learn to “find the others”, reinforcing the bonds to those others every time we forward something to them, we dissolve the monolithic ties of mass media and mass culture. Broadcasters, who spoke to millions, are replaced by the Human Network: each of us, networks in our own right, conversing with a few hundred well-chosen others. The cultural consensus, driven by the mass media, which bound 20th-century nations together in a collective vision, collapses into a Babel-like configuration of social networks which know no cultural or political boundaries.

The bomb has already dropped. The nuclear option has been exercised. The Human Network brought us together, and broke us apart. But in these fragments and shards of culture we find an immense vitality, the protean shape of the civilization rising to replace the world we have always known. It all hinges on the transition from sharing to knowing.

That Business Conversation

Case One: Lists

I moved to San Francisco in 1991, because I wanted to work in the brand-new field of virtual reality, and San Francisco was the epicenter of all commercial development in VR. The VR community came together for meetings of the Virtual Reality Special Interest Group at San Francisco’s Exploratorium, the world-famous science museum. These meetings included public demonstrations of the latest VR technology, interviews with thought-leaders in the field, and plenty of opportunity for networking. At one of the first of those meetings I met a man who impressed me by his sheer ordinariness. He was an accountant, and although he was enthusiastic about the possibilities of VR, he wasn’t working in the field – he was simply interested in it. Still, Craig Newmark was pleasant enough, and we’d always engage in a few lines of conversation at every meeting, although I can’t remember any of these conversations very distinctly.

Newmark met a lot of people – he was an excellent networker – and fairly quickly built up a nice list of email addresses for his contacts, whom he kept in touch with through a mailing list. This list, known as “Craig’s List”, became a de facto bulletin board for the core web and VR communities in San Francisco. People would share information about events in town, or observations, or – more frequently – they’d offer up something for sale, like a used car or a futon or an old telly.

As more people in San Francisco were sucked into the growing set of businesses which were making money from the Web, they too started reading Craig’s List, and started contributing to it. By the middle of 1995, there was too much content to be handled neatly in a mailing list, so Newmark – who, like nearly everyone else in the San Francisco Web community, had some basic web authoring skills – created a very simple web site which allowed people to post their own listings directly. Newmark offered this service freely – his way of saying “thank you” to the community, and, equally important, his way of reinforcing all of the social relationships he’d built up in the last few years.

Newmark’s timing was excellent; Craigslist came online just as many, many people in San Francisco were going onto the Web, and Craigslist quickly became the community bulletin board for the city. Within a few months you could find a flat for rent, a car to drive, or a date – all in separate categories, neatly organized in the rather-ugly Web layout that characterized nearly all first-generation websites. If you had a car to sell, a flat to sublet, or you wanted a date – you went to Craigslist first. Word of mouth spread the site around, but what kept it going was the high quality of the transactions people had through the site. If you sold your bicycle through Craigslist, you’d be more likely to look there first if you wanted to buy a moped. Each successful transaction guaranteed more transactions, and more success, and so on, in a “virtuous cycle” which quickly spread beyond San Francisco to New York, Los Angeles, Seattle, and other well-connected American cities.

From the very beginning, everything on Craigslist was freely available – it cost nothing to list an item or to view listings. The only thing Newmark ever charged for was job listings – one of the most active areas on Craigslist, particularly in the heyday of the Web bubble. Job listings alone paid for all of the rest of the operational costs of Craigslist – and left Newmark with a healthy profit, which he reinvested into the business, adding capacity and expanding to other cities across America. Within a few years, Newmark had a staff of nine people, all working out of a house in San Francisco’s Sunset District – which, despite its name, is nearly always foggy.

While I knew about Craigslist – it was hard not to – I didn’t use it myself until 2000, when I left my professorial housing at the University of Southern California. I was looking for a little house in the Hollywood Hills – a beautiful forested area in the middle of the city. I went onto Craigslist and soon found a handful of listings for house rentals in the Hollywood Hills, made some calls and – within about 4 hours – had found the house of my dreams, a cute little Swiss cottage that looked as though it fell out of the pages of “Heidi”. I moved in at the beginning of June 2000, and stayed there until I moved to Sydney in 2003. It was perhaps the nicest place I’d ever lived, and I found it – quickly and efficiently – on Craigslist. My landlord swore by Craigslist; he had a number of properties, scattered throughout the Hollywood Hills, and always used Craigslist to rent his properties.

In late 2003, when I first came to Australia on a consulting contract – and before I moved here permanently – I used Craigslist again, to find people interested in sub-letting my flat while I worked in Sydney. Within a few days, I had the couple who’d created Dora the Explorer – a very popular children’s television show – living in my house, while they pursued a film deal with a major studio. When I came back to Los Angeles to settle my affairs, I sold my refrigerator on Craigslist, and hired a fellow to move the landlord’s refrigerator back into my flat – on Craigslist.

In most of the United States, Craigslist is the first stop for people interested in some sort of commercial transaction. It is now the 65th busiest website in the world, the 10th busiest in the United States – putting it up there with Yahoo!, Google, YouTube, MSN and eBay – and has about nine billion page views a month. None of the pages have advertising, nor are there any charges, except for job listings (and real estate listings in New York to keep unscrupulous realtors from flooding Craigslist with duplicate postings). Although it is still privately owned, and profits are kept secret, it’s estimated that Craigslist earns as much as USD $150 million from its job listings – while, with a staff of just 24 people, it costs perhaps a few million a year to keep the whole thing up and running. Quite a success story.

But everything has a downside. Craigslist has had an extraordinary effect on the entire publishing industry in North America. Newspapers, which funded their expensive editorial operations from the “rivers of gold” – car advertisements, job listings and classified ads – have found themselves completely “hollowed out” by Craigslist. Although the migration away from print to Craigslist began slowly, it has accelerated in the last few years, to the point where most people, in most circumstances, will prefer to place a free listing on Craigslist rather than a paid listing in a newspaper. The listing will reach more people, and will cost them nothing. That is an unbeatable economic proposition – unless you’re a newspaper.

It’s estimated that upwards of one billion dollars a year in advertising revenue is being lost to the newspapers because of Craigslist. This money isn’t flowing into Craig Newmark’s pocket – or rather, only a small amount of it is. Instead, because the marginal cost of posting an ad to Craigslist is effectively zero, Newmark is simply using the disruptive quality of pervasive network access to completely undercut the newspapers, while, at the same time, providing a better experience for his customers. This is an unbeatable economic proposition, one which is making Newmark a very rich man, even while it drives the Los Angeles Times ever closer to bankruptcy.

This is not Newmark’s fault, even if it is his doing. Newmark had the virtue of being in the right place (San Francisco) at the right time (1995) with the right idea (a community bulletin board). Everything that happened after that was driven entirely by the community of Craigslist’s users. This is not to say that Newmark isn’t incredibly responsive to the needs of the Craigslist community – he is, and that responsiveness has served him well as Craigslist has grown and grown. But if Newmark hadn’t thought up this great idea, someone else would have. Nothing about Craigslist is even remotely difficult to create. A fairly ordinary web designer would be able to duplicate Craigslist’s features and functionality in less than a week’s worth of work. (But why bother? It already exists.) Newmark was servicing a need that no one even knew existed until after the service had been created. Today, it seems perfectly obvious.
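
To make that “less than a week’s worth of work” claim concrete, here is a minimal sketch of a community listings board – a few dozen lines of Python using the Flask micro-framework, with invented names and no accounts, moderation or database. It illustrates how little technology sits underneath a service like this; it is not a description of how Craigslist is actually built.

    from flask import Flask, request, redirect

    app = Flask(__name__)
    listings = []   # in-memory store; a real site would persist this in a database

    @app.route("/")
    def index():
        # Show every listing, newest first, plus a bare-bones posting form.
        items = "".join(
            f"<li>[{item['category']}] {item['title']} &mdash; {item['contact']}</li>"
            for item in reversed(listings)
        )
        return (
            "<h1>Community Listings</h1>"
            f"<ul>{items}</ul>"
            '<form method="post" action="/post">'
            '<input name="category" placeholder="for sale, housing, jobs...">'
            '<input name="title" placeholder="what are you offering?">'
            '<input name="contact" placeholder="how to reach you">'
            "<button>Post listing</button>"
            "</form>"
        )

    @app.route("/post", methods=["POST"])
    def post():
        # Accept the form submission and add it to the board.
        listings.append({
            "category": request.form.get("category", "misc"),
            "title": request.form.get("title", ""),
            "contact": request.form.get("contact", ""),
        })
        return redirect("/")

    if __name__ == "__main__":
        app.run()   # serves the board at http://localhost:5000/

Everything that made Craigslist valuable – the community, the trust, the virtuous cycle of successful transactions – lives outside code like this.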

In a pervasively networked world, communities are fully empowered to create the resources they need to manage their lives. This act of creation happens completely outside the existing systems of commerce (and copyright) that formed the bulwarks of the industrial age. If an entire business sector gets crushed out of existence as a result, it’s barely even noticed by the community. This incredible empowerment – which I term “hyperempowerment” – is going to be one of the dominant features of public life in the 21st century. We have, as individuals and as communities, been gifted with incredible new powers – really, almost mutant ‘super powers’. We use them to achieve our own ends, without recognizing that we’ve just laid a city to waste.

Craigslist has not taken off in Australia. There are Craigslist sites for the “five capital cities” of Australia, but they’re only very infrequently visited. And, because they are only infrequently visited, they haven’t been able to build up enough content or user loyalty to create the virtuous cycle which has made Craigslist such a success in the United States. Why is this? It could be that the Trading Post has already got such a hold on the mindset of Australians that it’s the first place they think to place a listing. The Trading Post’s fees are low (fifty cents for a single non-car item), and it’s widely recognized and reaches a large community. That may be one reason.

Still, organizations like Fairfax and NEWS are scared to death of Craigslist. Back in 2004, Fairfax Digital launched Cracker.com.au, which provides free listings for everything except cars and jobs, which point back into the various paid advertising Fairfax websites. Australian newspaper publishers have already consigned classified advertising to the dustbin of history; they’re just waiting for the axe to fall. When it does, the Trading Post – among the most valuable of Telstra/Sensis properties – will be almost entirely worthless. Telstra’s stockholders will scream, but the Australian public at large won’t care – they’ll be better served by a freely available resource which they’ve created and which they use to improve their business relations within Australia.

Case Two: Listings

In order to preserve business confidentiality, I won’t mention the name of my first Australian client, but they’re a well-known firm, publishers of traveler’s guides. The travel business, when I came to it in early 2006, was nearly unchanged from its form of the last fifty years: you send a writer to a far-away place, where they experience the delights and horrors of life, returning home to put it all into a manuscript which is edited, fact-checked, copy-edited, typeset, published and distributed. Book publishing is a famously human-intensive process – it takes an average of eighteen months for a book from a mainstream publisher to reach the marketplace, because each of these steps takes time, effort and a lot of dollars. Nevertheless, a travel guide might need to be updated only twice a decade, and with global distribution it has always been fairly easy to recover the investment.

When I first met with my client, they wanted to know what might figure into the future of publishing. It turns out they knew the answer better than I did: they quickly pointed me to a new website, TripAdvisor.com. Although it is a for-profit website – earning money from bookings made through it – the various reviews and travel information provided on TripAdvisor.com are “user generated content,” that is, provided by folks who use TripAdvisor.com. Thus, a listing for a particular hotel will contain many reviews from people who have actually stayed at the hotel, each of whom has their own peccadilloes, needs, and interests. Reading through a handful of the reviews for any given hotel will give you a fairly rounded idea of what the establishment is really like.

This model of content creation and distribution is the exact opposite of the time-honored model practiced by travel publishers. Instead of an authoritative reviewer, the reviewing task is “crowdsourced” – given over to the community of users to handle. The theory is that with enough reviews, some cogent body of opinion will emerge. While this seems fanciful on the face of it, it’s been proven time and again that this is an entirely successful model of knowledge production. Wikipedia, for example, has built an entire and entirely authoritative encyclopedia from user contributions – a body of knowledge far larger than, and at least as accurate as, its nearest competitor, Encyclopaedia Britannica.

It’s still common for businesses to distrust user generated content. Movie studios nicknamed it “loser generated content”, even as their audiences turn from the latest bloated blockbuster toward YouTube. Britannica pooh-poohed Wikipedia, until an article in Nature, that bastion of scientific reporting, indicated that, on average, a Wikipedia article was nearly as accurate as a given article in Britannica. (This report came out in December 2005. Today, it’s likely an article in Wikipedia would be more accurate than an article in Britannica.) In short, businesses reject the “wisdom of crowds” at their peril.

We’ve only just discovered that a well-networked body politic has access to deep reservoirs of very specific knowledge; in some peculiar way, we are all boffins. We might be science boffins, or knitting boffins, or gearheads, or simply know everything that’s ever been said about Stoner Rock. It doesn’t matter. We all have passions, and now that we have a way of sharing these passions with the world-at-large, this “collective intelligence” far outclasses any professional organization seeking to serve up little slices of knowledge. This is a general challenge confronting all businesses and institutions in the 21st century. It’s quite commonplace today for a patient to walk into a doctor’s surgery knowing more about the specifics of an illness than the doctor does; this “Wikimedicine” is disparaged by medical professionals – but the truth is that an energized and well-networked community generally does serve its members better than any particular professional elite.

So what to do about travel publishing in the era of TripAdvisor.com, and WikiTravel (another source of user-generated tourist information), and so on? How can a business possibly hope to compete with the community it hopes to profitably serve? When the question is put like this, it seems insoluble. But that simply indicates that the premise is flawed. This is not an us-versus-them situation, and here’s the key: the community, any community, respects expertise that doesn’t attempt to put on the airs of absolute authority. That travel publisher has built up an enormous reservoir of goodwill and brand recognition, and, simply by changing its attitude, could find a profitable way to work with the community. Publishers are no longer treated like Moses, striding down from Mount Sinai, commandments in hand. Publishing is a conversation, a deep engagement with the community of interest, where all parties are working as hard as they can to improve the knowledge and effectiveness of the community as a whole.

That simple transition, from shoveling books out the door to building knowledge with a community, has far-reaching consequences. The business must refashion its own editorial processes and sensibilities around the community. Some of the job of winnowing the wheat from the chaff must be handed to the community, because there’s far too much for the editors to handle on their own. Yet the editors must be able to identify the best work of the community, and give that work pride of place, in order to improve the perceived value of their role within the community.

Does this mean that the travel guide book is dead? A book is not dynamic or flexible, unlike a website. But neither does a book need batteries or an internet connection. Books have evolved through half a millennium of use to something that we find incredibly useful – even when resources are available online, we often prefer to use books. They are comfortable and very portable.

The book itself may be changing. It may not be something that is mass produced in lots of tens of thousands; rather, it may be individually printed for a community member, drawn from their own needs and interests. It represents their particular position and involvement, and is thus utterly personal. The technology for single-run publishing is now widespread; it isn’t terribly expensive to print a single copy of a book. When that book can reflect the best editorial efforts of a brand known for high-quality travel publications plus the very best of the reviews and tips offered by an ever-growing community of travelers, it becomes something greater than the sum of its parts, a document in progress, an on-going evolution toward greater utility. It is an encapsulation of a conversation at a particular moment in time, necessarily incomplete, but, for that reason, intensely valuable.

Conversation is the mode not just for business communications, but for all business in the 21st century. Businesses which cannot seize on the benefits of communication with the communities they serve will simply be swept aside (like newspapers) by communities in conversation. It is better to be in front of that wave, leading the way, than to drown in the riptide. But this is not an easy transition to make. It involves the fundamental rethinking of business practices and economic models. It’s a choice that will confront every business, everywhere, sometime in the next few years.

Case Three: Delisted

My final case study involves a recent client of mine, a very large university in New South Wales. I was invited in by the Director of Communications, to consult on a top-down redesign of the university’s web presence. After considerable effort and expenditure, the university had learned that their website was more-or-less unusable, particularly when compared against its competitors. It took users too many clicks to find the information they wanted, and that information wasn’t collated well, forcing visitors to traverse the site over and over to find the information they might want on a particular program of study. The new design would streamline the site, consolidate resources, and help prospective students quickly locate the information they would need to make their educational decisions.

That was all well and good, but a cursory investigation of web usage at the university indicated a larger and more fundamental problem: students had simply stopped using the online resources provided by the university, beyond the bare minimum needed to register for classes. The university had failed to keep up with innovations in the Web, falling dramatically out-of-step with its student population, who are all deeply engaged in emailing, social networking, blogging, photo sharing, link sharing, video sharing, and crowdsourcing. Even more significantly, the faculty of the university had set up many unauthorized web sites – using university computing resources – to provide web services that the university had not been able to offer. Both students and faculty had “left the farm” in search of the richer pastures found outside the carefully maintained walls of university computing. This collapse in utility has led to a “vicious cycle,” for the less the student or faculty member uses university resources, the less relevant they become, moving in a downward spiral which eventually sees all of the important knowledge creation processes of the university happening outside its bounds.

As the relevant information about the university (except what the university says about itself) escapes the confines of university resources, another serious consequence emerges: search engines no longer put the university at the top of search queries, simply because the most relevant information about the university is no longer hosted by the university. The organization has lost control of the conversation because it neglected to stay engaged in that conversation, tracking where and how its students and faculty were using the tools at hand to engage themselves in the processes of learning and knowledge formation. A Google search on a particular programme at the university could turn up a student’s assessment of that programme as the most relevant result, not the university’s authorized page.

This is a bigger problem than the navigability of a website, because it directly challenges the university’s authority to speak for itself. In the United States, the website RateMyProfessors.com has become the bane of all educational institutions, because students log onto the site and provide (reasonably) accurate information about the pedagogical capabilities of their instructors. An instructor who is a great researcher but a lousy teacher is quickly identified on this site, and students steer clear, having learned from their peers the pitfalls of a bad decision. On the other hand, students flock to lectures by the best lecturers, and these professors become hot items, either promoted to stay in place, or lured away by strong counter-offers. The collective intelligence of the community is running the show now, and that voice will only become stronger as better tools are developed to put it to work.

What could I offer as a solution for my client? All I could do was prescribe some bitter medicine. Yes, I told them, go forward with the website redesign – it is both necessary and useful. But I advised them to use that redesign as a starting point for a complete rethink of the services offered by the university. Students should be able to blog, share media, collaborate and create knowledge within the confines of the university, and it should be easier to do that there than anywhere else. Only when the grass is greener in the paddock will they be able to bring the students and faculty back onto the farm.

Furthermore, I advised the university to create the space for conversation within the university. Yes, some of it will be defamatory, or vile, or just unpleasant to hear. But the alternative – that this conversation happens elsewhere, outside of your ability to monitor and respond to it – would eventually prove catastrophic. Educational institutions everywhere – and all other institutions – are facing similar choices: do they ignore their constituencies or engage with them? Once engaged, how does that change the structure and power flows within their institutions? Can these institutions reorganize themselves, so that they become more permeable, pliable and responsive to the communities which they serve?

Once again, these are not easy questions to answer. They touch on the fundamental nature of institutions of all varieties. A commercial organization has to confront these same questions, though the specifics will vary from organization to organization. The larger an organization grows, the louder the cry for conversation grows, and the more pressing its need. The largest institutions in Australia are most vulnerable to this sudden change in attitudes, because here it is most likely that sudden self-organizations within the body politic will rise to challenge them.

Conclusion: Over?

As you can see, the same themes appear and reappear in each of these three case studies. In each case some industry sector or institution confronts a pervasively networked public which can out-think, out-maneuver and massively out-compete an institution which formed in an era before the rise of the network. The balance of power has shifted decisively into the hands of the networked public.

The natural reaction of institutions of all stripes is to resist these changes; institutions are inherently conservative, seeking to cling to what has worked in the past, even if the past is no longer any guide to the future. Let me be very clear on this point: resistance is futile, and worse, the longer you resist, the stronger the force you will confront. If you attempt to dam up the tide of change, you will only ensure that the ensuing deluge will be that much greater. The pressure is rising; we are already pervasively networked in Australia, with nearly every able adult owning a mobile phone, with massive and growing broadband penetration, and with an increasing awareness that communities can self-organize to serve their own needs.

Something’s got to give. And it’s not going to be the public. They can’t be whipped or cowed or forced back into antique behaviors which no longer make sense to them. Instead, it is up to you, as business leaders, to embrace the public, engaging them in a continuous conversation that will utterly transform the way you do business.

No business is ever guaranteed success, but unless you embrace conversation as the essential business practice of the 21st century, you will find someone else, more flexible and more open, stealing your business away. It might be a competitor, or it might be your customers themselves, fed up with the old ways of doing business, and developing new ways to meet their own needs. Either way, everything is about to change.