What Ever Happened to the Book?

For Ted Nelson

I: Centrifugal Force

We live in the age of networks. Wherever we are, five billion of us are continuously and ubiquitously connected. That’s everyone over the age of twelve who earns more than about two dollars a day. The network has us all plugged into it. Yet this is only the most recent, and most explicit, network. Networks are far older than this modern incarnation; they are the foundation of how we think. That’s true at the most concrete level: our nervous system is a vast neural network. It’s also true at a more abstract level: our thinking is a network of connections and associations. This is necessarily reflected in the way we write.

I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982. Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts. Like it or not, we implicitly reference other texts with every word we write. It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts. It’s the secret to our success. Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth. He never got it built. But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.

As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed qualities as a medium which had hitherto been unsuspected.  While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure.  Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone.  Do we answer?  Do we click and follow?  A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost.  The linear text is constantly weighed down with a secondary, ‘centrifugal’ force, trying to tear the reader away from the inertia of the text, and on into another space.  The more heavily linked a particular hypertext document is, the greater this pressure.

Consider two different documents that might be served up in a Web browser. One of them is an article from the New York Times Magazine. It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links. Many of these links point back to other New York Times articles. This article stands alone. It is a hyperdocument, but it has not embraced the capabilities of the medium. It has not been seduced. It is a spinster, of sorts, confident in its purity and haughty in its isolation. This article is hardly alone. Nearly all articles I could point to from any professional news source display the same characteristics of separateness and resistance to connecting with the medium they employ. We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized. Every link presents an escape route, and a potential loss of income. Hence, links are kept to a minimum, the losses staunched. Disappointingly, this has become a model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium. The tone has been set.

On the other hand, consider an average article in Wikipedia. It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links. Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they’re unfamiliar with, or explore some tangential bit of knowledge, but there also will be plenty of links that face out, into the rest of the Web. This is a hyperdocument which has embraced the nature of the medium, one that does not fear its readers being lured away under the pressure of linkage. Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention. Without this competition of intentions, Wikipedia is itself an example of another variety of purity, the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.

Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug. Landing somewhere with a paucity of links does not constrain your ability to move non-linearly. If nothing else, the browser’s ‘Back’ button continually offers that opportunity, as do all of your bookmarks, the links that lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site. In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads. In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it. This is the fundamental tension of all hypertext, but here heightened and amplified because it is resisted and forbidden. It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.

This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents.  It is most obvious in the way we now absorb news.  Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper.  Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on.  We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way.  The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole.  Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper ‘vehicle’.

The newspaper as we have known it has been shredded.  This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext.  We are the ones who feel the lure of the link; no machine can do that.  Newspapers made the brave decision to situate themselves as islands within a sea of hypertext.  Though they might believe themselves singular, they are not the only islands in the sea.  And we all have boats.  That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.

The lure of the link has a two-fold effect on our behavior.  With its centrifugal force, it is constantly pulling us away from wherever we are.  It also presents us with an opportunity cost.  When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment.  If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it.  Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this?  Does my need and interest outweigh all of the other demands upon my attention?  Can I focus?

In most circumstances, we will decline the challenge.  Whatever it is, it is not salient enough, not alluring enough.  It is not so much that we fear commitment as we feel the pressing weight of our other commitments.  We have other places to spend our limited attention.  This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”.  It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.

The emergence of the ‘tl;dr’ phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span. Attention spans are not shortening: ten-year-olds will still drop everything to read a nine-hundred-page fantasy novel for eight days. Instead, attention has entered an era of hypercompetitive development. Twenty years ago only a few media clamored for our attention. Now, everything from video games to chatroulette to real-time Twitter feeds to text messages demands our attention. Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, figuring into the calculation we make when we decide to go all in or hold back.

The most obvious effect of this hypercompetitive development of attention is the shortening of the text. Under the tyranny of ‘tl;dr’ three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment. More and more, our diet of text comes in these ‘bite-sized’ chunks. Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything. The truth is more complex. Our diet will continue to consist of a mixture of short and long-form texts. In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form. It is digestible. But it need not be vacuous. Countless specialty blogs deliver highly-concentrated texts to audiences who need no introduction to the subject material. They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit. Here, the phenomenon of ‘tl;dr’ reveals its Achilles’ heel: the shorter the text, the less invested you are. You give way more easily to centrifugal force. You are more likely to navigate away.

There is a cost incurred both for substance and the lack thereof.  Such are the dilemmas of hypertext.

II:  Schwarzschild Radius

It appears inarguable that 2010 is the Year of the Electronic Book.  The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market.  Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package.  Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world.  The electronic book is an inevitability.

At this point a question needs to be asked: what’s so electronic about an electronic book?  If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar.  Too familiar.  This is not an electronic book.  This is ‘publishing in light’.  I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning.  An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters.  It doesn’t even necessarily begin with that translation.  Instead, first consider the text qua text.  What is it?  Who is it speaking to?  What is it speaking about?

These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light. That act of murder would give us less than we had before, because texts published in light essentially disavow the medium within which they are situated. They are less useful than typeset texts, purposely stripped of their utility to be shoehorned into a new medium. This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader. Nor does it make the electronic book an intrinsically alluring object. That’s an interesting point to consider, because hypertext is intrinsically alluring. The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who had any relationship to the text. If an electronic book does not offer a new relationship to the text, then what precisely is the point? Portability? Ubiquity? These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring. This is the visible difference between a book that has been printed in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is impossibly alluring. At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.

Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium. But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe. Today’s publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium. For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.

What does the electronic book look like? Does it differ at all from the hyperdocuments we are familiar with today? In fifteen years of design experimentation, we’ve learned a lot of ways to present, abstract and play with text. All of these are immediately applicable to the electronic book. The electronic book should represent the best of what 2010 has to offer and move forward from that point into regions unexplored. The printed volume took nearly fifty years to evolve into its familiar hand-sized editions. Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book. We shouldn’t try to constrain our idea of what an electronic book can be based upon what the book has been. Over the next few years, our innovations will surprise us. We won’t really know what electronic books look like until we’ve had plenty of time to play with them.

The electronic book will not be immune from the centrifugal force which is inherent to the medium. Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument. Yet we come to books with a sense of commitment. We want to finish them. But what, exactly, do we want to finish? The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does. So does an electronic book have a beginning and an end? Or is it simply a densely clustered set of texts with a well-defined path traversing them? From the vantage point of 2010 this may seem like a faintly ridiculous question. I doubt that will be the case in 2020, when perhaps half of our new books are electronic books. The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book. There is no way that the electronic book can remain apart, indifferent and pure. It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader. More of a gradient than a boundary.

It remains unclear how any such construction can constitute an economically successful entity.  Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe.  The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen.  This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear.  Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.

We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books.  (For those not familiar with astrophysics, the Schwarzschild Radius is the boundary to a black hole.  Once you’re on the wrong side you’re doomed to fall all the way in.)  On one side – our side – things look much as they do today.  Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical.  On the other side, electronic books rapidly become almost completely unrecognizable.  It’s not just the financial model which disintegrates.  As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform.  The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words.  Each of these fragments points directly at its antecedent and descendant, or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each.  The primacy of the single narrative can not withstand the centrifugal force of the medium, any more than the newspaper or the magazine could.  Texts will present themselves as intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both.  This will completely confound our expectations of linearity in the text.
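
To make that picture concrete, here is a minimal sketch of text-as-graph, assuming nothing beyond the paragraph above: fragments joined by forward links, with a ‘reading’ being one path chosen through them. Only the descendant links are modelled, and every name here is illustrative – no existing e-book format works this way.

```python
# A sketch of the fragmented text: chunks with possibly many
# descendants, where a reading is one path through the graph.
from dataclasses import dataclass, field

@dataclass
class Fragment:
    text: str                                        # a tweet-to-a-thousand-words chunk
    descendants: list = field(default_factory=list)  # candidate next fragments

def read(fragment, choose):
    """Assemble one linear reading by picking a descendant at every fork."""
    while fragment is not None:
        yield fragment.text
        fragment = choose(fragment.descendants) if fragment.descendants else None

# Two readers, two readings of the same text, diverging at the fork:
opening = Fragment("The story opens,")
branch_a = Fragment("then follows one thread...")
branch_b = Fragment("then follows another thread entirely.")
opening.descendants = [branch_a, branch_b]

print(" ".join(read(opening, lambda ds: ds[0])))   # one reader's path
print(" ".join(read(opening, lambda ds: ds[-1])))  # another reader's path
```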

We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward, a fault more of our nervous systems than our browsers.  We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on.  This is the greatest threat to the book, that it simply expands beyond our ability to focus upon it.  Our consciousness can entertain a universe of thought, but it can not entertain the entire universe at once.  Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it.  With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason, significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.

What ever happened to the book? It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever. This is not an ending, any more than birth is an ending. But it is a transition, at least as profound and comprehensive as the invention of moveable type. It’s our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book. Transitions are chaotic, but they are also fecund. The seeds of the new grow in the humus of the old. (And if it all seems sudden and sinister, I’ll simply note that Nietzsche said that a new era nearly always looks demonic to the age it obsolesces.)

III:  Finnegans Wiki

So what of Aristotle? What does this mean for the narrative? It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts. But what about stories? From time out of mind we have listened to stories told by the campfire. The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale. For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.

Will we lose all of this?  Can narratives stand up against the centrifugal forces of hypertext?  Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form.  The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind.  There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot.  A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity. Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force.  We occasionally stay up all night reading a book that we ‘couldn’t put down’, precisely because of this momentum.  It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.

This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books.  Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet.  How can any text hope to stand against that?

And yet, some do.  Children unplugged to read each of the increasingly-lengthy Harry Potter novels, as teenagers did for the Twilight series.  Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination.  None of this is high literature, but it is literature capable of resisting all our alluring distractions.  This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc.  We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad.  That is one direction, a direction literary publishers will pursue, because that’s where the money lies.

There are two other paths open for literature, nearly diametrically opposed. The first was taken by JRR Tolkien in The Lord of the Rings. Although hugely popular, the three-book series has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating. Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it. And although readers do finish the book, in a very real sense they do not leave that universe. The fantasy genre, which Tolkien single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations. Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books. Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy. Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers. This is another direction for the book. While not every author will be a Tolkien, a few will work hard to create a universe so potent and broad that readers will be tempted to inhabit it. (Some argue that this is the secret of JK Rowling’s success.)

Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext. There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures. But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas. The book was written and published before the digital computer had been invented, yet it even features an innovation reminiscent of hypertext. That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power. It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite. The text is overloaded with meaning, so much so that the mind can’t take it all in. Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant. But there is another possibility. In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed. In this, Finnegans Wake could be seen as a type of science fiction, not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works), but rather a text that pointed the way to what all texts would become, performance by example. As texts become electronic, as they melt and dissolve and link together densely, meaning multiplies exponentially. Every sentence, and every word in every sentence, can send you flying in almost any direction. The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.

It has been said that all of human culture could be reconstituted from Finnegans Wake.  As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture.  Everything will be there, all strung together.  And that’s what happened to the book.

Dense and Thick

I: The Golden Age

In October of 1993 I bought myself a used SPARCstation. I’d just come off of a consulting gig at Apple, and, flush with cash, wanted to learn UNIX systems administration. I also had some ideas about coding networking protocols for shared virtual worlds. Soon after I got the SPARCstation installed in my lounge room – complete with its thirty-kilo monster of a monitor – I grabbed a modem, connected it to the RS-232 port, configured SLIP, and dialed out onto the Internet. Once online I used FTP, logged into SUNSITE and downloaded the newly released NCSA Mosaic, a graphical browser for the World Wide Web.

I’d first seen Mosaic running on an SGI workstation at the 1993 SIGGRAPH conference. I knew what hypertext was – I’d built a MacOS-based hypertext system back in 1986 – so I could see what Mosaic was doing, but there wasn’t much there. Not enough content to make it really interesting. The same problem that had bedeviled all hypertext systems since Douglas Engelbart’s first demo, back in 1968. Without sufficient content, hypertext systems are fundamentally uninteresting. Even HyperCard, Apple’s early experiment in hypertext, never really moved beyond the toy stage. To make hypertext interesting, it must be broadly connected – beyond a document, beyond a hard drive. Either everything is connected, or everything is useless.

In the three months between my first click on NCSA Mosaic and when I fired it up in my lounge room, a lot of people had come to the Web party.  The master list of Websites – maintained by CERN, the birthplace of the Web – kept growing.  Over the course of the last week of October 1993, I visited every single one of those Websites.  Then I was done.  I had surfed the entire World Wide Web.  I was even able to keep up, as new sites were added.

This gives you a sense of the size of the Web universe in those very early days. Before the explosive ‘inflation’ of 1994 and 1995, the Web was a tiny, tidy place filled mostly with academic websites. Yet even so, the Web had the capacity to suck you in. I’d find something that interested me – astronomy, perhaps, or philosophy – and with a click-click-click find myself deep within something that spoke to me directly. This, I believe, is the core of the Web experience, an experience now so many years behind us that we tend to overlook it. At its essence, the Web is personally seductive.

I realized the universal truth of this statement on a cold night in early 1994, when I dragged my SPARCstation and boat-anchor monitor across town to a house party. This party, a monthly event known as Anon Salon, was notorious for attracting the more intellectual and artistic crowd in San Francisco. People would come to perform, create, demonstrate, and spectate. I decided I would show these people this new-fangled thing I’d become obsessed with. So, that evening, as the front door opened and another person entered, I’d sidle up alongside them and ask, “So, what are you interested in?” They’d mention their current hobby – gardening or vaudeville or whatever it might be – and I’d use the brand-new Yahoo! category index to look up a web page on the subject. They’d be delighted, and begin to explore. At no point did I say, “This is the World Wide Web.” Nor did I use the word ‘hypertext’. I let the intrinsic seductiveness of the Web snare them, one by one.

Of course, a few years later, San Francisco became the epicenter of the Web revolution. Was I responsible for that? I’d like to think so, but I reckon San Francisco was a bit of a nexus. I wasn’t the only one exploring the Web. That night at Anon Salon I met Jonathan Steuer, who walked on up and said, “Mosaic, hmm? How about you type in ‘www.hotwired.com’?” Steuer was part of the crew at work, just a few blocks away, bringing WIRED magazine online. Everyone working on the Web shared the same fervor – an almost evangelical belief that the Web changes everything. I didn’t have to tell Steuer, and he didn’t have to tell me. We knew. And we knew if we simply shared the Web – not the technology, not its potential, but its real, seductive human face – we’d be done.

That’s pretty much how it worked out: the Web exploded from the second half of 1994, because it appeared to every single person who encountered it as the object of their desire.  It was, and is, all things to all people.  This makes it the perfect love machine – nothing can confirm your prejudices better than the Web.  It also makes the Web a very pretty hate machine.  It is the reflector and amplifier of all things human.  We were completely unprepared, and for that reason the Web has utterly overwhelmed us.  There is no going back.  If every website suddenly crashed, we would find another way to recreate the universal infinite hypertextual connection.

In the process of overwhelming us – in fact, part of the process itself – the Web has hoovered up the entire space of human culture; anything that can be digitized has been sucked into the Web. Of course, this presents all sorts of thorny problems for individuals who claim copyright over cultural products, but they are, in essence, swimming against the tide. The rest, everything that marks us as definably human, everything that is artifice, has, over the last fifteen years, been neatly and completely sucked into the space of infinite connection. The project is not complete – it will never be complete – but it is substantially underway, and more will simply be more: it will not represent a qualitative difference. We have already arrived at a new space, where human culture is now instantaneously and pervasively accessible to any of the four and a half billion network-connected individuals on the planet.

This, then, is the Golden Age, a time of rosy dawns and bright beginnings, when everything seems possible.  But this age is drawing to a close.  Two recent developments will, in retrospect, be seen as the beginning of the end.  The first of these is the transformation of the oldest medium into the newest.  The book is coextensive with history, with the largest part of what we regard as human culture.  Until five hundred and fifty years ago, books were handwritten, rare and precious.  Moveable type made books a mass medium, and lit the spark of modernity.  But the book, unlike nearly every other medium, has resisted its own digitization.  This year the defenses of the book have been breached, and ones and zeroes are rushing in.  Over the next decade perhaps half or more of all books will ephemeralize,  disappearing into the ether, never to return to physical form.  That will seal the transformation of the human cultural project.

The second development is the arrival of the Web-as-appliance: the Web is now leaving the rarefied space of computers and mobiles-as-computers, and will come to be seen as something as mundane as a book or a dinner plate. Apple’s iPad is the first device of an entirely new class which treats the Web as an appliance, as something that is pervasively just there when needed, and put down when not. The genius of Apple’s design is its extreme simplicity – too simple, I might add, for most of us. It presents the Web as a surface, nothing more. iPad is a portal into the human universe, stripped of everything that is a computer. It is emphatically not a computer. Now, we can discuss the relative merits of Apple’s design decisions – and we will, for some years to come. But the basic strength of the iPad’s simplistic design will influence what the Web is about to become.

eBooks and the iPad bookend the Golden Age; together they represent the complete translation of the human universe into a universally and ubiquitously accessible form.  But the human universe is not the whole universe.  We tend to forget this as we stare into the alluring and seductive navel of our ever-more-present culture.  But the real world remains, and loses none of its importance even as the flashing lights of culture grow brighter and more hypnotic.

II: The Silver Age

Human beings have the peculiar capability to endow material objects with inner meaning. We know this as one of the basic characteristics of humanness. From the time a child anthropomorphizes a favorite doll or wooden train, we imbue the material world with the attributes of our own consciousness. Soon enough we learn to discriminate between the animate and the inanimate, but we never surrender our continual attribution of meaning to the material world. Things are never purely what they appear to be; instead, we overlay our own meanings and associations onto every object in the world. This process actually provides the mechanism by which the world comes to make sense to us. If we could not overload the material world with meaning, we could not come to know it or manipulate it.

This layer of meaning is most often implicit; only in works of ‘art’ does the meaning crowd into the definition of the material itself.  But none of us can look at a thing and be completely innocent about its hidden meanings.  They constantly nip at the edges of our consciousness, unless, Zen-like, we practice an ‘emptiness of mind’, and attempt to encounter the material in an immediate, moment-to-moment awareness.  For those of us not in such a blessed state, the material world has a subconscious component.  Everything means something.  Everything is surrounded by a penumbra of meaning, associations that may be universal (an apple can invoke the Fall of Man, or Newton’s Laws of Gravity), or something entirely specific.  Through all of human history the interiority of the material world has remained hidden except in such moments as when we choose to allude to it.  It is always there, but rarely spoken of.  That is about to change.

One of the most significant, yet least understood implications of a planet where everyone is ubiquitously connected to the network via the mobile is that it brings the depth of the network ubiquitously to the individual.  You are – amazingly – connected to the other five billion individuals who carry mobiles, and you are also connected to everything that’s been hoovered into cyberspace over the past fifteen years.  That connection did not become entirely apparent until last year, as the first mobiles appeared with both GPS and compass capabilities.  Suddenly, it became possible to point through the camera on a mobile, and – using the location and orientation of the device – search through the network.
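
The mechanics of that gesture are simple enough to sketch: take the GPS fix, project a short distance along the compass bearing, and ask the network what sits there. In the sketch below the projection is the standard great-circle destination formula; the query it feeds is a hypothetical stand-in for whatever service actually answers.

```python
# A sketch of the GPS-plus-compass lookup behind 'point and search'.
# Only the geometry is standard; the final query is hypothetical.
import math

EARTH_RADIUS_M = 6_371_000

def project(lat, lon, bearing_deg, distance_m):
    """Approximate the point distance_m along bearing_deg from (lat, lon)."""
    d = distance_m / EARTH_RADIUS_M
    b = math.radians(bearing_deg)
    lat1, lon1 = math.radians(lat), math.radians(lon)
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(b))
    lon2 = lon1 + math.atan2(math.sin(b) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# Standing in Sydney, pointing the camera due north at something ~50 m away:
target = project(-33.8688, 151.2093, bearing_deg=0.0, distance_m=50)
print("Ask the network what sits near", target)
```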

This technique has become known as ‘Augmented Reality’, or AR, and it promises to be one of the great growth areas in technology over the next decade – but perhaps not for the reasons the leaders of the field currently envision. The strength of AR is not what it brings to the big things – the buildings and monuments – but what it brings to the smallest and most common objects in the material world. At present, AR is flashy, but not at all useful. It’s about to make a transition. It will no longer be spectacular, but we’ll wonder how we lived without it.

Let me illustrate the nature of this transition, drawn from examples in my own experience. These three ‘thought experiments’ represent different axes of the transition from a world of implicit meaning to a world where the implicit has become explicit. Once meaning is exposed, it can be manipulated: this is something unexpected, and unexpectedly powerful.

Example One:  The Book

Last year I read a wonderful book.  The Rest is Noise: Listening to the Twentieth Century, by Alex Ross, is a thorough and thoroughly enjoyable history of music in the 20th century.  By music, Ross means what we would commonly call ‘classical’ music, even though the Classical period ended some two hundred years ago.  That’s not as stuffy as it sounds: George Gershwin and Aaron Copland are both major figures in 20th century music, though their works have always been classed as ‘popular’.

Ross’ book has a companion website, therestisnoise.com, which offers up chapter-by-chapter samples of the composers whose lives and exploits he explores in the text. When I wrote The Playful World, back in 2000, and built a companion website to augment the text, it was considered quite revolutionary, but this is all pretty much standard for better books these days.

As I said earlier, the book is on the edge of ephemeralization.  It wants to be digitized, because it has always been a message, encoded.  When I dreamed up this example, I thought it would be very straightforward: you’d walk into your bookstore, point your smartphone at a book that caught your fancy, and instantly you’d find out what your friends thought of it, what their friends thought of it, what the reviewers thought of it, and so on.  You’d be able to make a well-briefed decision on whether this book is the right book for you.  Simple.  In fact, Google Labs has already shown a basic example of this kind of technology in a demo running on Android.
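
Under the hood, that bookstore moment decomposes into two humble steps: identify the book (say, by the ISBN behind its barcode), then gather verdicts rippling outward through the social graph. A minimal sketch; the friend lists, reviews, and ISBN below are invented placeholders for the real services that would answer.

```python
# A sketch of the bookstore scenario: friends first, then friends
# of friends. FRIENDS and REVIEWS stand in for real services.
FRIENDS = {"me": ["alice", "bob"], "alice": ["carol"], "bob": []}
REVIEWS = {"carol": {"9780000000000": "Loved it - read it twice."}}

def opinions(reader, isbn, depth=2, seen=None):
    """Walk outward through the social graph, gathering reviews of isbn."""
    seen = seen if seen is not None else set()
    for friend in FRIENDS.get(reader, []):
        if friend in seen or depth == 0:
            continue
        seen.add(friend)
        if isbn in REVIEWS.get(friend, {}):
            yield friend, REVIEWS[friend][isbn]
        yield from opinions(friend, isbn, depth - 1, seen)

# Point the phone at a book (ISBN invented for illustration):
for who, verdict in opinions("me", "9780000000000"):
    print(who, "says:", verdict)
```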

But that’s not what a book is anymore. Yes, it’s good to know whether you should buy this or that book, but a book represents an investment of time, and an opportunity to open a window into an experience of knowledge in depth. It’s this intention that the device has to support. As the book slowly dissolves into the sea of fragmentary but infinitely threaded nodes of hypertext which are the human database, the device becomes the focal point, the lens through which the whole book appears, and appears to assemble itself.

This means that the book will vary, person to person. My fragments will be sewn together with my threads, yours with your threads. The idea of unitary authorship – persistent over the last five hundred years – won’t be overwhelmed by the collective efforts of crowdsourcing, but rather by the corrosive effects of hyperconnection. The more connected everything becomes, the less prone we are to linearity. We already see this in the ‘tl;dr’ phenomenon, where any text over 300 words becomes too onerous to read.

Somehow, whatever the book is becoming must balance the need for clarity and linearity against the centrifugal and connective forces of hypertext.  The book is about to be subsumed within the network; the device is the place where it will reassemble into meaning.  The implicit meaning of the book – that it has a linear story to tell, from first page to last – must be made explicit if the idea and function of the book is to survive.

The book stands on the threshold, between the worlds of the physical and the immaterial. As such it is pulled in both directions at once. It wants to be liberated, but will be utterly destroyed in that liberation. The next example is something far more physical, and, consequently, far more important.

Example Two: Beef Mince

I go into the supermarket to buy myself the makings for a nice Spaghetti Bolognese.  Among the ingredients I’ll need some beef mince (ground beef for those of you in the United States) to put into the sauce.  Today I’d walk up to the meat case and throw a random package into my shopping trolley.  If I were being thoughtful, I’d probably read the label carefully, to make sure the expiration date wasn’t too close.  I might also check to see how much fat is in the mince.  Or perhaps it’s grass-fed beef.  Or organically grown.  All of this information is offered up on the label placed on the package.  And all of it is so carefully filtered that it means nearly nothing at all.

What I want to do is hold my device up to the package, and have it do the hard work. Go through the supermarket to the distributor, through the distributor to the abattoir, through the abattoir to the farmer, through the farmer to the animal itself. Was it healthy? Where was it slaughtered? Is that abattoir healthy? (This isn’t much of an issue in Australia or New Zealand, but in America things are quite a bit different.) Was it fed lots of antibiotics in a feedlot? Which ones?

And – perhaps most importantly – what about the carbon footprint of this little package of mince? How much CO2 was created? How much methane? How much water was consumed? These questions, at the very core of 21st century life, need to be answered on demand if we can be expected to adjust our lifestyles so as to minimize our footprint on the planet. Without a system like this, it is essentially impossible. With such a system it can potentially become easy. As I walk through the market, popping items into my trolley, my device can record and keep me informed of a careful balance between my carbon budget and my financial budget, helping me to optimize both – all while referencing my purchases against sales on offer in other supermarkets.

Finally, what about the caloric count of that packet of mince?  And its nutritional value?  I should be tracking those as well – or rather, my device should – so that I can maintain optimal health.  I should know whether I’m getting too much fat, or insufficient fiber, or – as I’ll discuss in a moment – too much sodium.  Something should be keeping track of this.  Something that can watch and record and use that recording to build a model.  Something that can connect the real world of objects with the intangible set of goals that I have for myself.  Something that could do that would be exceptionally desirable.  It would be as seductive as the Web.
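
Mechanically, the ‘something keeping track’ is no more than a running ledger: money, carbon, and nutrients tallied against budgets as each item goes into the trolley. A sketch with every figure invented for illustration; real numbers would flow from the provenance chain described above.

```python
# A sketch of the trolley ledger: running totals checked against
# weekly budgets as items are scanned. All figures are invented.
WEEKLY_BUDGET = {"dollars": 120.00, "kg_co2": 25.0, "mg_sodium": 14_000}

trolley = [
    # (item, dollars, kg CO2-equivalent, mg sodium) - invented numbers
    ("beef mince 500g", 7.50, 13.6, 330),
    ("tinned tomatoes", 1.20, 0.4, 250),
    ("spaghetti 1kg",   2.00, 0.8, 30),
]

totals = dict.fromkeys(WEEKLY_BUDGET, 0.0)
for name, dollars, co2, sodium in trolley:
    totals["dollars"] += dollars
    totals["kg_co2"] += co2
    totals["mg_sodium"] += sodium
    for key in WEEKLY_BUDGET:
        if totals[key] > WEEKLY_BUDGET[key]:
            print(f"Warning: {name} pushes {key} over budget")

print({k: round(v, 2) for k, v in totals.items()})
```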

The more information we have at hand, the better the decisions we can make for ourselves.  It’s an idea so simple it is completely self-evident.  We won’t need to convince anyone of this, to sell them on the truth of it.  They will simply ask, ‘When can I have it?’  But there’s more.  My final example touches on something so personal and so vital that it may become the center of the drive to make the implicit explicit.

Example Three:  Medicine

Four months ago, I contracted adult-onset chickenpox. Which was just about as much fun as that sounds. (And yes, since you’ve asked, I did have it as a child. Go figure.) Every few days I had doctors come by to make sure that I was surviving the viral infection. While the first doctor didn’t touch me at all – understandably – the second doctor took my blood pressure, and showed me the reading – 160/120, uncomfortably high. He suggested that I go on Micardis, a common medication for hypertension. I was too sick to argue, so I dutifully filled the prescription and began taking it that evening.

Whenever I begin taking a new medication – and I’m getting to an age where that happens with annoying regularity – I am always somewhat worried.  Medicines are never perfect; they work for a certain large cohort of people.  For others they do nothing at all.  For a far smaller number, they might be toxic.  So, when I popped that pill in my mouth I did wonder whether that medicine might turn out to be poison.

The doctor who came to see me was not my regular GP.  He did not know my medical history.  He did not know the history of the other medications I had been taking.  All he knew was what he saw when he walked into my flat.  That could be a recipe for disaster.  Not in this situation – I was fine, and have continued to take Micardis – but there are numerous other situations where medications can interact within the patient to cause all sorts of problems.  This is well known.  It is one of the drawbacks of modern pharmaceutical medicine.

This situation is only going to grow more intense as the population ages and pharmaceutical management of the chronic diseases of aging becomes ever-more-pervasive.  Right now we rely on doctors and pharmacists to keep their own models of our pharmaceutical consumption.  But that’s a model which is precisely backward.  While it is very important for them to know what drugs we’re on, it is even more important for us to be able to manage that knowledge for ourselves.  I need to be able to point my device at any medicine, and know, more or less immediately, whether that medicine will cure me or kill me.
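
The check itself is computationally trivial once the data exists; the hard part is assembling the interactions table and my current list of medicines. A toy sketch of that point-the-device-at-the-packet moment – the drug names and interaction entries are placeholders, emphatically not clinical data.

```python
# A sketch of the interaction check: a new medicine against
# everything already being taken. The table is a toy placeholder.
INTERACTIONS = {
    # drug pair -> warning text (illustrative entries only)
    frozenset({"drug_a", "drug_b"}): "combination may raise blood pressure",
    frozenset({"drug_a", "drug_c"}): "known toxicity when taken together",
}

def check(new_drug, current_drugs):
    """Return warnings for new_drug against each current medicine."""
    return [(drug, INTERACTIONS[frozenset({new_drug, drug})])
            for drug in current_drugs
            if frozenset({new_drug, drug}) in INTERACTIONS]

my_medicines = ["drug_b", "drug_d"]
for drug, warning in check("drug_a", my_medicines):
    print(f"Interaction with {drug}: {warning}")
```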

Over the next decade the cost of sequencing an entire human genome will fall from the roughly $5000 it costs today to less than $500 – well within the range of your typical medical test. Once that happens, it will be possible to compile epidemiological data which compares various genomes to the effectiveness of drugs. Initial research in this area has already shown that some drugs are more effective among certain ethnic groups than others. Our genome holds the clue to why drugs work, why they occasionally don’t, and why they sometimes kill.

The device is the connection point between our genome – which lives, most likely, somewhere out on a medical cloud – and the medicines we take, and the diagnoses we receive. It is our interface to ourselves, and in that becomes an object of almost unimaginable importance. In twenty years’ time, when I am ‘officially’ a senior, I will have a handheld device – an augmented-reality device – whose sole intent is to keep me as healthy as possible for as long as possible. It will encompass everything known about me medically, and will integrate with everything I capture about my own life – my activities, my diet, my relationships. It will work with me to optimize everything we know about health (which is bound to be quite a bit by 2030) so that I can live a long, rich, healthy life.

These three examples represent the promise bound up in the collision between the handheld device and the ubiquitous, knowledge-filled network.  There are already bits and pieces of much of this in place.  It is a revolution waiting to happen.  That revolution will change everything about the Web, and why we use it, how, and who profits from it.

III:  The Bronze Age

By now, some of you sitting here listening to me this afternoon are probably thinking, “That’s the Semantic Web.  He’s talking about the Semantic Web.”  And you’re right, I am talking about the Semantic Web.  But the Semantic Web as proposed and endlessly promoted by Sir Tim Berners-Lee was always about pushing, pushing, pushing to get the machines talking to one another.  What I have demonstrated in these three thought experiments is a world that is intrinsically so alluring and so seductive that it will pull us all into it.  That’s the vital difference which made the Web such a success in 1994 and 1995.  And it’s about to happen once again.

But we are starting from near zero.  Right now, I should be able to hold up my device, wave it around my flat, and have an interaction with the device about what’s in my flat.  I can not.  I can not Google for the contents of my home.  There is no place to put that information, even if I had it, nor systems to put that information to work.  It is exactly like the Web in 1993: the lights on, but nobody home.  We have the capability to conceive of the world-as-a-database.  We have the capability to create that database.  We have systems which can put that database to work.  And we have the need to overlay the real world with that rich set of data.

We have the capability, we have the systems, we have the need. But we have precious little connecting these three. The businesses that would connect them do not yet exist. We have not brought the real world into our conception of the Web. That will have to change. As it changes, the door opens to a crescendo of innovations that will make the Web revolution look puny in comparison. There is an opportunity here to create industries bigger than Google, bigger than Microsoft, bigger than Apple. As individuals and organizations figure out how to inject data into the real world, entirely new industry segments will be born.

I can not tell you exactly what will fire off this next revolution.  I doubt it will be the integration of Wikipedia with a mobile camera.  It will be something much more immediate.  Much more concrete.  Much more useful.  Perhaps something concerned with health.  Or with managing your carbon footprint.  Those two seem the most obvious to me.  But the real revolution will probably come from a direction no one expects.  It’s nearly always that way.

There’s no reason to think that Wellington couldn’t be the epicenter of that revolution. There was nothing special about San Francisco back in 1993 and 1994. But, once things got started, they created a ‘virtuous cycle’ of feedbacks that brought the best-and-brightest to San Francisco to build out the Web. Wellington is doing that to the film industry; why shouldn’t it stretch out a bit, and invent this next-generation ‘web of things’?

This is where the future is entirely in your hands.  You can leave here today promising yourself to invent the future, to write meaning explicitly onto the real world, to transform our relationship to the universe of objects.  Or, you can wait for someone else to come along and do it.  Because someone inevitably will.  Every day, the pressure grows.  The real world is clamoring to crawl into cyberspace.  You can open the door.

Synopsis: Sharing :: Hyperconnectivity

The Day TV Died

On the 18th of October in 2004, a UK cable channel, SkyOne, broadcast the premiere episode of Battlestar Galactica, writer-producer Ron Moore’s inspired revisioning of the decidedly campy 70s television series. SkyOne broadcast the episode as soon as it came off the production line, but its US production partner, the SciFi Channel, decided to hold off until January – a slow month for television – before airing the episodes. The audience for Battlestar Galactica, young and technically adept, made digital recordings of the broadcasts as they went to air, cut out the commercial breaks, then posted them to the Internet.

For an hour-long television programme, a lot of data needs to be dragged across the Internet, enough to clog up even the fastest connection. But these young science fiction fans used a new tool, BitTorrent, to speed the bits on their way. BitTorrent allows a large number of computers (in this case, over 10,000 computers were involved) to share the heavy lifting. Each of the computers downloaded pieces of Battlestar Galactica, and as each got a piece, they offered it up to any other computer which wanted a copy of that piece. Like a forest of hands each trading puzzle pieces, each computer quickly assembled a complete copy of the show.
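
The swarm’s arithmetic can be sketched in a few lines: each peer repeatedly grabs a missing piece from anyone who already holds it, so pieces propagate sideways through the swarm rather than radiating from a single source. This toy model captures the spirit of the piece exchange only, not BitTorrent’s actual wire protocol.

```python
# A toy swarm: peers fill in missing pieces from one another (or
# the original seeder) until everyone holds a complete copy.
import random

PIECES = set(range(100))             # the episode, cut into 100 pieces
seeder = set(PIECES)                 # one complete copy to start
peers = [set() for _ in range(10)]   # ten empty downloaders

rounds = 0
while any(p != PIECES for p in peers):
    rounds += 1
    for p in peers:
        # grab one missing piece from the seeder or any fellow peer
        sources = [seeder] + [q for q in peers if q is not p]
        random.shuffle(sources)
        for src in sources:
            available = src - p
            if available:
                p.add(random.choice(sorted(available)))
                break

print(f"All 10 peers complete after {rounds} rounds of swapping")
```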

All of this happened within a few hours of Battlestar Galactica going to air. That same evening, on the other side of the Atlantic, American fans watched the very same episode that their fellow fans in the UK had just viewed. They liked what they saw, and told their friends, who also downloaded the episode, using BitTorrent. Within just a few days, perhaps a hundred thousand Americans had watched the show.

US cable networks regularly count their audience in hundreds of thousands. A million would be considered incredibly good. Executives for SciFi Channel ran the numbers and assumed that the audience for this new and very expensive TV series had been seriously undercut by this international trafficking in television. They couldn’t have been more wrong. When Battlestar Galactica finally aired, it garnered the biggest audiences SciFi Channel had ever seen – well over 3 million viewers.

How did this happen? Word of mouth. The people who had the chops to download Battlestar Galactica liked what they saw, and told their friends, most of whom were content to wait for SciFi Channel to broadcast the series. The boost given the series by its core constituency of fans helped it over the threshold from cult classic into a genuine cultural phenomenon. Battlestar Galactica has become one of the most widely-viewed cable TV series in history; critics regularly lavish praise on it, and yes, fans still download it, all over the world.

Although it might seem counterintuitive, the widespread “piracy” of Battlestar Galactica was instrumental to its ratings success. This isn’t the only example. BBC’s Dr. Who, leaked to BitTorrent by a (quickly fired) Canadian editor, drummed up another huge audience. It seems, in fact, that “piracy” is good. Why? We live in an age of fantastic media oversupply: there are always too many choices of things to watch, or listen to, or play with. But, if one of our friends recommends something, something they loved enough to spend the time and effort downloading, that carries a lot of weight.

All of this sharing of media means that the media titans – the corporations which produce and broadcast most of the television we watch – have lost control over their own content. Anything broadcast anywhere, even just once, becomes available everywhere, almost instantaneously. While that’s a revolutionary development, it’s merely the tip of the iceberg. The audience now has the ability to share anything they like – whether produced by a media behemoth, or made by themselves. YouTube has allowed individuals (some talented, some less so) to reach audiences numbering in the hundreds of millions. The attention of the audience, increasingly focused on what the audience makes for itself, has been draining ratings away from broadcasters, a drain which accelerates every time someone posts something funny, or poignant, or instructive to YouTube.

The mass media hasn’t collapsed, but it has been hollowed out. The audience occasionally tunes in – especially to watch something newsworthy, in real time – but they’ve moved on. It’s all about what we’re saying directly to one another. The individual – every individual – has become a broadcaster in his or her own right. The mechanics of this person-to-person sharing, and the architecture of these “New Networks”, are driven by the oldest instincts of humankind.

The New Networks

Human beings are social animals. Long before we became human – long before we were even recognizably close to human – we became social. For at least 11 million years, since before our ancestors broke off from the gorillas and chimpanzees, we have cultivated social characteristics. In social groups, these distant forebears could share the tasks of survival: finding food, raising young, and defending the group. Human babies, in particular, take many years to mature, requiring constantly attentive parenting – time stolen away from other vital activities. Living in social groups helped ensure that these defenseless members of the group grew to adulthood. The adults who best expressed social qualities bore more and healthier children. The day-to-day pressures of survival on the African savannahs drove us to become ever more adept in our social skills.

We learned to communicate with gestures, then (no one knows just how long ago) we learned to speak. Each step forward in communication reinforced our social relationships; each moment of conversation reaffirms our commitment to one another, every spoken word an unspoken promise to support, defend and extend the group. As we communicate, whether in gestures or in words, we build models of one another’s behavior. (This is why we can judge a friend’s reaction to some bit of news, or a joke, long before it comes out of our mouths.) We have always walked around with our heads full of other people, a tidy little “social network,” the first and original human network. We can hold about 150 other people in our heads (chimpanzees can manage about 30, gorillas about 15; we have extra brainpower that they lack), so, for 90% of human history, we lived in tribes of no more than about 150 individuals, each of us in constant contact, that constant communication building and reinforcing the bonds which would make us the most successful animals on Earth. We learned from one another, and shared whatever we learned; a continuity of knowledge passed down seamlessly, generation upon generation, a chain of transmission that still survives within the world’s indigenous communities. Social networks are the gentle strings which connect us to our origins.

This is the old network. But it’s also the new network. A few years ago, researcher Mizuko Ito studied teenagers in Japan and found that these kids – all of whom owned mobile telephones – sent as many as a few hundred text messages every single day, to the same small circle of friends. These messages could be intensely meaningful (the trials and tribulations of adolescent relationships), or just pure silliness; the content mattered much less than the constant reminder and reinforcement of the relationship. This “co-presence,” as she named it, is the modern version of an incredibly ancient human behavior, a behavior unshackled by technology to span vast distances. These teens could send a message next door, or halfway across the country. Distance mattered not: the connection was all.

In 2001, when Ito published her work, many dismissed her findings as a by-product of those “wacky Japanese” and their technophile lust for new toys. But now, teenagers everywhere in the developed world do the same thing, sending tens to hundreds of text messages a day. When they run out of money to send texts (which they do, unless they have very wealthy parents), they simply move online, using instant messaging and MySpace and other techniques to continue the never-ending conversation.

We adults do it too, though we don’t recognize it. Most of us who live some of our lives online receive a daily dose of email: we flush the spam, answer the requests and queries of our co-workers, deal with any family complaints. What’s left over – the messages from our friends – more and more often consists of nothing but a link to something on the Internet: a video, a website, a joke. This behavior is actually as old as we are, dating from the time when sharing information ensured our survival. Each time we find something that piques our interest, we immediately think, “hmm, I bet so-and-so would really like this.” That’s the social network in our heads, grinding away, filtering our experience against our sense of our friends’ interests. We then hit the “forward” button, sending the tidbit along, reinforcing the relationship, reminding them that we’re still here – and still care. These “Three Fs” – find, filter and forward – have become the cornerstone of our new networks, information flowing freely from person to person, in weird and unpredictable ways, unbounded by geography or simultaneity (a friend can read an email weeks after you send it), but always according to long-established human behaviors.
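
The three Fs are simple enough to express in a few lines of Python – a sketch with invented friends and interests, not a description of any real system:

    friends = {                      # the social network in our heads
        "alice": {"music", "politics"},
        "bob":   {"science", "jokes"},
        "carol": {"jokes", "music"},
    }

    def find_filter_forward(item, topics):
        """Weigh a find against our model of each friend's interests,
        then forward it to everyone it is likely to delight."""
        for friend, interests in friends.items():
            if topics & interests:                         # filter
                print(f"forwarding {item!r} to {friend}")  # forward

    # find: we stumble across something online...
    find_filter_forward("video of a singing parrot", {"jokes", "music"})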

One thing is different about the new networks: we are no longer bounded by the number of individuals we can hold in our heads. Although we’ll never know more than 150 people well enough for them to take up some space between our ears (unless we grow huge, Spock-like minds), our new tools allow us to reach out and connect with casual acquaintances, or even people we don’t know. Our connectivity has grown into “hyperconnectivity”, and a single individual, with the right message, at the right time, can reach millions, almost instantaneously.

This simple, sudden, subtle change in culture has changed everything.

The Nuclear Option

On the 12th of May in 2008, a severe earthquake shook a vast area of southwestern China, centered on the province of Sichuan. Once the shaking stopped – in some places, it lasted as long as three minutes – people got up (when they could; many lay trapped under collapsed buildings), dusted themselves off, and surveyed the damage. Those who still had power turned to their computers to find out what had happened, and to share what had happened to them. Some of them used so-called “social messaging services”, which allowed them to share a short message – similar to a text message – with the hundreds or thousands of acquaintances in their hyperconnected social networks.

Within a few minutes, people in every corner of the planet knew about the earthquake – well in advance of any reports from the Associated Press, the BBC, or CNN. This network of individuals, sharing information with each other through their densely hyperconnected networks, spread the news faster, more effectively, and more comprehensively than any global broadcaster.

This had happened before. On 7 July 2005, the first pictures of the wreckage caused by bombs detonated within London’s subway system found their way onto Flickr, an Internet photo-sharing service, long before being broadcast by the BBC. A survivor, walking past one of the destroyed subway cars, took snaps with her mobile and sent them directly on to Flickr, where everyone on the planet could have a peek. One person can reach everyone else, if what they have to say (or show) merits such attention, because that message, even if seen by only one other person, will be forwarded on and on, through our hyperconnected networks, until it has been received by everyone for whom that message has salience. Just a few years ago, it might have taken hours (or even days) for a message to traverse the Human Network. Now it happens in a few seconds.
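
The dynamics of that forwarding are worth a moment’s attention. In outline, a salient message spreads hop by hop, each recipient who cares becoming a new sender; the tiny network below is invented purely for illustration:

    from collections import deque

    network = {                       # who knows whom
        "witness": ["ana", "raj"],
        "ana": ["raj", "lee", "kim"],
        "raj": ["kim", "joe"],
        "lee": ["joe"],
        "kim": [],
        "joe": [],
    }
    cares = {"ana", "raj", "kim", "joe"}   # those for whom it has salience

    def spread(source):
        """Pass the message along, hop by hop, to every interested contact."""
        reached, queue = {source}, deque([source])
        while queue:
            sender = queue.popleft()
            for contact in network[sender]:
                if contact in cares and contact not in reached:
                    reached.add(contact)      # receives the message...
                    queue.append(contact)     # ...and forwards it in turn
        return reached - {source}

    print(spread("witness"))   # every interested person, in a few hops

Each hop multiplies the audience, which is why a message that once took days to traverse the Human Network can now saturate it in seconds.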

Most messages don’t have a global reach, nor do they need one. It is enough that messages reach interested parties, transmitted via the Human Network, because just that alone has rewritten the rules of culture. An intemperate CEO screams at a consultant, who shares the story through his network: suddenly, no one wants to work for the CEO’s firm. A well-connected blogger gripes about problems with his cable TV provider, a story forwarded along until – just a half-hour later – he receives a call from a vice-president of that company, contrite with apologies and promises of an immediate repair. An American college student, arrested in Egypt for snapping some photos in the wrong place at the wrong time, text messages a single word – “ARRESTED” – to his social network, and 24 hours later, finds himself free, escorted from jail by a lawyer and the American consul, because his network forwarded this news along to those who could do something about his imprisonment.

Each of us, thoroughly hyperconnected, brings the eyes and ears of all of humanity with us, wherever we go. Nothing is hidden anymore, no secret safe. We each possess a ‘nuclear option’ – the capability to go wide, instantaneously, bringing the hyperconnected attention of the Human Network to a single point. This dramatically empowers each of us, a situation we are not at all prepared for. A single text message, forwarded perhaps a million times, organized the population of Xiamen, a coastal city in southeastern China, against a proposed chemical plant – despite the best efforts of the Chinese government to censor the message as it passed through the state-run mobile telephone network. Another message, forwarded around a community of white supremacists in Sydney’s southern suburbs, led directly to the Cronulla riots of December 2005: two days of rampage and attacks against Sydney’s Lebanese community.

When we watch or read stories about the technologies of sharing, they almost always center on recording companies and film studios crying poverty over billions of dollars lost to ‘piracy’. That’s a sideshow, a distraction. The media companies have been hurt by the Human Network, but that’s only a minor side-effect of the huge cultural transformation underway. As we plug into the Human Network, and begin to share that which is important to us with others who will deem it significant, as we learn to “find the others”, reinforcing the bonds to those others every time we forward something to them, we dissolve the monolithic ties of mass media and mass culture. Broadcasters, who spoke to millions, are replaced by the Human Network: each of us, networks in our own right, conversing with a few hundred well-chosen others. The cultural consensus, driven by the mass media, which bound 20th-century nations together in a collective vision, collapses into a Babel-like configuration of social networks which know no cultural or political boundaries.

The bomb has already dropped. The nuclear option has been exercised. The Human Network brought us together, and broke us apart. But in these fragments and shards of culture we find an immense vitality, the protean shape of the civilization rising to replace the world we have always known. It all hinges on the transition from sharing to knowing.

Synopsis: Introduction (The Fisher King)

For at least the last thousand years, fishermen trawling off the southern Indian state of Kerala have faced a perpetual question: which market will bring them the best price for their fish? The fishermen have a broad selection of ports where they can unload and sell their catch, but if too many boats pull into a port, the market, oversupplied with fish, won’t pay the fishermen enough even to cover their costs. This market failure has kept the fishermen of Kerala perpetually poor, eking out a subsistence-level wage, despite the rich harvest from the seas.

In 1997, as India began its sweeping ascent into industrialization, the newly-deregulated telecommunications industry blanketed the country with mobile transceiver towers. Some of these towers, strung along the Kerala shoreline, could project their signals up to 25 km out to sea, well within the range of the fishermen on their sturdy dhows.

Mobile telephony isn’t expensive in India in absolute terms, but relative to local incomes it is extremely dear: a typical cheap handset – such as the Nokia 1100, the most popular consumer electronics device in history – costs the local equivalent of several thousand dollars. One wealthy fisherman did purchase a mobile telephone, and brought it with him to sea. At some point, he communicated with the mainland: perhaps a family call. During the call, he learned of a market desperately in need of fish. He set his sails for that port, and made a tidy profit. The next day, he made a few calls into shore, and again learned where he might sell his catch for the highest price. A sighted man in the kingdom of the blind, this fisherman very quickly earned far more money than any of his competitors.

More than any other species, human beings copy the behaviors of our peers; a recent scientific study showed that young chimpanzees scored better than human toddlers on some cognitive tasks, but that the toddlers proved far more adept at ‘aping’ the behavior of others. We are wired to observe, learn from, and copy the behavior of others. The Kerala fishermen noted the success of this ‘king fisher’ and, despite the staggering cost – equivalent to a month’s income – purchased their own mobiles. Within a few months, all of Kerala’s fishermen were using mobiles to coordinate their sales into the Kerala fish markets. Each market had just the right amount of fish, selling at just the right price, to guarantee each fisherman a tidy profit. A thousand-year-old problem had been solved – and the fishermen now earn so much more money that those very expensive mobile telephones recoup their costs in just two months.
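
The arithmetic implied here is worth making explicit. The text supplies only ratios – a phone costs about a month’s old income, and pays for itself within two months – so the figures below are illustrative, but the conclusion follows: connectivity raised incomes by at least half (the proposal elsewhere claims a doubling, which would repay the phone even faster), and everything beyond the payback period is pure gain.

    # Illustrative figures only; the text gives just the ratios.
    old_monthly_income = 100               # in any currency unit
    phone_cost = old_monthly_income        # "equivalent to a month's income"
    payback_months = 2                     # "recoup their costs in just two months"

    # To repay the phone in two months, the extra profit from always
    # selling at the best-priced port must be at least:
    extra_per_month = phone_cost / payback_months

    print(f"income rises by at least {extra_per_month / old_monthly_income:.0%}")
    print(f"after month {payback_months}, the extra is pure profit")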

A decade ago, half the world had never made a telephone call. Today, over half the world owns a mobile telephone. Study after study indicates that the vast swath of the world’s medium poor (those who earn anything from a few dollars to a few tens of dollars a day) dramatically improve their earning potential with a mobile telephone. Microfinance organizations such as Grameen Bank, founded by Nobel Peace Prize-winning economist Muhammad Yunus, have established their own telecommunications companies, geared to serve the needs of the poor, knowing that connectivity is one of the keys to solving the perpetual problem of poverty. Meanwhile, across Africa and Asia, billions who had been left behind in the drive toward globalization purchase a mobile knowing it to be their passport to economic advancement.

Why is connectivity so important to success? You may as well ask how someone who could neither hear nor speak would fare at an auction. We need to be able to communicate in order to participate in The Human Network; as we improve our ability to communicate, we reap the benefits of a deeper participation. All of this is old, old knowledge, buried deep within our cultures, our bodies and our brains, and it has suddenly been accelerated and amplified, wiring us into The Human Network, connecting us directly to the rest of humanity.

We can alert the entire planet with a text message, create a market with just a word, canvass the best minds on Earth for answers to our questions. All of this, unexpected by economists, sociologists or technologists, is now available to the majority of humanity, and – within just a few years – will have encompassed all but the billion most desperately poor individuals. As we pile onto The Human Network, exploring our newfound ability to communicate across every barrier nature and culture have placed in our path, we consistently increase our effectiveness, watching and copying our peers – just as the Kerala fishermen did.

We can chart our path into this startling future by taking a good look at the present. Many of the forces shaping The Human Network, and many of the benefits it delivers, have already appeared – some in embryonic form, some now fully grown. We can communicate, and share with one another; we can pool our shared knowledge to increase our intelligence, improving our ability to make good decisions; once smarter, we can band together – across nations, across cultures, across the world – to achieve our economic, social and political goals. All of this is already happening, and all of it will change everything in the human world.

A (Modest) Proposal

In March of 2008, someone – probably in India – bought a mobile telephone. By itself, that wouldn’t be particularly noteworthy, yet it represented a watershed: the halfway mark of humanity’s accelerating interconnection. Over 3.5 billion mobile subscribers, or one person in two, are wired into the global network. Most of these people live in the “developing” countries, where incomes average just a few dollars a day. Desperately poor by the standards of the “developed” world, why would these people waste their meager resources on something that, to most of us, seems little more than a useful toy?

In the developed world, mobile phones are completely ubiquitous: only toddlers, the very oldest seniors, and technophobes have resisted their allure. Parents give their children mobiles with global satellite tracking features, so they can search the web to find out where their kids are – and snoop into where they’ve been. Adults use mobile telephones to smooth the frictions of social life: in the age of the mobile, one can phone ahead. No one is late anymore, just delayed. Your productive business life can follow you anywhere – into bed, on vacation, even into the middle of an argument. We enjoy – and suffer through – a life of seamless connectivity.

This is new, and it is very important.

For the nearly two hundred thousand years of human presence on Earth, our lives have been bounded by how far we could throw our voices. Yodelers once scaled Alpine mountaintops to sing to the valleys below; today, a communications satellite, perched some 22,000 miles above the equator, can reach half the planet. During the 20th century, radio transmitters (which, like yodelers, started off on mountaintops, but later migrated into orbit) transmitted one message to many receivers. We could hear and then see things that happened far away from our own ears and eyes, and know more about what happened in Washington D.C., on any given day, than what took place in the next town over. As we entered the 21st century, that comfortable (if paradoxical) relationship to the world beyond the reach of our own voices, which most of us had known for most of our lives, suddenly disintegrated. People began to talk with one another.

Nothing at all surprising about that: people have always talked with one another. Communication is arguably the defining feature of Homo sapiens sapiens. We are the species that speaks. It is so much of what we are that vast sections of our brains are given over to the understanding of language. Children spend most of their first few years of life, their developing brains working overtime, intently studying every word that comes out of their parents’ mouths, learning to find meaning amidst all those strange sounds.

As a child practices her first few words, she receives encouragement and praise from her parents – who often can’t understand a word she’s saying, but nonetheless applaud every attempt. As she rises into mastery, first with a few simple words, then short phrases, then full-blown sentences, rich with meaning, she joins the “human network,” the age-old web of relationships which define humanity.

Communication shapes us in nearly every conceivable way. If we cannot communicate, we are cut off from the common life of our species, and cannot hope to survive. But once we can communicate – with parents and peers – we begin to develop an ever-deepening web of connections with the people around us. This web, formally known as a “social network”, is so important to us that more of our brain is given over to tending and managing our social networks than to understanding language. Nearly all of our prefrontal cortex – the part of the brain which sits directly behind our foreheads – seems to be principally occupied with keeping us well-connected to our fellows.

Until about 10,000 years ago, we lived in tribes, groupings of several interrelated families who hunted and gathered their way across the landscapes of Africa, Asia, Europe and Australia. Tribes grew and shrank, through births and deaths, but never grew very large. A large tribe would divide into two smaller ones, along familial lines, and each would go its own way. The natural limit for tribes seems to be around 150 people – beyond that, the tribe always splinters. Why? Because that’s all the space we have in our brains. We can carry around a “mental picture” of about 150 people, but after that, we simply run out of room. We can’t manage a social network any larger than that.

Fast-forward a hundred centuries: more than half of us now live in cities, not tribes. In our day-to-day lives we don’t feel immediately connected to a hundred and fifty other people. We have close relationships with our families, a handful of friends, and a few colleagues. We are more individual and more isolated than at any time in our common history as a species, yet the largest part of our brain tirelessly works toward building strong connections with others. Over the 20th century, we filled this vacuum with false relations: fans and stalkers, who so idolize their objects of affection (musicians, actors, politicians, etc.) that they build false idols into their social networks. Ultimately unsatisfying, but better than a widening gyre of emptiness inside our heads.

For at least two million years, our ancestors have used tools to increase their strength and extend their capabilities. An obsidian knife is a far better cutting tool than our teeth, and a bone needle better suited to its task than the most nimble fingertips. We domesticated the aurochs (the wild ancestor of the ox) ten thousand years ago, using its strength to till our fields and carry our loads – and human capabilities took another huge leap forward.

Two hundred years ago, the steam engine multiplied human strength beyond all precedent, and produced the Industrial Revolution. As railroads stitched their way across the planet, a man could travel faster than a galloping horse; with a steam shovel, he could lift a load that all of Pharaoh’s slaves would have been crushed beneath; and with the telegraph, he could hear or be heard from one end of the Earth to the other, in a matter of moments. Technologies are amplifiers: they take some innate human capability and reinforce it, far beyond human limits, until it seems almost an entirely new thing. However alien they might seem to us, technologies are simply the funhouse-mirror reflection of ourselves.

Just now – within the last ten years, or thereabouts – we have invented tools which amplify our innate desire to strengthen our human networks. Our wholly human and ancient capacity for communication and connection, so long the poor stepchild of all our technological prowess, is finally coming into its own.

This changes everything, in utterly unexpected ways.

Fishermen in India use text messages to solve a thousand-year-old problem with their fish markets, doubling their income; a teenager posts a party invitation to Facebook, and five hundred ‘friends’ show up to make trouble; repressive governments try to clamp down on dissent, only to find their latest outrage available for viewing on YouTube; a band of bloggers, undeterred by every dirty trick thrown at them by a slick bureaucracy, bring down the Attorney General of the United States. None of these singular events was in any way coordinated; no one at an imagined center was telling people to “do this” or “do that”. These things just happened, because our own capabilities as social beings in the human network are already so advanced, and so powerful, that when amplified – even the tiniest bit – we become potent almost beyond imagining.

The world’s vast swath of medium poor have put mobile telephones to work, dramatically increasing their ability to earn a living, using text messages to multiply the effectiveness of the human networks that we have all used, since time out of mind, to make our way in the world. That’s why a mobile phone is the new “must have” device for everyone on Earth: it’s a tool that helps the poor far more than it helps the rich, because, for the first time, they’re wired into the global human network. They already know how to use these networks – we all do – but the mobile telephone extends their reach, and amplifies their capabilities. This new “globalization” isn’t about spreading franchises of McDonald’s and Starbucks – it’s about a farmer in Kenya being able to call ahead to find out which market offers the best price for his maize crop.

Repeat that individual example a few billion times, and the startling power of the human network begins to reveal itself. We are finding new ways to communicate, connect and improve our lives, each of us carefully watching one another, each of us copying the best of what we see in the behavior of our peers, and applying it to our own lives. As our reach is extended, so is our ability to learn from one another. This global pooling of expertise – “hyperintelligence” – leads directly to the phenomenal success of Wikipedia, an online encyclopedia created from millions of individual contributions, each contributor giving the best of what they know and, in return, enjoying the fruits of a planet full of smart people. For just a small contribution, the rewards are so disproportionate (like putting a single chip down on a roulette wheel, and getting the whole casino in return) that Wikipedia defines the first new model for human knowledge creation in at least a thousand years. Wikipedia helps us all to become smarter and more effective, because, by sharing the wealth of knowledge in each of our heads, we help one another make better decisions.

The more we learn to share through the human network, the more powerful we become, both as individuals and in groups. This has a shadow side: a text message, forwarded throughout a community of white supremacists, led to a race riot on a Sydney beach in December 2005; meanwhile, the loosely-affiliated groups who all call themselves ‘al Qaeda’ pool knowledge and resources in order to make their destabilizing acts of terror increasingly effective. Power is a two-edged sword, and most technologies can be used for good or ill.

At the same time, this new phenomenon of “hyperempowerment” – people using their newly-amplified capabilities in the human network – means that we’re not so easy to push around any more. Consumers can organize against nasty corporate behavior in moments; corporate executives nervously scan endless lists of comments on web sites, anxiously looking for signs of approaching trouble; governments regularly find their constituents running rings around them. The human network puts all of the power relationships that have dominated recent history into play; naturally, those with power are pushing back, but – as in the case of the record companies, who have tried to sue their customers into behaving legally – institutional power finds itself ever more effectively thwarted by diffuse and distributed efforts to oppose it.

The next decades of the 21st century will be dominated by the rise of the human network, as “hyper people power” rises up in unexpected, unpredicted, and sometimes unwelcome ways. The collision of our oldest skills with our newest tools points toward a radical transformation in human behavior and human culture. The energy released in this collision will empower all of us, threaten many of us, and force some of us to rethink our lives. In some ways, we are finally returning to our tribal roots; in other ways, we are, at long last, becoming a global family.

After two hundred years, during which man used machines to amplify his strength, and so shaped the world, we have finally turned that power inward, to reshape ourselves. The Human Network: Sharing, Knowledge and Power in the 21st Century tells the story of this epochal shift in civilization, in behavior, in humanity itself. In its 250 pages, it will paint a compelling and accessible picture of the tremendous changes underway, everywhere, in every nation, to every person, as we all become fully-fledged actors in the human network.