What Ever Happened to the Book?

For Ted Nelson

I: Centrifugal Force

We live in the age of networks.  Wherever we are, five billion of us are continuously and ubiquitously connected.  That’s everyone over the age of twelve who earns more than about two dollars a day.  The network has us all plugged into it.  Yet this is only the more recent, and more explicit network.  Networks are far older than this most modern incarnation; they are the foundation of how we think.  That’s true at the most concrete level: our nervous system is a vast neural network.  It’s also true at a more abstract level: our thinking is a network of connections and associations.  This is necessarily reflected in the way we write.

I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982.  Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts.  Like it or not, we implicitly reference other texts with every word we write.  It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts.  It’s the secret to our success.  Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth.  He never got it built.  But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.

As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed qualities as a medium which had hitherto been unsuspected.  While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure.  Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone.  Do we answer?  Do we click and follow?  A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost.  The linear text is constantly weighed down with a secondary, ‘centrifugal’ force, trying to tear the reader away from the inertia of the text, and on into another space.  The more heavily linked a particular hypertext document is, the greater this pressure.

Consider two different documents that might be served up in a Web browser.  One of them is an article from the New York Times Magazine.  It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links.  Many of these links point back to other New York Times articles.  This article stands alone.  It is a hyperdocument, but it has not embraced the capabilities of the medium.  It has not been seduced.  It is a spinster, of sorts, confident in its purity and haughty in its isolation.  This article is hardly alone.  Nearly all articles I could point to from any professional news source display the same characteristics of separateness, the same resistance to connecting with the medium they employ.  We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized.  Every link presents an escape route, and a potential loss of income.  Hence, links are kept to a minimum, the losses staunched.  Disappointingly, this has become a model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium.  The tone has been set.

On the other hand, consider an average article in Wikipedia.  It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links.  Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they’re unfamiliar with, or explore some tangential bit of knowledge, but there will also be plenty of links that face out, into the rest of the Web.  This is a hyperdocument which has embraced the nature of the medium, which is not afraid of luring readers away under the pressure of linkage.  Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention.  Without this competition of intentions, Wikipedia is itself an example of another variety of purity, the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.

Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug.  Just because you have landed somewhere that has a paucity of links doesn’t constrain your ability to move non-linearly.  If nothing else, the browser’s ‘Back’ button continually offers that opportunity, as do all of your bookmarks, the links that lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site.  In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads.  In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it.  This is the fundamental tension of all hypertext, but here heightened and amplified because it is resisted and forbidden.  It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.

This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents.  It is most obvious in the way we now absorb news.  Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper.  Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on.  We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way.  The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole.  Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper ‘vehicle’.

The newspaper as we have known it has been shredded.  This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext.  We are the ones who feel the lure of the link; no machine can do that.  Newspapers made the brave decision to situate themselves as islands within a sea of hypertext.  Though they might believe themselves singular, they are not the only islands in the sea.  And we all have boats.  That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.

The lure of the link has a two-fold effect on our behavior.  With its centrifugal force, it is constantly pulling us away from wherever we are.  It also presents us with an opportunity cost.  When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment.  If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it.  Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this?  Does my need and interest outweigh all of the other demands upon my attention?  Can I focus?

In most circumstances, we will decline the challenge.  Whatever it is, it is not salient enough, not alluring enough.  It is not so much that we fear commitment as we feel the pressing weight of our other commitments.  We have other places to spend our limited attention.  This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”.  It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.

The emergence of the ‘tl;dr’ phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span.  Attention spans are not shortening: ten-year-olds will still drop everything to spend eight days reading a nine-hundred-page fantasy novel.  Instead, attention has entered an era of hypercompetitive development.  Twenty years ago only a few media clamored for our attention.  Now, everything from video games to chatroulette to real-time Twitter feeds to text messages demands our attention.  Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, figuring into the calculation we make when we decide to go all in or hold back.

The most obvious effect of this hypercompetitive development of attention is the shortening of the text.  Under the tyranny of ‘tl;dr’, three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment.  More and more, our diet of text comes in these ‘bite-sized’ chunks.  Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything.  The truth is more complex.  Our diet will continue to consist of a mixture of short and long-form texts.  In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form.  It is digestible.  But it need not be vacuous.  Countless specialty blogs deliver highly-concentrated texts to audiences who need no introduction to the subject material.  They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit.  Here, the phenomenon of ‘tl;dr’ reveals its Achilles’ Heel: the shorter the text, the less invested you are.  You yield more easily to the centrifugal force.  You are more likely to navigate away.

There is a cost incurred both for substance and the lack thereof.  Such are the dilemmas of hypertext.

II:  Schwarzschild Radius

It appears inarguable that 2010 is the Year of the Electronic Book.  The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market.  Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package.  Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world.  The electronic book is an inevitability.

At this point a question needs to be asked: what’s so electronic about an electronic book?  If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar.  Too familiar.  This is not an electronic book.  This is ‘publishing in light’.  I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning.  An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters.  It doesn’t even necessarily begin with that translation.  Instead, first consider the text qua text.  What is it?  Who is it speaking to?  What is it speaking about?

These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light.  That act of murder would give us less than we had before, because texts published in light essentially disavow the medium within which they are situated.  They are less useful than typeset texts, purposely stripped of their utility to be shoehorned into a new medium.  This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader.  Nor does it make the electronic book an intrinsically alluring object.  That’s an interesting point to consider, because hypertext is intrinsically alluring.  The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who had any relationship to the text.  If an electronic book does not offer a new relationship to the text, then what precisely is the point?  Portability?  Ubiquity?  These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring.  This is the visible difference between a book that has been printed in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is impossibly alluring.  At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.

Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium.  But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe.  Today’s publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium.  For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.

What does the electronic book look like?  Does it differ at all from the hyperdocuments we are familiar with today?  In fifteen years of design experimentation, we’ve learned a lot of ways to present, abstract and play with text.  All of these are immediately applicable to the electronic book.  The electronic book should represent the best of what 2010 has to offer and move forward from that point into regions unexplored.  The printed volume took nearly fifty years to evolve into its familiar hand-sized editions.  Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book.  We shouldn’t try to constrain our idea of what an electronic book can be based upon what the book has been.  Over the next few years, our innovations will surprise us.  We won’t really know what electronic books look like until we’ve had plenty of time to play with them.

The electronic book will not be immune from the centrifugal force which is inherent to the medium.  Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument.  Yet we come to books with a sense of commitment.  We want to finish them.  But what, exactly, do we want to finish?  The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does.  So does an electronic book have a beginning and an end?  Or is it simply a densely clustered set of texts with a well-defined path traversing them?  From the vantage point of 2010 this may seem like a faintly ridiculous question.  I doubt that will be the case in 2020, when perhaps half of our new books are electronic books.  The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book.  There is no way that the electronic book can remain apart, indifferent and pure.  It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader.  More of a gradient than a boundary.

It remains unclear how any such construction can constitute an economically successful entity.  Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe.  The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen.  This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear.  Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.
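
Nelson’s transclusion is easier to grasp as a structure than as a slogan.  Here is a minimal sketch – in Python, with invented names, and nothing like Xanadu’s actual data model – of the core idea: a hyperdocument holds references into its sources rather than copies of them, so every borrowed span remains attributable to, and inseparable from, its origin.

```python
# A toy illustration of transclusion (names and structure are hypothetical).
# A HyperDoc is assembled from references into other documents, not from
# pasted copies, so the source of every enclosed span can always be traced.

from dataclasses import dataclass

@dataclass
class SourceDoc:
    author: str
    text: str

@dataclass
class Transclusion:
    source: SourceDoc   # the document being enclosed by reference
    start: int          # span within the source, by character offset
    end: int

@dataclass
class HyperDoc:
    parts: list         # a mix of original prose (str) and Transclusions

    def render(self) -> str:
        """Assemble readable text, pulling transcluded spans live from their sources."""
        out = []
        for part in self.parts:
            if isinstance(part, Transclusion):
                out.append(part.source.text[part.start:part.end])
            else:
                out.append(part)
        return "".join(out)

    def credits(self) -> set:
        """Every transcluded span remains attributable to its source author."""
        return {p.source.author for p in self.parts if isinstance(p, Transclusion)}

nelson = SourceDoc("Ted Nelson", "Everything is deeply intertwingled.")
essay = HyperDoc(["As Nelson put it, ", Transclusion(nelson, 0, 34), ", and links prove it."])
print(essay.render())    # As Nelson put it, Everything is deeply intertwingled, and links prove it.
print(essay.credits())   # {'Ted Nelson'}
```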

We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books.  (For those not familiar with astrophysics, the Schwarzschild Radius is the boundary to a black hole.  Once you’re on the wrong side you’re doomed to fall all the way in.)  On one side – our side – things look much as they do today.  Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical.  On the other side, electronic books rapidly become almost completely unrecognizable.  It’s not just the financial model which disintegrates.  As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform.  The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words.  Each of these fragments points directly at its antecedent and descendant, or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each.  The primacy of the single narrative can not withstand the centrifugal force of the medium, any more than the newspaper or the magazine could.  Texts will present themselves as intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both.  This will completely confound our expectations of linearity in the text.

We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward, a fault more of our nervous systems than our browsers.  We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on.  This is the greatest threat to the book, that it simply expands beyond our ability to focus upon it.  Our consciousness can entertain a universe of thought, but it can not entertain the entire universe at once.  Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it.  With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason, significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.

What ever happened to the book?  It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever.  This is not an ending, any more than birth is an ending.  But it is a transition, at least as profound and comprehensive as the invention of moveable type.  It’s our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book.  Transitions are chaotic, but they are also fecund.  The seeds of the new grow in the humus of the old.  (And if it all seems sudden and sinister, I’ll simply note that Nietzsche said that a new era nearly always looks demonic to the age it obsolesces.)

III:  Finnegans Wiki

So what of Aristotle?  What does this mean for the narrative?  It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts.  But what about stories?  From time out of mind we have listened to stories told by the campfire.  The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale.  For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.

Will we lose all of this?  Can narratives stand up against the centrifugal forces of hypertext?  Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form.  The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind.  There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot.  A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity. Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force.  We occasionally stay up all night reading a book that we ‘couldn’t put down’, precisely because of this momentum.  It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.

This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books.  Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet.  How can any text hope to stand against that?

And yet, some do.  Children unplugged to read each of the increasingly-lengthy Harry Potter novels, as teenagers did for the Twilight series.  Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination.  None of this is high literature, but it is literature capable of resisting all our alluring distractions.  This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc.  We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad.  That is one direction, a direction literary publishers will pursue, because that’s where the money lies.

There are two other paths open for literature, nearly diametrically opposed.  The first was taken by JRR Tolkien in The Lord of the Rings.  Although hugely popular, the three-book series has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating.  Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it.  And although readers do finish the book, in a very real sense they do not leave that universe.  The fantasy genre, which Tolkien single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations.  Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books.  Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy.  Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers.  This is another direction for the book.  While every author will not be a Tolkien, a few authors will work hard to create a universe so potent and broad that readers will be tempted to inhabit it.  (Some argue that this is the secret of JK Rowling’s success.)

Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext.  There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures.  But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas.  The book was written and published before the digital computer had been invented, yet it even features an innovation which is reminiscent of hypertext.  That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power.  It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite.  The text is overloaded with meaning, so much so that the mind can’t take it all in.  Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant.  But there is another possibility.  In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed.  In this, Finnegans Wake could be seen as a type of science fiction, not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works), but rather a text that pointed the way to what all texts would become, performance by example.  As texts become electronic, as they melt and dissolve and link together densely, meaning multiplies exponentially.  Every sentence, and every word in every sentence, can send you flying in almost any direction.  The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.

It has been said that all of human culture could be reconstituted from Finnegans Wake.  As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture.  Everything will be there, all strung together.  And that’s what happened to the book.

Nothing Special

I.

And so it begins.

Last week, YouTube began the laborious process of removing all clips of The Daily Show with Jon Stewart at the request of VIACOM, parent to Paramount Television, which runs Comedy Central, home to The Daily Show. This is no easy task; there are probably tens of thousands of clips of The Daily Show posted to YouTube. Not all of them are tagged well, so – despite its every effort – YouTube is going to miss some of them, opening itself up to continuing legal action from VIACOM.

It is as all of YouTube’s users feared: now that billions of dollars are at stake, YouTube is playing by the rules. The free-for-all of video clip sharing which brought YouTube to greatness is now being threatened by that very success. Because YouTube is big enough to be worth suing – part of Google, which has a market capitalization of over 160 billion dollars – it is now subject to the same legal restrictions on distribution as all of the other major players in media distribution. In other words, YouTube’s ability to hyperdistribute content has been entirely handicapped by its new economic vulnerability. Since this hyperdistribution capability is the quintessence of YouTube, one wonders what will happen. Can YouTube survive as its assets are slowly stripped away?

Mark Cuban’s warnings have come back to haunt us; Cuban claimed that only a moron would buy YouTube, built as it is on the purloined copyrights of others. Cuban’s critique overlooked the enormous value of YouTube’s peer-produced content, something I have noted elsewhere. Thus, this stripping of assets will not diminish the value of YouTube. Instead, it will reveal the true wealth of peer-production.

In the past week I’ve used YouTube at least five times daily – but not to watch The Daily Show. I’ve been watching a growing set of political advertisements, commentary and mashups, all leading up to the US midterm elections. YouTube has become the forum for the sharing of political videos, and, while some of them are brazenly lifted from CNN or FOX NEWS, most are produced by the campaigns, and are intended to be hyperdistributed as widely as possible. Political advertising and YouTube are a match made in heaven. When political activism crosses the line into citizen journalism (such as in the disturbing clips of people being roughed up by partisan thugs) that too is hyperdistributed via YouTube. Anything that’s captured on a video camera, or television tuner, or mobile telephone can (and frequently does) end up on YouTube in a matter of minutes.

Even as VIACOM pressed its draconian copyright claims, the folly of its old-school thinking became ever more apparent. Oprah featured a segment on Juan Mann, Sick Puppies and their now-entirely-overexposed video. It’s been up on YouTube for five weeks, has now topped five million views, and four major record labels are battling for the chance to sign Sick Puppies to a recording contract. It reveals the fundamental paradox of hyperdistribution: the more something is shared, the more valuable it becomes. Take The Daily Show off of YouTube, and fewer people will see it. Fewer people will want to catch the broadcast. Ratings will drop off. And you run the risk of someone else – Ze Frank, perhaps, or another talented upstart – filling the gap.

Yes, Comedy Central is offering The Daily Show on their website, for those who can remember to go there, can navigate through the pages to find the show they want, can hope they have the right video software installed, etc. But Comedy Central isn’t YouTube. It isn’t delivering half of the video seen on the internet. YouTube has become synonymous with video the way Google has become synonymous with search. Comedy Central ignores this fact at its peril, because it’s relying on a change in audience behavior.

II.

Television producers are about to learn the same lessons that film studios and the recording industry learned before them: what the audience wants, it gets. Take your clips off of YouTube, and watch as someone else – quite illegally – creates another hyperdistribution system for them. Attack that system, and watch as it fades into invisibility. Those attacks will force it to evolve into ever-more-undetectable forms. That’s the lesson of music-sharing site Napster, and the lesson of torrent-sharing site Supernova. When you attack the hyperdistribution system, you always make the problem worse.

In its rude, thuggish way, VIACOM is asserting the primacy of broadcasting over hypercasting. VIACOM built an empire from television broadcasting, and makes enormous revenues from it. They’re unlikely to do anything that would encourage the audience toward a new form of distribution. At the same time, they’re powerless to stop that audience from embracing hyperdistribution. So now we get to see the great, unspoken truth of television broadcasting – it’s nothing special. Buy a chunk of radio spectrum, or a satellite transponder, or a cable provider: none of it gives you any inherent advantage in reaching the audience. Ten years ago, they were a lock; today, they’re only an opportunity. There are too many alternate paths to the audience – and the audience has too many paths to one another.

This doesn’t mean that broadcasting will collapse – at least not immediately. It does mean that – finally – there’s real competition. The five media megacorporations in the United States now have several hundred thousand motivated competitors. Only a few of these will reach the “gold standard” of high-quality production technique which characterizes broadcast media. The audience doesn’t care. The audience prizes immediacy, relevancy, accessibility, and above all, salience. There’s no way that five companies, however rich and productive, can satisfy the needs of an audience which has come to expect that it can get exactly what it wants, when it wants, wherever it wants. Furthermore, there’s no way to stop anything that gets broadcast by those companies from being hyperdistributed and added to the millions of available choices. You’d need to lock down every PC, every broadband connection, and every television in the world to maintain a level of control which, just a few years ago, came effortlessly.

VIACOM may sense the truth of this, even as they act against this knowledge. Rumors have been swirling around the net, indicating that YouTube and VIACOM have come to a deal, and that the clips will not be removed – this, while they’re still being deleted. VIACOM, caught in the inflection point between broadcasting and hypercasting, doesn’t fully understand where its future interests lie. In the meantime, it thrashes about as its lizard-brained lawyers revert to the reflexive habits of cease-and-desist.

III.

This week, after two years of frustration and failure, I managed to install and configure MythTV. MythTV is a LINUX-based digital video recorder (DVR) which has been in development for over four years. It has matured enormously in that time, but it still took every last one of my technical skills – plus a whole lot of newly-acquired ones – to get it properly set up. Even now, after some four days of configuration, I’m not quite finished. That puts MythTV miles out of the range of the average viewer, who just wants a box they can drop into their system, turn on, and play with. Those folks purchase a TiVo. But TiVo doesn’t work in Australia – at least, not without the same level of technical gymnastics required to install MythTV. If I had digital cable – spectacularly uncommon in Australia – I could use Foxtel iQ, a very polished DVR with multiple tuners, full program guide, etc. But I have all of that, right now, running on my PC, with MythTV.

I’ve never owned a DVR, though I have written about them extensively. The essential fact of the DVR is that it coaxes you away from television as a live medium. That’s an important point in Australia, where most of us have just five broadcast channels to pick from: frequently, there’s nothing worth watching. But, once you’ve set up the appropriate recording schedule on your DVR, the device is always filled with programming you want to watch. People with DVRs tend to watch 30% more television than those without, and they tend to enjoy it more, because they’re getting just the programmes they find most salient.

Last night – the first night of a relatively complete MythTV configuration – I went to attend a friend’s lecture, but left MythTV to record the evening’s news programmes. I came back in, and played the recorded programmes, but took full advantage of the DVR’s ability to jump through the content. I skipped news stories I’d seen earlier in the day (plus all of the sport reportage), and reviewed the segments I found most interesting. I watched 2 hours of television in about 45 minutes, and felt immensely satisfied at the end, because, for the first time, I could completely command the television broadcast, shaping it to the demands of salience. This is the way TV should be watched, I realized, and I knew there’d be no going back.

My DVR has a lot in common with YouTube. Both systems skirt the law; in my case the programming schedules which I download from a community-hosted site are arguably illegal under Australian copyright law, and recording a program at all – either in the US or in Australia – is also illegal. (You don’t sue your audience, and you don’t waste your money suing a not-for-profit community site.) Both systems give me immediate access to content with enormous salience; I see just what I want, just when I want to. YouTube is home to peer-produced content, while the DVR houses professional productions, works that meet the “gold standard”. I have already begun to conceive of them as two halves of the same video experience.

It won’t be long before some enterprising hacker integrates the two meaningfully: perhaps a YouTube plugin for MythTV? (MythTV is a free and open source application, available for anyone to modify or improve.) Perhaps it will be some deal struck between the broadcasters and YouTube. Or perhaps both will occur. This would represent the kind of “convergence” much talked about in the late 1990s, and all but abandoned. Convergence has come; from my point of view it doesn’t matter whether I use MythTV or YouTube or their hybrid offspring. All I care about is watching the programmes that interest me. How they get delivered is nothing special.

Rearranging the Deck Chairs

I. Everything Must Go!

It’s merger season in Australia. Everything must go! Just moments after the new media ownership rules received the Governor-General’s royal assent, James Packer sold off his family’s crown jewel, the NINE NETWORK – consistently Australia’s highest-rated television broadcaster since its inception, fifty years ago – along with a basket of other media properties. This sale effectively doubled his already sizeable fortune (now hovering at close to 8 billion Australian dollars) and gave him plenty of cash to pursue the 21st-century’s real cash cow: gambling. In an era when all media is more-or-less instantaneously accessible, anywhere, from anyone, the value of a media distribution empire – built on the toppling pillars of government regulation of the airwaves and a cheap stream of high-quality American television programming – is rapidly approaching zero. Yes, audiences might still tune in to watch the footy – live broadcasting being uniquely exempt from the pressures of the economics of the network – but even there the number of distribution choices is growing, with cable, satellite and IPTV all demanding a slice of the audience. Television isn’t dying, but it no longer guarantees returns. Time for Packer to turn his attention to the emerging commodity of the third millennium: experience. You can’t download experience: you can only live through it. For those who find the dopamine hit of a well-placed wager the experiential sine qua non, there Packer will be, Asia’s croupier, ready to collect his winnings. Who can blame him? He (and, undoubtedly, his well-paid advisors) has read the trend lines correctly: the mainstream media is dying, slowly starved of attention.

The transformation which led to the sale of NINE NETWORK is epochal, yet almost entirely subterranean. It isn’t as though everyone suddenly switched off the telly in favor of YouTube. It looks more like death from a thousand cuts: DVDs, video games, iPods, and YouTube have all steered eyeballs away from the broadcast spectrum toward something both entirely digital and (for that reason) ultimately pervasive. Chip away at a monolith long enough and you’re left with a pile of rubble and dust.

On a somewhat more modest scale, other media moguls in Australia have begun to hedge their bets. Kerry Stokes, the owner of Channel 7, made a strategic investment in Western Australia Publishing. NEWS Corporation, the original Australian media empire, purchased a minority stake in Fairfax, the nation’s largest newspaper publisher (and is eyeing a takeover of Canadian-owned Channel TEN). To see these broadcasters buying into newspapers, four decades after broadcast news effectively delivered death-blows to newspaper publishing, highlights the sense of desperation: they’re hoping that something, somewhere in the mainstream media will remain profitable. Yet there are substantial reasons to expect that these long-shot bets will fail to pay out.

II. The Vanilla Republic

It’s election season in America. Everyone must go! The mood of the electorate in the darkening days of 2006 could best be described as surly. An undercurrent of rage and exasperation afflicts the body politic. This may result in a left-wing shift in the American political landscape, but we’re still two weeks away from knowing. Whatever the outcome, this electoral cycle signifies another epochal change: the mainstream media have lost their lead as the reporters of political news. The public at large views the mainstream media skeptically – these were, after all, the same organizations which whipped the republic into a frenzied war-fever – and, with the regret typical of a very disgruntled buyer, Americans are refusing to return to the dealership for this year’s model. In previous years, this would have left voters in the dark: it was either the mainstream media or ignorance. But, in the two years since the Presidential election, the “netroots” movement has flowered into a vital and flexible apparatus for news reportage, commentary and strategic thinking. Although the netroots movement is most often associated with left-wing politics, both sides of the political spectrum have learned to harness blogs, wikis, feeds and hyperdistribution services such as YouTube for their own political ends. There is nothing quintessentially new about this; modern political parties, emerging in Restoration-era London, used printing presses, broadsheets and daily newspapers – freely deposited in the city’s thousands of coffeehouses – as the blogs of their era. Political news moved very quickly in 17th-century England, to the endless consternation of King Charles II and his censors.

When broadcast media monopolized all forms of reportage – including political reporting – the mass mind of the 20th-century slotted into a middle-of-the-road political persuasion. Neither too liberal, nor too conservative, the mainstream media fostered a “Vanilla Republic,” where centrist values came to dominate political discourse. Of course, the definition of “centrist” values is itself highly contentious: who defines the center? The right-wing decries the excesses of “liberal bias” in the media, while the left-wing points to the “agenda of the owners,” the multi-billionaire stakeholders in these broadcast empires. This struggle for control over the definition of the center characterized political debate at the dawn of the 21st-century – a debate which has now been eclipsed, or, more precisely, overrun by events.

In April 2004, Markos Moulitsas Zúniga, a US army veteran who had been raised in civil-war-torn El Salvador, founded dKosopedia, a wiki designed to be a clearing-house for all sorts of information relating to left-wing netroots activities. (The name is a nod to Wikipedia.) While the first-order effect of the network is to gather individuals together into a community, once the community has formed, it begins to explore the bounds of its collective intelligence. Political junkies are the kind of passionate amateurs who defy the neat equation of amateur as amateurish. While they are not professional – meaning that they are not in the employ of politicians or political parties – political junkies are intensely well-informed, regarding this as both a civic virtue and a moral imperative. Political junkies work not for power, but for the greater good. (That opposing parties in political debate demonize their opponents as evil is only to be expected given this frame of mind.) The greater good has two dimensions: to those outside the community, it is represented as us vs. them; internally, it is articulated through the community’s social network: those with particular areas of expertise are recognized for their contributions, and their standing in the community rises appropriately.

This same process powers dKosopedia’s parent site, Daily Kos (dKos), a political blog where any member can freely write entries – known as “diaries” – on any subject of interest, political, cultural or (more rarely) nearly anything else. The very best of these contributors became the “front page” authors of Daily Kos, their entries presented to the entire community; but part of the responsibility of a front-page contributor is that they must constantly scan the ever-growing set of diaries, looking for the best posts among them to “bump” to front-page status. (This article will be cross-posted to my dKos diary, and we’ll see what happens to it.) Any dKos member can make a comment on any post, so any community member – whether a regular diarist or regular reader – can add their input to the conversation. The strongly self-reinforcing behavior of participation encourages “Kossacks” (as they style themselves) to share, pool, and disseminate the wealth of information gathered by over two million readers. Daily Kos has grown nearly exponentially since its founding days, and looks to reach its highest traffic levels ever as the mid-term elections approach.

III. My Left Eyeball

Salience is the singular quality of information: how much does this matter to me? In a world of restricted media choices, salience is a best-fit affair; something simply needs to be relevant enough to garner attention. In the era of hyperdistribution, salience is a laser-like quality; when there are a million sites to read, a million videos to watch, a million songs to listen to, individuals tailor their choices according to the specifics of their passions. Just a few years ago – as the number of media choices began to grow explosively – this took considerable effort. Today, with the rise of “viral” distribution techniques, it’s a much more straightforward affair. Although most of us still rely on ad-hoc methods – polling our friends and colleagues in search of the salient – it’s become so easy to find, filter, and forward media through our social networks that we have each become our own broadcasters, transmitting our own passions through the network. Where systems have been organized around this principle – for instance, YouTube, or Daily Kos – this information flow is greatly accelerated, and the consequential outcomes amplified. A Sick Puppies video posted to YouTube gets four million views in a month, and ends up on NINE NETWORK’s 60 Minutes broadcast. A Democratic senatorial primary in Connecticut becomes the focus of national interest – a referendum on the Iraq war – because millions of Kossacks focus attention on the contest.

Attention engenders salience, just as salience engenders attention. Salience satisfied reinforces relationship; to have received something of interest makes it more likely that I will receive something of interest in the future. This is the psychological engine which powers YouTube and Daily Kos, and, as this relationship deepens, it tends to have a zero-sum effect on its participants’ attention. Minutes watching YouTube videos are advertising dollars lost to NINE NETWORK. Hours spent reading Daily Kos are eyeballs and click-throughs lost to The New York Times. Furthermore, salience drives out the non-salient. It isn’t simply that a Kossack will read less of the Times; eventually they’ll read it rarely, if at all. Salience has been satisfied, so the search is over.

While this process seems inexorable, given the trends in media, only very recently has it become a ground-truth reality. Just this week I quipped to one of my friends – equally a devotee of Daily Kos – that I wanted “an IV drip of dKos into my left eyeball.” I keep the RSS feed of Daily Kos open all the time, waiting for the steady drip of new posts. I am, to some degree, addicted. But, while I always hunger for more, I am also satisfied. When I articulated the passion I now had for Daily Kos, I also realized that I hadn’t been checking the Times as frequently as before – perhaps once a day – and that I’d completely abandoned CNN. Neither website possessed the salience needed to hold my attention.

I am certainly more technically adept than the average user of the network; my media usage patterns tend to lead broader trends in the culture. Yet there is strong evidence to demonstrate that I am hardly alone in this new era of salience. How do I know this? I recently received a link – through two blogs, Daily Kos and The Left Coaster – to a political campaign advertisement for Missouri senatorial candidate Claire McCaskill. The ad, featuring Michael J. Fox, diagnosed with an early-onset form of Parkinson’s Disease, clearly shows him suffering the worst effects of the disorder. Within a few hours after the ad went up on the McCaskill website, it had already been viewed hundreds of thousands – and probably millions – of times. People are emailing the link to the ad (conveniently provided below the video window, to spur on viral distribution) all around the country, and likely throughout the world. “All politics is local,” Fox says. “But it’s not always the case.” This, in a nutshell, describes both the political and the media landscapes of the 21st-century. Nothing can be kept in a box. Everything escapes.

Twenty-five years ago, in The Third Wave, Alvin Toffler predicted the “demassification of media.” Looking at the ever-multiplying number of magazines and television channels, Toffler predicted a time when the mass market fragmented utterly, into an atomic polity, entirely composed of individuals. Writing before the Web (and before the era of the personal computer) he offered no technological explanation for how demassification would come to pass. Yet the trend lines seemed obvious.

The network has grown to cover every corner of the planet in the quarter-century since the publication of The Third Wave – over two billion mobile phones, and nearly a billion networked computers. A third of the world can be reached, and – more significantly – can reach out. Photographs of bombings in the London Underground, captured on mobile phone cameras, reach Flickr before they’re broadcast on the BBC. Islamic insurgents in Iraq videotape, encode and upload their IED attacks to filesharing networks. China fights a losing battle to restrict the free flow of information – while its citizens buy more mobile phones, every year, than the total number ever purchased in the United States. Give individuals a network, and – sooner, rather than later – they’ll become broadcasters.

One final, crucial technological element completes the transition into the era of demassification – the release of Microsoft’s Internet Explorer version 7.0. Long delayed, this most important of all web browsers finally includes support for RSS – the technology behind “feeds.” Suddenly, half a billion PC users can access the enormous wealth of individually-produced and individually-tailored news resources which have grown up over the last five years. But they can also create their own feeds, either by aggregating resources they’ve found elsewhere, or by creating new ones. The revolution that began with Gutenberg is now nearly complete; while the Web turned the network into a printing press, RSS gives us the ability to hyperdistribute publications so that anyone, anywhere, can reach everyone, everywhere.
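
To make the mechanics concrete, here is a rough sketch – in Python, using only the standard library, with placeholder URLs rather than real endpoints – of what a feed reader does when it aggregates several RSS feeds into a single reading list.  A real aggregator would also handle Atom, caching, publication dates and error cases, but the shape of the data is just this simple: a channel containing items, each with a title and a link.

```python
# A minimal feed aggregator (illustrative only; the feed URLs are placeholders).
# RSS 2.0 documents have the shape <rss><channel><item>...</item></channel></rss>.

import urllib.request
import xml.etree.ElementTree as ET

FEEDS = [
    "https://example.com/dailykos.rss",   # hypothetical endpoints, not real feeds
    "https://example.com/nytimes.rss",
]

def fetch_items(url):
    """Download one RSS feed and yield (source, title, link) for each item."""
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    channel = root.find("channel")
    if channel is None:                    # not an RSS 2.0 document; skip it
        return
    source = channel.findtext("title", default=url)
    for item in channel.findall("item"):
        yield source, item.findtext("title", ""), item.findtext("link", "")

def aggregate(urls):
    """Merge several feeds into one reading list – the reader as broadcaster."""
    reading_list = []
    for url in urls:
        reading_list.extend(fetch_items(url))
    return reading_list

if __name__ == "__main__":
    for source, title, link in aggregate(FEEDS):
        print(f"[{source}] {title} – {link}")
```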

Now all is dissolution. The mainstream media will remain potent for some time, centers for the creation of content, but they must now face the rise of the amateurs: a battle of hundreds versus billions. To compete, the media must atomize, delivering discrete chunks of content through every available feed. They will be forced to move from distribution to seduction: distribution has been democratized, so only the seduction of salience will carry their messages around the network. But the amateurs are already masters of this game, having grown up in an environment where salience forms the only selection pressure. This is the time of the amateur, and this is their chosen battlefield. The outcome is inevitable. Deck chairs, meet Titanic.

Why Copyright Doesn’t Matter

 

If you overvalue possessions, people begin to steal. – Lao Tzu, Tao Te Ching
I.

Although the New York Times found the new film Alternative Freedom a sloppy, disjointed, jingoistic mess, the movie does break new ground, highlighting the growing threat to public expression posed by restrictive copyright laws and digital rights management technologies. Supporting the “copyfight” thesis – that copyright law is slowly strangling the public’s ability to sample, remix and redistribute the ideas sold to them by entertainment companies – Alternative Freedom ventures beyond these familiar tropes: as video game systems, mobile phones and even printer toner cartridges become ever more restrictive in the way they operate, we’re being sold devices which dictate their own terms of use. Any deviation from that usage is, in effect, a violation of copyright law. With appropriate legal penalties.

Coincidentally, this week the US Congress began to deliberate sweeping, almost draconian extensions to the nation’s copyright laws, adding odious criminal penalties for what have – until now – been civil violations. Large-scale, commercial violators of copyright have always been criminals; now even the casual user could become a felon for any redistribution of content under copyright. As peer-to-peer filesharing networks grow ever broader in scope, become ever more difficult to detect, and ever harder to disrupt and destroy, the pressure builds. In essence, this is the last legal gasp of the entertainment industry in its effort to maintain control over the distribution of its productions.

I have previously discussed the futility of “economic censorship” – which is what this proposed law before Congress amounts to – and I can see nothing in these new laws which will slow the inexorable slide to an era where any media distributed anywhere on the planet becomes instantly available everywhere on the planet, to everyone. This is the essence of “hyperdistribution,” a recently-discovered, newly-emergent quality of our communications networks. You can’t make a network that won’t hyperdistribute content throughout its span – or rather, if you did, it wouldn’t look anything like the networks we use today. It seems unlikely that we would suddenly replace our entire global network infrastructure with something that would give us significantly less capability. Yet this must happen, if the long march to hyperdistribution is to be stopped.

II.

This is a war for eyeballs and audiences. An entertainment producer spends significant time and money carefully crafting content for a mass audience, expecting that audience to pay for the privilege of enjoying the production. This is possible only insofar as access to the content can be absolutely restricted. If the producer only makes physical prints of a film, and only shows it in a theatre where everyone has been thoroughly searched for any sort of recording device (these days, that list would include both mobile phones and iPods), they might be able to restrict piracy. But only if there are no digital intermediates of the film, no screeners for reviewers mailed out on DVDs, no digital print for projection in the latest whiz-bang movie theatres. As soon as there is any digital representation of the production, copies of it will begin to multiply. It’s in the nature of the bits to generate more and more copies of themselves. These bits eventually make their way onto the network, and hyperdistribution begins.

There is, in this evaluation, an assumption that this content has value to an audience.
Many films are made each year – in Hollywood, Bollywood, Hong Kong, and throughout the world – yet, most of the time, people don’t care to see them. Films are big, complex, and frequently flawed; there is no such thing as a perfect film, and, more often than not, a film’s flaws outweigh its strengths, so the film fails. This wasn’t an issue before the advent of television – before 1947, film was the only way to enjoy the moving image. Over the last sixty years, the film industry has learned how to accommodate television – with cable and free-to-air broadcasts of their films, and, most profitably, with the huge industry created by the VCR and the DVD. Even so, in the era of the VCR viewers had perhaps five or six channels of broadcast television to choose from. When the DVD was introduced, viewers had perhaps fifty or sixty channels to watch – more substantial, but still nothing to be entirely worried about. Now the number of potential viewing choices is essentially infinite. In a burst of exponential growth, the video sharing site YouTube is about to surpass CNN in web traffic, and in just one week went from 35 million videos viewed to over 40 million. That kind of growth is clearly unsustainable, but it’s also just as clearly indicative that YouTube is becoming a foundation web service, as significant as Google or Wikipedia. And this is why copyright doesn’t matter.IIIIt’s frequently noted that much of the content up on YouTube is presented in violation of someone else’s copyright. It might be little snippets from South Park, The Daily Show, or Saturday Night Live. The media megacorporations who control those copyrights are constantly in contact with YouTube, asking them to remove this content as quickly as it appears – and YouTube is happy to oblige them. But YouTube is subject to “swarm effects,” so as soon as something is removed, someone else, from somewhere else, posts it again. Anything that is popular has a life of its own – outside of its creator’s control – and YouTube has become the focal point to express this vitality.At the moment, many of the popular videos on YouTube fall into this category of content-in-violation-of-copyright. But not all of them. There’s plenty on YouTube which has been posted by people who want to share their work with others. A lot of this is instructional, informational, or just plain odd. It’s outside the mainstream, was never meant to be mainstream, and yet, because it’s up there, and because so many people are looking to YouTube for a moment’s diversion or enlightenment, it tends, over time, to find its audience. Once something has found just one member of its audience, it’s quickly shared throughout that person’s social network, and rapidly reaches nearly the entirety of that audience. That’s the find-filter-forward operation of a social network in an era of hyperdistribution and microaudiences. YouTube is enabling it. That’s why YouTube has gotten so popular, so quickly: it’s filling an essential need for the microaudience.Is there a place for professionally-produced content in an age of social networks and microaudiences? This is the big question, the question that no one can answer, because the answer can only emerge over time. Attention is a zero-sum: if I’m watching this video on YouTube, I’m not watching that TV show or movie. If I’m thoroughly caught up in the five YouTube links I get sent each day – which will quickly become fifty, then five hundred – how can I find any time to watch the next Hollywood special effects extravaganza? 
And why would I want to? It’s not what my friends are watching: they’ve sent me links to what they’re watching – and that’s on YouTube.So go ahead, Congress: kill the entertainment industry by doing their bidding. Let them lock their content up so completely that its utility – with respect to the network – approaches zero. If people can’t find-filter-forward content, it won’t exist for them. Lock something up, and it becomes less and less important, until no one cares about it at all. People are increasingly concerned with the media they can share freely, and this points to a future where the amateur trumps the professional, because the amateur understands the first economic principle of hyperdistribution: the more something is shared, the more valuable it becomes.

Going into Syndication

I.

Content. Everyone makes it. Everyone consumes it. If content is king, we are each the means of production. Every email, every blog post, every text message, all of these constitute production of content. In the earliest days of the web this was recognized explicitly; without a lot of people producing a lot of content, the web simply wouldn’t have come into being. Somewhere toward the end of 1995, this production formalized, and we saw the emergence of a professional class of web producers. This professional class asserts its authority over the audience on the basis of two undeniable strengths: first, it cultivates expertise; second, it maintains control over the mechanisms of distribution. In the early years of the Web, both of these strengths presented formidable barriers to entry. As we emerged from the amateur era of “pages about kitty-cats” into the branded web era of CNN.com, NYT.com, and AOL.com, the swarm of Internet users naturally gravitated to the high-quality information delivered through professional web sites. The more elite (and snobbish) of the early netizens decried this colonization of the electronic space by the mainstream media; they preferred the anarchic, imprecise and democratic community of newsgroups to the imperial aegis of Big Media.

In retrospect, both sides got it wrong. Anarchy was not replaced by order, nor did attention centralize permanently around a suite of “portal” sites, though for at least a decade it seemed that precisely this was happening. Nevertheless, the swarm has a way of consistently surprising us, of finding its way out of any box drawn up around it. If, for a period of time, it suited the swarm to cozy up to the old and familiar, this was probably due more to habit than to any deep desire. When thrust into the hyper-connected realm of the Web, our natural first reaction is to seek signposts, handholds against the onrush of so much that clamors about its own significance. In cyberspace you can implicitly trust the BBC, but when it comes to The Smoking Gun or Disinformation, that trust must be earned. Still, once that trust has been won, there is no going back. This is the essence of the process of media fragmentation. The engine that drives fragmentation is not increasing competition; it is increasing familiarity with the opportunities on offer.

We become familiar with online resources through “the Three Fs”. We find things, we filter them, we forward them along. Social networks evolve the media consumption patterns which suit themselves best; these patterns often correlate only weakly with the content on offer from mainstream outlets. Over time, social networks tend to favor the obscure over the quotidian, as the obscure is the realm of the cognoscenti. This means that fragmentation is both inevitable and bound to accelerate.
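
To make the mechanics of the Three Fs concrete, here is a minimal sketch of how a single item might propagate through a toy social network. Everything in it – the graph, the topic tags, the forward probability – is an illustrative assumption, not a description of any real system.

```python
import random

# A toy social graph: each person is connected to a handful of friends.
# The graph, the interests, and the forward probability are illustrative
# assumptions, not anything specified in the essay.
FRIENDS = {
    "ana":   ["ben", "carla", "dev"],
    "ben":   ["ana", "carla", "erin"],
    "carla": ["ana", "ben", "dev", "erin"],
    "dev":   ["ana", "carla"],
    "erin":  ["ben", "carla"],
}

def find_filter_forward(item_topic, seed, interests, forward_probability=0.75):
    """Spread one item through the graph using the Three Fs.

    A person who *finds* the item (receives a link) applies a *filter*
    (do my interests match its topic?) and, if it passes, *forwards*
    it to each friend with some probability.
    """
    seen = {seed}
    frontier = [seed]
    while frontier:
        person = frontier.pop()
        if item_topic not in interests.get(person, set()):
            continue  # filtered out: this person never forwards it
        for friend in FRIENDS[person]:
            if friend not in seen and random.random() < forward_probability:
                seen.add(friend)
                frontier.append(friend)
    return seen

interests = {
    "ana": {"synths"}, "ben": {"synths", "cats"}, "carla": {"synths"},
    "dev": {"cats"}, "erin": {"synths"},
}
print(find_filter_forward("synths", seed="ana", interests=interests))
```

The point of the sketch is simply that an item matching a network’s interests saturates that network within a few hops, while an item that fails the filter dies almost immediately – no matter how prominent its publisher.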

Fragmentation spreads the burden of expertise onto a swarm of nanoexperts. Anyone who is passionate, intelligent, and willing to make the attention investment to master the arcana of a particular area of inquiry can transform themselves into a nanoexpert. When a nanoexpert plugs into a social network that values this expertise (or is driven toward nanoexpertise in order to raise their standing within an existing social network), this investment is rewarded, and the connection between nanoexpert and network is strongly reinforced. The nanoexpert becomes “structurally coupled” with the social network – for as long as they maintain that expertise against all competitors. This transformation is happening countless times each day, across the entire taxonomy of human expertise. This is the engine which has deprived the mainstream media of their position of authority.

While the net gave every appearance of centralization, it never allowed for a monopoly on distribution. That house was always built on sand. The bastion of expertise took longer to disintegrate, but it has, buffeted by wave after wave of nanoexperts. With the rise of the nanoexpert, mainstream media have lost all of their “natural” advantages, yet they still have considerable economic, political and popular clout. We must examine how they might translate this evanescent power into something which can survive the transition to a world of nanoexperts.

II.

While expertise has become a diffuse quality, located throughout the cloud of networked intelligence, the search for information has remained essentially unchanged for the past decade. Nearly everyone goes to Google (or a Google equivalent) as a first stop on a search for information. Google uses swarm intelligence to determine the “trust” value of an information source: the most “trusted” sites show up as the top hits, courtesy of Google’s PageRank algorithm. Thus, even though knowledge and understanding have become more widespread, the path toward them grows ever more concentrated. I still go to the New York Times for international news reporting, and the Sydney Morning Herald for local news. Why? These sources are familiar to me. I know what I’m going to get. That means a lot, because as the number of possible sources reaches toward infinity, I haven’t the time or the inclination to search out every possible source for news. I have come to trust the brand. In an era of infinite choice, a brand commands attention. Yet brands are being constantly eroded by the rise of the nanoexpert; the nanoexpert is persuaded by their own sensibility, not subject to the lure of a well-known brand. Although the brand may represent a powerful presence in the contemporary media environment, there is very little reason to believe this will be true a decade or even five years hence.
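
As a rough illustration of how a link graph can be boiled down into a “trust” ranking, here is a minimal power-iteration sketch in the spirit of PageRank. The sites, the link structure, and the damping factor are invented for the example; Google’s production ranking is, of course, vastly more elaborate.

```python
# A minimal power-iteration sketch of link-based ranking in the spirit of
# PageRank. The link graph and damping factor are illustrative only.
LINKS = {
    "nytimes.com":     ["smh.com.au", "nanoexpert.blog"],
    "smh.com.au":      ["nytimes.com"],
    "nanoexpert.blog": ["nytimes.com", "smh.com.au"],
    "obscure.site":    ["nanoexpert.blog"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Each page's score is the chance a random surfer lands on it:
    mostly arriving via links from other trusted pages, occasionally
    jumping anywhere at random."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

for page, score in sorted(pagerank(LINKS).items(), key=lambda kv: -kv[1]):
    print(f"{page:18s} {score:.3f}")
```

Notice what the sketch makes plain: a page’s score depends entirely on who links to it, so attention concentrates on the already well-linked – the path toward knowledge narrows even as knowledge itself spreads.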

For this reason, branded media entities need to make an accommodation with the army of nanoexperts. They have no choice but to sue for peace. If these warring parties had nothing to offer one another, this would be a pointless enterprise. But each side has something impressive to offer up in a truce: the branded entities have readers, and the nanoexperts are constantly finding, filtering and forwarding things to be read. This would seem to be a perfect match, but for one paramount issue: editorial control. A branded media outlet asserts (with reason) that the editorial controls developed over a period of years (or, in the case of the Sydney Morning Herald, centuries) form the basis of a trust relationship with its audience. To disrupt or abandon those controls might do more than dilute the brand – it could quickly destroy it. No matter how authoritative a nanoexpert might be, all nanoexpert contributions represent an assault upon editorial control, because these works have been created outside of the systems of creative production which ensure a consistent, branded product. This is the major obstacle that must be overcome before nanoexperts and branded media can work together harmoniously.

If branded media refuse to accept the ascendancy of nanoexperts, they will find themselves entirely eroded by them. This argument represents the “nuclear option”, the put-the-fear-of-God-in-you representation of facts. It might seem completely reasonable to a nanoexpert, but appears entirely suspect to the branded media, which see only increasing commercial concentration, not disintegration. For the most part, nanoexperts function outside systems of commerce; their currency is social standing. Nanoexpert economies of value are invisible to commercial entities, but that does not mean they don’t exist. If we convert to a currency of attention – again, considered highly suspect by branded media – we can represent the situation even more clearly: more and more of the audience’s attention is absorbed by nanoexpert content. (This is particularly true of audiences under 25 years old, who have grown to maturity in the era of the Web.)

The point cannot be made more plainly, nor would it do any good to soften the blow: this transition to nanoexpertise is inexorable – this is the ad-hoc behavior of the swarm of Internet users. There’s only one question of any relevance: can this ad-hoc behavior be formalized? Can the systems of production of the branded media adapt themselves to an era of “peer production” by an army of nanoexperts? If branded media refuse to formalize these systems of peer production, the peer production communities will do so – and, in fact, many already have. Sites such as Slashdot, Boing Boing, and Federated Media Publishing have grown up around the idea that the nanoexpert community has more to offer microaudiences than any branded media outlet. Each of these sites gets millions of visitors, and while they may not match the hundreds of millions of visitors to the major media portals, what they lack in volume they make up for in their multiplicity; these are successful models, and they are being copied. The systems which support them are being replicated. The means of fragmentation are multiplying beyond any possibility of control.

III.

A branded media outlet can be thought of as a network of contributors, editors and publishers, organized around the goal of gaining and maintaining audience attention. The first step toward an incorporation of peer production into this network is simply to open the gates of contribution to the army of nanoexperts. However, just because the gates to the city are open does not mean the army will wander in. They must be drawn in, seduced by something on offer. As commercial entities, branded media can offer to translate the coin of attention into real currency. This is already their function, so they will need to make no change to their business models to accommodate this new set of production relationships.

In the era of networks, joining one network to another is as simple as establishing the appropriate connections and reinforcing them with an exchange of value that weights each link appropriately. Content flows into the brand, while currency flows toward the nanoexperts. This transition is simple enough, once editorial concerns have been satisfied. The issues of editorial control are not trivial, nor should they be sublimated in the search for business opportunities; businesses have built their brands around an editorial voice, and should seek only to associate with those nanoexperts who understand and are responsive to that voice. Both sides will need to be flexible; the editorial voice must become broader without disintegrating into a common yowl, while the nanoexperts must put aside the preciousness which they have cultivated in pursuit of their expertise. Both parties surrender something they consider innate in order to benefit from the new arrangement: that’s the real nature of this truce. It may be that some are unwilling to accommodate this new state of affairs: for the branded media, it means the death of a thousand cuts; for the nanoexpert, it means remaining confined to communities where they have immense status, but little else to show for it. In both cases, they will face the competition of these hybrid entities, and against them, neither group can hope to triumph. After a settling-out period, these hybrid beasts, drawing their DNA from the best of both worlds, will own the day.

What does this hybrid organization deliver? At the moment, branded media deliver a broad range of content to a broad audience, while nanoexperts deliver highly focused content to millions of microaudiences. How do these two pieces fit together? One of the “natural” advantages of branded media organizations springs from a decades-long investment in IT infrastructure, which has historically been used to distribute information to mass audiences. Yet, surprisingly, branded media organizations know very little about the individual members of their audience. This is precisely the inverse of the situation with the nanoexpert, who knows an enormous amount about the needs and tastes of the microaudience – that is, the social networks served by their expertise. Thus, there needs to be another form of information exchange between the branded media and the nanoexpert; it isn’t just the content which needs to be syndicated through the branded outlet, but the microaudiences themselves. This is not audience aggregation, but rather, an exploration in depth of the needs of each particular audience member. From this, working in concert, the army of nanoexperts and the branded media outlet can develop tools to deliver depth content to each audience member.

This methodology favors process over product; the relation between nanoexpert, branded media, and audience must necessarily co-evolve, working toward a harmony where each provides depth information in order to improve the capabilities of the whole. (This is the essence of a network.) Audience members will take an active role in creating a “feed” which serves them alone, and, in this sense, each audience member is a nanoexpert – expert in their own tastes.
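
A minimal sketch of what such a per-reader “feed” might look like in practice: nanoexpert items carry topic weights, each reader accumulates weights over those same topics, and the feed is simply the items that score best for that reader. The item structure, the topics, and the scoring rule are all assumptions made for illustration.

```python
# A minimal sketch of a per-reader "feed": nanoexpert items are tagged by
# topic, each reader declares (or accumulates) weights over those topics,
# and the feed is just the highest-scoring items for that reader.
# Item structure, tags, and scoring rule are illustrative assumptions.
ITEMS = [
    {"title": "Restoring a 1973 Minimoog", "topics": {"synths": 1.0, "repair": 0.6}},
    {"title": "Sydney ferry timetable changes", "topics": {"sydney": 1.0, "transit": 0.8}},
    {"title": "Hand-pulled espresso at home", "topics": {"coffee": 1.0}},
]

def personal_feed(items, tastes, size=2):
    """Score each item by how well its topics overlap the reader's tastes,
    then return the best-scoring items: a feed of one, for one."""
    def score(item):
        return sum(weight * tastes.get(topic, 0.0)
                   for topic, weight in item["topics"].items())
    ranked = sorted(items, key=score, reverse=True)
    return [item["title"] for item in ranked[:size] if score(item) > 0]

# One reader's accumulated tastes -- in effect, their expertise in themselves.
tastes = {"synths": 0.9, "coffee": 0.4, "transit": 0.1}
print(personal_feed(ITEMS, tastes))
```

In a working system the reader’s weights would be refined continuously – by what they find, filter and forward – so that the feed co-evolves with its one-person audience.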

Such a system, once put into operation, makes it both possible and relatively easy to deliver commercial information of such a highly meaningful nature that it can no longer be called “advertising” in any classic sense of the word; rather, it will be considered a string of “opportunities.” These might include job offers, or investment opportunities, or experiences (travel & education), or the presentation of products. This is Google’s AdWords refined to the utmost degree, and it can only exist if all three parties to this venture – nanoexpert, branded media, and audience members – have fully invested the network with information that helps it refine and deliver just what’s needed, just when it’s wanted. The revenue generated by a successful integration of commerce with this new model of syndication will more than fuel its efforts.

When successfully implemented, such a methodology would produce an enviable, and likely unassailable, financial model, because we’re no longer talking about “reaching an audience”; instead, this hybrid media business is involved in millions of individual conversations, each of which evolves toward its own perfection. Individuals embedded at any point in this network would find it difficult to leave it, or even to resist it. This is more than the daily news, better than the best newspaper or magazine ever published; it is individual and personal, yet networked and global. This is the emerging model for factual publishing.