For the past three hundred years, the relationship between the press and the state has been straightforward: the press tries to publish, the state uses its various mechanisms to thwart those efforts. This has produced a cat-and-mouse steady-state, a balance where selection pressures kept the press tamed and the state – in many circumstances – somewhat accountable to the governed. There are, as always, exceptions.
In the last few months, the press has become hyperconnected, using that hyperconnectivity to pierce the veil of secrecy which surrounds the state; using the means available to it to hyperdistribute those secrets. The press has become hyperempowered, an actor unlike anything ever experienced before.
Wikileaks is the press, but not the press as we have known it. This is the press of the 21st century, the press that comes after we’re all connected. Suddenly, all of the friendliest computers have become the deadliest weapons, and we are fenced in, encircled by threats – which are also opportunities.
This threat is two sided, Janus-faced. The state finds its ability to maintain the smooth functioning of power short-circuited by the exposure of its secrets. That is a fundamental, existential threat. In the same moment, the press recognizes that its ability to act has been constrained at every point: servers get shut down, domain names fail to resolve, bank accounts freeze. These are the new selection pressures on both sides, a sudden quickening of culture’s two-step. And, of course, it does not end there.
The state has now realized the full cost of digitization, the price of bits. Just as the recording industry learned a decade ago, it will now have to function within an ecology which – like it or not – has an absolutely fluid quality. Information flow is corrosive to institutions, whether that’s a record label or a state ministry. To function in a hyperconnected world, states must hyperconnect, but every point of connection becomes a gap through which the state’s power leaks away.
Meanwhile, the press has come up against the ugly reality of its own vulnerability. It finds itself situated within an entirely commercial ecology, all the way down to the wires used to carry its signals. If there’s anything the last week has taught us, it’s that the ability of the press to act must never be contingent upon the power of the state, or any organization dependent upon the good graces of the state.
Both sides are trapped, each with a knife to the other’s throat. Is there a way to back down from this DEFCON 1-like threat level? The new press cannot be wished out of existence. Even if the Internet disappeared tomorrow, what we have already learned about how to communicate with one another will never be forgotten. It’s that shared social learning – hypermimesis – which presents the continued existential threat to the state. The state is now furiously trying to develop a response in kind, with a growing awareness that any response which extends its own connectivity must necessarily drain it of power.
There is already a movement underway within the state to shut down the holes, close the gaps, and carry on as before. But to the degree the state disconnects, it drifts away from synchronization with the real. The only tenable possibility is a ‘forward escape’, an embrace of that which seems destined to destroy it. This new form of state power – ‘hyperdemocracy’ – will be diffuse, decentralized, and ubiquitous: darknet as a model for governance.
In the interregnum, the press must reinvent its technological base as comprehensively as Gutenberg or Berners-Lee. Just as the legal strangulation of Napster laid the groundwork for Gnutella, every point of failure revealed in the state attack against Wikileaks creates a blueprint for the press which can succeed where it failed. We need networks that lie outside of and perhaps even in opposition to commercial interest, beyond the reach of the state. We need resilient Internet services which cannot be arbitrarily revoked. We need a transaction system that is invisible, instantaneous and convertible upon demand. Our freedom mandates it.
Some will argue that these represent the perfect toolkit for terrorism, for lawlessness and anarchy. Some are willing to sacrifice liberty for security, ending with neither. Although nostalgic and tempting, this argument will not hold against the tenor of these times. These systems will be invented and hyperdistributed even if the state attempts to enforce a tighter grip over its networks. Julian Assange, the most famous man in the world, has become the poster boy, the Che for a networked generation. Script kiddies everywhere now have a role model. Like it or not, they will create these systems, they will share what they’ve learned, they will build the apparatus that makes the state as we have known it increasingly ineffectual and irrelevant. Nothing can be done about that. This has already happened.
We face a choice. This is the fork, in both the old and new senses of the word. The culture we grew up with has suddenly shown its age, its incapacity, its inflexibility. That’s scary, because there is nothing yet to replace it. That job is left to us. We can see what has broken, and how it should be fixed. We can build new systems of human relations which depend not on secrecy but on connectivity. We can share knowledge to develop the blueprint for our hyperconnected, hyperempowered future. A week ago such an act would have been bootless utopianism. Now it’s just facing facts.
Back in 1978 – when I was just fifteen – I begged my parents to let me enroll in a course at the local community college (the equivalent of TAFE) so that I could take ‘Data Processing with RPG II’. I wrote my first computer program in RPG II. I typed that program onto a series of punched cards, one statement per card. Once I’d typed the deck of cards which comprised my program, I dropped it off at the college’s data processing center, where it went into the batch queue. Twenty-four hours later, you returned to collect your deck of punched cards, along with a long stream of ‘green-bar’ paper listing the results (or errors) of your program. If you’d made a mistake on one of the cards – a spelling error, or a syntactical no-no – you’d repeat the process, as needed, until you got it right.
Woohoo. Sign me up.
From around 1980 – when I went off to MIT to study computer science – computers have been my constant companions. I’ve owned cheap ones (Commodore’s VIC-20), expensive ones (one of the first Macintosh IIs to roll off the assembly line), tiny ones (iPhone), and big ones (SPARCstation 3). I have never owned a computer that I have not written code for. In my mind, the computer and the act of programming are inseparable.
Programming languages are something one acquires, like computers; but unlike computers, you don’t put languages in the bin – mostly. In preparation for this talk, I made up a list of all the programming languages I’ve learned over the years, beginning with RPG II – which I’ve since forgotten. BASIC came next, and I thought it a wonderful, useful, incredible language, my true starting point.
I spent many years programming in assembly language on a variety of systems – CP/M, MS-DOS, embedded microcontrollers. I bought a cheap C compiler in 1982, a copy of Kernighan & Ritchie, learned pointer arithmetic, and crashed my computer repeatedly in the process. Now that was fun.
I did take up C++ when it was still new, when Stroustrup was still implementing features of the language. (Oh, wait, he’s still doing that, isn’t he?) Buried myself in class designs and object hierarchies and delegation models. I can probably still program in C++. If someone were to threaten me with a taser.
In the 1990s along came the Web and Linux, the open computing platform. Suddenly a language was more useful for its ability to communicate with other entities than for its raw processing power.
I sat down at the 3rd International World Wide Web conference with a few folks from Sun Microsystems, who were touting this new, portable programming language they’d invented, which they called ‘Oak’. I wonder whatever became of that?
Just a few years ago I decided that I needed to learn Python. I don’t remember the reason. I don’t even know that there was a reason. Python was there, and that was enough.
It didn’t take long to learn – Python isn’t a difficult language – but for just that little bit of learning I got so much power, well – I don’t have to explain it to you. You understand. It’s a bit like crack, Python is. Once you’ve had that first hit, you’re never quite the same again.
I put Python on everything: on my Macs, on my servers, on my mobile – everything I owned got a Python install. I didn’t know exactly what I’d do with all this Python, but somehow that seemed unimportant. Just get it everywhere. You’ll figure something out.
In some ways discovering Python was very frustrating. By my early 40s I’d basically stopped programming; not because I hated coding, but because my life had turned in other directions. I teach, I research, I lecture, I write, I do a little TV on the side. None of that has anything to do with coding. I had the best tool for a grand bit of hackery, and no time to do anything with it, nor any real reason to drive me to make time.
My biggest Python project (before last week) was a simple script to create a video used in the opening of my 2008 WebDirections South keynote. I wanted to show the ‘cloud’ of Twitter followers I had started to accumulate – around 1500. Not just a ‘wall’ of different faces, but a film, an animation, where each person I followed on Twitter had their moment in the sun. The script retrieved the list of people I follow, then iterated through this list, getting profile information for each individual, and extracting from that the URL for the user’s avatar, which it then retrieved. Using the Python Imaging Library, it then embossed the user’s handle onto the image. After that it was a basic drag-and-drop operation into Adobe Premiere. Presto! – I had a movie. Thank you, Python.
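The embossing step can be sketched with the Python Imaging Library. To be clear, this is a hypothetical reconstruction, not the original script: the function name, filenames and text placement are my illustration, and the Twitter API calls that fetched the follower list and avatar URLs are omitted entirely (the API the script spoke has long since been retired).

```python
from PIL import Image, ImageDraw

def emboss_handle(avatar_path, handle, out_path):
    """Overlay a Twitter handle onto an avatar image.

    A sketch of just one step of the script described above; the
    original also fetched the follower list and avatar URLs from
    the Twitter API before reaching this point.
    """
    img = Image.open(avatar_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    label = "@" + handle
    # Shadowed text near the bottom edge, legible on any avatar
    draw.text((5, h - 15), label, fill="black")
    draw.text((4, h - 16), label, fill="white")
    img.save(out_path)
```

Run once per follower, and you have a directory full of labelled frames ready for the drag into Premiere.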
For half a decade I’ve been thinking about social networks. This little film project allowed me to tie my research together with my desire to have a pleasant excuse to hack. When I sat back and watched the film I’d algorithmically pieced together, I began to get a deeper sense of the value of my ‘social graph’. That’s a new phrase, and it means the set of human relationships we each carry with us. Until just a few years ago, these relationships lived wholly between our ears; we might augment our memories with an address book or a Rolodex, but these paper trails were only ever a reflection of our embodied relationships. Ever since Friendster, these relationships have exteriorized, leaped out of our heads (like Athena from Zeus) and crawled into our computers.
This makes them both intimately familiar and eerily pluripotent. We are wired from birth to connect with one another: to share what we know, to listen to what others say. This is what we do, a knowledge so essential, so foundational, it never needs to be taught. When this essential feature of being human gets accelerated by the speed of the computer, then amplified by a global network that now connects about five billion people (counting both mobile and Internet), all sorts of unexpected things begin to happen. The entire landscape of human knowledge – how we come to know something, how we come to share what we know – has been utterly transformed over the last decade. Were we to find a convenient TARDIS and take ourselves back to the world of 1999, it would be almost unrecognizable. The media landscape was as it always had been, though the print component had hesitantly migrated onto the Web. To learn about the world around us, we all looked up – to the ABC, to the New York Times, to the BBC World Service.
Then the world exploded.
We don’t look up anymore. We look around – we look to one another – to learn what’s going on. Sometimes we share what we hear on the ABC or the Times or the World Service. But what’s important is that we share it. There is no up, there is no centre. There is only a vast sea of hyperconnected human nodes.
The most alluring and seductive of all of the hyperconnecting services is unquestionably Facebook. In three years it has grown from just fifteen million to nearly half a billion users. It might be the most visited website in the world, just now surpassing Google. Facebook has become the nexus, the connecting point for one person in every fourteen on Earth. Facebook is the place where the social graph has come to life, where the potency of sharing and listening can be explored in depth. But it is a life lived out in public. Facebook is not really geared toward privacy, toward the intimacies that we expect as a necessary quality of our embodied relationships. Facebook founder Mark Zuckerberg is on the record talking about ‘the end of privacy’, and how he sees that as a side-effect of Facebook’s mission ‘to give people the power to share, and make the world more open and connected’.
A world more open could be a good thing, but only if the openness is wholly multilateral. We don’t want to end up in a world where our secrets as individuals have been revealed, while those who hold the concentrations of capital and power, and their supporting organizations and networks, manage to remain obscure and occult. This kind of ‘privacy asymmetry’ will only work against the individuals who have surrendered their privacy.
This is precisely where we seem to be headed. Facebook wants us to connect and share and reveal, but – particularly around privacy, user confidentiality, and the way they put that vast amount of user-generated data to work for themselves and their advertisers – Facebook’s business practices are entirely opaque. Openness must be met with openness, sharing with sharing. Anything else creates a situation where one side is – quite literally – holding all the cards.
I have been pondering the power of social networks for six years, so I am peculiarly conscious of the price you pay for participation in someone else’s network. I’ve come to realize that your social graph is your most important possession. In a very real way, your social graph is who you are. Until a few years ago we never gave this much thought, because we carried our graphs with us everywhere, inside our heads. But now that these graphs live elsewhere – under the control of someone else – we’re confronted with a dilemma: we want to turbocharge our social graphs, but we don’t want anyone else having access to something so fundamental and intimate. If the CIA and NSA use social graphs to find and combat terrorists, and if smoking, obesity and divorce spread through social graphs, why would we hand something so personal and so potent to anyone else? What kind of value would we receive for surrendering our crown jewels?
By the end of last month it was clear that Facebook had become dangerous. Something had to be done. People had to be warned. In a Melbourne hotel room, I drafted a manifesto. Here’s how I closed it:
There is only one solution. We must take the thing which is inalienable from us – our presence – and remove it from those who would use that presence for their own gain. We must move, migrate, become digital refugees, fleeing a regime which seeks only its own best interests, to the detriment of our own… We may be the first, but we will not be the last. We must map the harbors, clear the woods, and make virgin lands inviting enough that it will be an easy decision for those who will come to join us in this new country, where freedom goes hand-in-hand with presence, where privacy is not a dirty word, and where the future knows no bounds.
So I quit. But I didn’t do it suddenly or rashly. I’d been using Facebook to share media – links and articles and videos – so I set up a Posterous account, where I could do exactly the same kind of sharing. Over the course of two weeks, I posted a series of Facebook updates, telling everyone in my social graph that I’d be quitting Facebook – beginning by posting that manifesto – and giving them the link to my Posterous account. I did this on five separate occasions in the week leading up to my account deletion.
The responses were interesting. Most of the folks in my social graph who bothered to respond were in various stages of mourning. My own aunt – whom I’ve been corresponding with via email for twenty years – wrote how much she’d miss me. Another individual expressed regret at my leave-taking, given that we’d only just reconnected after many years. “But,” I responded, “I’ve shown you how we can stay in touch. Just follow the link.” “That’s too hard,” he replied, “I like that Facebook gives me everyone in one place. I don’t have to remember to check here for you, or over there for someone else. This is just easy.”
I can’t fault his logic: Facebook is just like the comfy chair. It’s a pleasant place to be – even when surrounded by Inquisitors. Facebook users are simply so grateful that such an amazing service is on offer – seemingly for free – that they haven’t thought through the price of their participation. And unless something else comes along that’s as powerful and easy as Facebook, things will go on just as they are. Unless a disruptive innovation upends all the apple carts.
This is when I had a brainwave.
II: And Now For Something Completely Different
What is the social graph? At its essence, it is a set of connections, connections which define certain flows of information. These connections are both figurative and literal. If I say that I am connected to someone, I mean that we have some sort of relationship. But it also means that we have established protocols for communication, channels that can be used to send messages back and forth. For the last three hundred years this has been embodied in the ‘visiting card’, presented at all occasions when there is an invitation to connect. The ‘visiting card’ evolved into the ‘business card’ we share freely and promiscuously when there’s money to be made, or a connection to be had. The business card of 2010 must provide four significant pieces of information: a) the name of the caller; b) the address of the caller; c) the telephone number(s) of the caller; and d) the email address of the caller. Other information can be provided on the card – and often is – but if a card is missing any of these four essentials, it is incomplete. Each item represents a separate sphere of connectivity: the name is the necessary prerequisite for social connectivity; the address for postal connectivity; the telephone number and email addresses are self-explanatory. Each entry has a one-to-one correspondence with some form of connectivity. When we exchange business cards, we are providing the information necessary to establish connectivity.
We now have digital versions of the business card; we hand out vCards, or provide QR Codes that can be scanned and translated into a pointer to a vCard. Yet what we do with these digital versions of the business card has not changed: we stuff them into ‘address books’, or into the contact lists on our mobiles. If we have the right tools, we can upload them to Plaxo or LinkedIn. There they sit, static and essentially useless. A database with no applications.
That’s kind of weird, isn’t it? I mean, here we are, each of us walking around with a few hundred contacts on our mobiles, and essentially doing nothing with them unless we need to make a phone call or send an email. It doesn’t make sense. Somehow we’ve lost sight of the fact that the digital item is active in a way the physical object is not. Facebook understands this. Facebook takes your ‘calling card’ – the profile that you loaded up with your personal information – and makes it the foundation of your social graph. Everyone connects to your profile (which is you), and these connections become the cornerstone of fully bilateral sharing relationships. Anyone connected to you can send you a message, or initiate a chat, or look at the photos you uploaded of your holiday in the fleshpots of Bangkok. That one connection becomes the basis for a whole range of opportunities to share media – text, images, video, links, music, events, etc. – and equally an opportunity to listen to what others are sharing. That’s what Facebook is, really, a giant, centralized switchboard which connects its members to one another. That’s all any social network is.
It’s easy – really easy – to connect together. We have so many ways to do so, through so many mechanisms, that really we’re drowning in choice, rather than a poverty of options. Instead of a monolithic solution, the Internet, like nature, tends to favor diversity and heterogeneity. Diversity creates the space for play and exploration; a tolerance for heterogeneity allows that there is no right answer, no one way to play the game. Is it possible to design an architecture for human connectivity which favors diversity and heterogeneity?
For the past few weeks those of you following me on Twitter have seen me tweet about ‘Project Thunderware’, which was the silliest code-name I could think up for a project that is actually entirely serious. The real name is Plexus. Plexus is a design for a second-generation social network. It is personal – everyone runs their own Plexus. It is portable – written entirely in Python, so you can drop it onto a USB key (if you want) and take it with you anywhere you can get Python running. It is private – no one else has access to your Plexus, unless you want them to. It’s completely open and completely modular. Plexus is designed to take the passive social graph we’ve all got tucked away in our various devices and translate it into something active, vital, and essential.
There are three components within Plexus. First and most important is the social graph, a database of connections known as the ‘Plex’. Each of these connections, like a business card, comes with a list of connection points. These connection points can be outgoing – ‘this is how I will speak to you’ – or incoming – ‘this is how I will listen to you’. They can be unilateral or bilateral. They can be based on standard protocols – such as SMTP or XMPP – or on the APIs of the rapidly-multiplying set of social services already available in the wilds of the Internet, or they can be something entirely home-grown and home-brewed. They can be wide open, or encrypted with GPG. Everything is negotiable. That’s the point: something’s in the Plex because there’s an active connection and relationship between two parties.
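To make that concrete, here is one hypothetical shape a Plex entry could take. The field names are purely illustrative – a sketch, not the actual Plexus schema:

```python
# A hypothetical Plex entry; the field names are illustrative,
# not the actual Plexus schema.
entry = {
    "name": "Anthony",
    "points": [
        {
            "direction": "out",   # 'this is how I will speak to you'
            "protocol": "smtp",   # or 'xmpp', 'twitter', 'rss', home-brewed...
            "address": "anthony@example.org",
            "encrypted": False,   # True would mean GPG on this channel
        },
    ],
}

# The Plex itself is just a database of such entries, keyed by contact.
plex = {entry["name"]: entry}
```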
The Plex is only a database. To bring that database to life, two other components are required. The first of these is the ‘Sharer’. The Sharer, as the name implies, makes sure that something to be shared – be it a string of text, or a link, or a video, or a blog post, or whatever – ends up going out over the negotiated channels. The Sharer is built out of a set of Python modules, with each particular sharing service handled by its own module. This means that there is no limit or artificial constraint on what kinds of services Plexus can share with.
Conversely, the third component, the Listener, monitors all of the negotiated channels for any activity by any of the connections in the Plex. When the Listener hears something, it sends that to the user – to be displayed or saved or ignored according to the needs of the moment. Like the Sharer, the Listener is also a set of Python modules, with each monitored service handled by its own module. The Listener should be able to listen to anything that has a clearly defined interface.
When Plexus starts up, it reads through the Plex, instancing the appropriate Sharer and Listener objects on a connection-by-connection basis. Everything after initialization is event-driven: the Plexus user shares something, or the Listener hears something and offers that to the Plexus user.
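That startup sequence might look something like the following sketch. The module names, factory tables and Plex field names are all my own illustration, with stubbed-out service modules standing in for the real ones:

```python
class SMTPSharer:
    """Stub sharing module; a real one would speak smtplib here."""
    def __init__(self, address):
        self.address = address
    def share(self, item):
        print(f"mail {item!r} to {self.address}")

class RSSListener:
    """Stub listening module; a real one would poll the feed here."""
    def __init__(self, url, on_heard):
        self.url, self.on_heard = url, on_heard

# One factory per service; supporting a new service means adding a module.
SHARERS = {"smtp": lambda p: SMTPSharer(p["address"])}
LISTENERS = {"rss": lambda p, cb: RSSListener(p["address"], cb)}

def boot(plex, on_heard):
    """Read through the Plex, instancing Sharers and Listeners
    on a connection-by-connection basis."""
    sharers, listeners = [], []
    for entry in plex.values():
        for point in entry["points"]:
            if point["direction"] == "out" and point["protocol"] in SHARERS:
                sharers.append(SHARERS[point["protocol"]](point))
            elif point["direction"] == "in" and point["protocol"] in LISTENERS:
                listeners.append(LISTENERS[point["protocol"]](point, on_heard))
    return sharers, listeners

def share(sharers, item):
    """Event: the user shares something; fan it out over every channel."""
    for s in sharers:
        s.share(item)
```

After `boot()` returns, everything waits on events: a `share()` call fans the item out, while each Listener hands whatever it hears to the `on_heard` callback.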
That’s it. That’s the whole of the design. As always, the devil is in the details, but the essential architecture will probably remain unchanged. Plexus creates your own, self-managed social network: entirely self-contained, yet acting as a connected node within a broader network. Because Plexus functions as plumbing – wiring together social services that haven’t been designed to talk to one another – it performs a service that is badly needed, filling a growing void. Plexus is your own plumbing, under your own control.
Let’s talk through a use case. I give a lot of lectures, and I make sure to put my contact details – email, blog and Twitter – on my slides. I meet two people at a lecture – we’ll call one of them Nick, and the other one Anthony. (Those names just came to me.) Nick is an affable person; he just wants to be able to follow all of my output as I put it out. All he needs is a list of the dozen-or-so public contact points where I present myself. That’d be my name, the six or seven blogs I write, my Twitter feed, my Posterous, my YouTube account, my Viddler account, and so forth. He gets that nugget of data off markpesce.com/markpesce.plx – it’s basically a nice little bit of JSON (I don’t care for XML, but you can microformat to your heart’s content) that he can drop directly into Plexus, where it goes into the Plex. As the Plex digests it, this nugget instances the necessary Listeners. Now, whenever I say anything – anywhere – Nick knows about it. Which makes Nick happy.
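I haven’t pinned down the .plx format here, but as a bit of JSON, Nick’s nugget might look like this – the field names and addresses are a guess for illustration, not a fixed format:

```python
import json

# A guess at the shape of a public .plx nugget; field names and
# addresses are illustrative, not a fixed format.
PLX = """
{
  "name": "Mark Pesce",
  "points": [
    {"direction": "in", "protocol": "twitter", "address": "mpesce"},
    {"direction": "in", "protocol": "rss", "address": "http://example.org/feed"}
  ]
}
"""

nugget = json.loads(PLX)
# Dropping the nugget into the Plex instances one Listener per point,
# so Nick hears everything I say, everywhere.
```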
Anthony is a different story. He’s a l33t user, and doesn’t want to be forced to rub shoulders with the hoi polloi at any of the normal social web services. Instead, Anthony wants to get a personally-addressed email from me every time I have something to share. Apparently he’s developed some excellent email filtering and management tools, so that even if I get quite chatty, it won’t clog up his inbox. So, he negotiates with me – Plexus-to-Plexus – and goes into my Plex as a contact, so that when I instance my Sharers, one is specifically set up to send him anything I share via SMTP. He doesn’t have to do anything to his Plexus, because he’s not using his Plexus to listen to me.
Use cases are all the more meaningful when they’re backed up by working code. Hence, I went back to the code mines last weekend – with a spring in my step and a song in my heart – and created a very, very embryonic version of Plexus. In just a little over two days, I created Sharer modules for Twitter, Posterous, Tumblr and SMTP, and Listener modules for Twitter and RSS. I reckoned that would be sufficient for the purposes of a demonstration – though if I’d had more time I could easily have wired in a few hundred other web social services.
There you go. That’s Plexus. The project is open source – after all, why would you trust a social network when you can’t inspect the code?
III: How Not To Be Seen
Plexus is grass-roots, bottom-up, and radically decentralized. That means the big boys will probably try to ignore it. Social media isn’t about the people, after all. It’s about humungous accumulations of capital going hand-in-hand with impossibly large collections of data, and, somewhere in the background, all the spooks, reading the paper trail. Social media is an instrument of control, the latest and the greatest. Sit still, read your feed, and comply.
But what if we refuse to comply? Is that even an option? Is it possible to be disconnected and influential? That’s the Faustian bargain being offered to us: join with the collective and you will be heard. And managed. And herded. Or suit yourself, and weep and gnash your teeth in the outer darkness. But in that Interzone, outside the smooth functioning of power, what happens when we connect there?
Reflect back on March of 2000. Napster, the centralized filesharing network, was being strangled by court order. A different crew created a decentralized filesharing tool, known as Gnutella, releasing both the tool and its source code to the world on March 14th. When AOL/TimeWarner – parent company of the folks who wrote Gnutella – found out about it and put a stop to the release, it was too late. It couldn’t be recalled. The bomb couldn’t be un-invented. The music industry is more authentic than it was a decade ago, more open to innovation, to outsiders, to diversity and heterogeneity. All because a few hackers decided to change the way people share their music.
History never repeats, but it does rhyme. We share everything now; we worry that we overshare. Now it’s time to take our sharing to the next level. We need a ‘social 2.0’, something that reflects what we’ve learned in the past half-dozen years. That’s not just a slew of new services. That’s an attitude change. Consider: the wiki was invented in 1995. It’s Precambrian web tech. But we didn’t start using wikis in earnest until after 2001, when Wikipedia began to take off. Why? It took us a while – and a lot of interactions – to understand how to use the tools on offer. Social technology is uniquely potent – so much so that we’ll be learning its strengths and weaknesses for a decade or more. The time has come to step out, seize the means of communication, and make them our own.
I reckon you can now understand why Python was such an obvious choice for Plexus. In no other language, with no other community, is the idea of sharing so much at the core. There is a Python module or code sample to do nearly every task under the sun, precisely because sharing is a core ethic of the Python community. Python is the language of the Web because it lends itself to the same sharing that the Web fosters. Python is the language of Plexus because Plexus needs to inherit all of Python’s best qualities, needs to be straightforward and open and flexible and extensible and easily shared. I need to be able to drop a Plexus module into an email and know, at the other end, that it will just work. ‘Take this,’ I’ll say, ‘and feed it to your Plexus.’ You’ll do that, and suddenly you’ll find that we have a secure, obscure and nearly invisible means of sharing – a darknet, how not to be seen – that can be as private and personal or open and public as we agree it should be. And you can turn around, think up something else, and mail that to me, or to someone else, or to the world.
The social web must be a social project, an opportunity to embody exactly what we’re trying to create as we are creating it. It’s the ultimate dogfooding. Success requires a willing surrender that rejoices in cooperation.
So here it is. This is the best I can do. It may be the best that I will ever do. I place it before you this morning, a humble offering, written in a language that I barely know, but which I’ve used to express my highest aspirations. Plexus is naked, newborn, and needs help. It will only benefit from your input, comments, recommendations, pointers and critiques. It is an idea that can only grow and mature as it is shared. That’s what this is all about. It always has been.
The slides for this presentation can be found here.
We live in the age of networks. Wherever we are, five billion of us are continuously and ubiquitously connected. That’s everyone over the age of twelve who earns more than about two dollars a day. The network has us all plugged into it. Yet this is only the most recent, and most explicit, network. Networks are far older than this modern incarnation; they are the foundation of how we think. That’s true at the most concrete level: our nervous system is a vast neural network. It’s also true at a more abstract level: our thinking is a network of connections and associations. This is necessarily reflected in the way we write.
I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982. Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts. Like it or not, we implicitly reference other texts with every word we write. It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts. It’s the secret to our success. Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth. He never got it built. But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.
As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed qualities as a medium which had hitherto been unsuspected. While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure. Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone. Do we answer? Do we click and follow? A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost. The linear text is constantly weighed down with a secondary, ‘centrifugal’ force, trying to tear the reader away from the inertia of the text, and on into another space. The more heavily linked a particular hypertext document is, the greater this pressure.
Consider two different documents that might be served up in a Web browser. One of them is an article from the New York Times Magazine. It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links. Many of these links point back to other New York Times articles. This article stands alone. It is a hyperdocument, but it has not embraced the capabilities of the medium. It has not been seduced. It is a spinster, of sorts, confident in its purity and haughty in its isolation. This article is hardly alone. Nearly all articles I could point to from any professional news source display the same characteristics of separateness and resistance to connecting with the medium they employ. We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized. Every link presents an escape route, and a potential loss of income. Hence, links are kept to a minimum, the losses staunched. Disappointingly, this has become a model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium. The tone has been set.
On the other hand, consider an average article in Wikipedia. It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links. Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they're unfamiliar with, or explore some tangential bit of knowledge, but there also will be plenty of links that face out, into the rest of the Web. This is a hyperdocument which has embraced the nature of the medium, which is not afraid of luring readers away under the pressure of linkage. Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention. Without this competition of intentions, Wikipedia is itself an example of another variety of purity, the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.
Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug. Landing somewhere with a paucity of links does not constrain your ability to move non-linearly. If nothing else, the browser's 'Back' button continually offers that opportunity, as do all of your bookmarks, the links that lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site. In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads. In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it. This is the fundamental tension of all hypertext, but here heightened and amplified because it is resisted and forbidden. It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.
This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents. It is most obvious in the way we now absorb news. Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper. Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on. We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way. The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole. Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper 'vehicle'.
The newspaper as we have known it has been shredded. This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext. We are the ones who feel the lure of the link; no machine can do that. Newspapers made the brave decision to situate themselves as islands within a sea of hypertext. Though they might believe themselves singular, they are not the only islands in the sea. And we all have boats. That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.
The lure of the link has a two-fold effect on our behavior. With its centrifugal force, it is constantly pulling us away from wherever we are. It also presents us with an opportunity cost. When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment. If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it. Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this? Does my need and interest outweigh all of the other demands upon my attention? Can I focus?
In most circumstances, we will decline the challenge. Whatever it is, it is not salient enough, not alluring enough. It is not so much that we fear commitment as we feel the pressing weight of our other commitments. We have other places to spend our limited attention. This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”. It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.
The emergence of the 'tl;dr' phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span. Attention spans are not shortening: ten-year-olds will still drop everything to read a nine-hundred-page fantasy novel for eight days. Instead, attention has entered an era of hypercompetitive development. Twenty years ago only a few media clamored for our attention. Now, everything from video games to chatroulette to real-time Twitter feeds to text messages demands our attention. Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, all figuring into the calculation we make when we decide to go all in or hold back.
The most obvious effect of this hypercompetitive development of attention is the shortening of the text. Under the tyranny of 'tl;dr' three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment. More and more, our diet of text comes in these 'bite-sized' chunks. Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything. The truth is more complex. Our diet will continue to consist of a mixture of short and long-form texts. In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form. It is digestible. But it need not be vacuous. Countless specialty blogs deliver highly-concentrated texts to audiences who need no introduction to the subject material. They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit. Here, the phenomenon of 'tl;dr' reveals its Achilles' heel: the shorter the text, the less invested you are. You give way more easily to centrifugal force. You are more likely to navigate away.
There is a cost incurred both for substance and the lack thereof. Such are the dilemmas of hypertext.
II: Schwarzschild Radius
It appears inarguable that 2010 is the Year of the Electronic Book. The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market. Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package. Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world. The electronic book is an inevitability.
At this point a question needs to be asked: what’s so electronic about an electronic book? If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar. Too familiar. This is not an electronic book. This is ‘publishing in light’. I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning. An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters. It doesn’t even necessarily begin with that translation. Instead, first consider the text qua text. What is it? Who is it speaking to? What is it speaking about?
These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light. That act of murder would give us less than we had before, because the published-in-light texts essentially disavow the medium within which they are situated. They are less useful than typeset texts, purposely stripped of their utility to be shoehorned into a new medium. This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader. Nor does it make the electronic book an intrinsically alluring object. That's an interesting point to consider, because hypertext is intrinsically alluring. The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who had any relationship to the text. If an electronic book does not offer a new relationship to the text, then what precisely is the point? Portability? Ubiquity? These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring. This is the visible difference between a book that has been printed in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is impossibly alluring. At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.
Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium. But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe. Today's publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium. For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.
What does the electronic book look like? Does it differ at all from the hyperdocuments we are familiar with today? In fifteen years of design experimentation, we've learned a lot of ways to present, abstract and play with text. All of these are immediately applicable to the electronic book. The electronic book should represent the best that 2010 has to offer and move forward from that point into regions unexplored. The printed volume took nearly fifty years to evolve into its familiar hand-sized editions. Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book. We shouldn't try to constrain our idea of what an electronic book can be based upon what the book has been. Over the next few years, our innovations will surprise us. We won't really know what the electronic book looks like until we've had plenty of time to play with it.
The electronic book will not be immune from the centrifugal force which is inherent to the medium. Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument. Yet we come to books with a sense of commitment. We want to finish them. But what, exactly, do we want to finish? The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does. So does an electronic book have a beginning and an end? Or is it simply a densely clustered set of texts with a well-defined path traversing them? From the vantage point of 2010 this may seem like a faintly ridiculous question. I doubt that will be the case in 2020, when perhaps half of our new books are electronic books. The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book. There is no way that the electronic book can remain apart, indifferent and pure. It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader. More of a gradient than a boundary.
It remains unclear how any such construction can constitute an economically successful entity. Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe. The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen. This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear. Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.
We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books. (For those not familiar with astrophysics, the Schwarzschild Radius is the boundary to a black hole. Once you’re on the wrong side you’re doomed to fall all the way in.) On one side – our side – things look much as they do today. Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical. On the other side, electronic books rapidly become almost completely unrecognizable. It’s not just the financial model which disintegrates. As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform. The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words. Each of these fragments points directly at its antecedent and descendant, or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each. The primacy of the single narrative can not withstand the centrifugal force of the medium, any more than the newspaper or the magazine could. Texts will present themselves as intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both. This will completely confound our expectations of linearity in the text.
We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward, a fault more of our nervous systems than our browsers. We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on. This is the greatest threat to the book, that it simply expands beyond our ability to focus upon it. Our consciousness can entertain a universe of thought, but it can not entertain the entire universe at once. Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it. With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason, significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.
What ever happened to the book? It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever. This is not an ending, any more than birth is an ending. But it is a transition, at least as profound and comprehensive as the invention of moveable type. It's our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book. Transitions are chaotic, but they are also fecund. The seeds of the new grow in the humus of the old. (And if it all seems sudden and sinister, I'll simply note that Nietzsche said that a new era nearly always looks demonic to the age it obsolesces.)
III: Finnegans Wiki
So what of Aristotle? What does this mean for the narrative? It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts. But what about stories? From time out of mind we have listened to stories told by the campfire. The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale. For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.
Will we lose all of this? Can narratives stand up against the centrifugal forces of hypertext? Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form. The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind. There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot. A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity. Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force. We occasionally stay up all night reading a book that we ‘couldn’t put down’, precisely because of this momentum. It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.
This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books. Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet. How can any text hope to stand against that?
And yet, some do. Children unplugged to read each of the increasingly-lengthy Harry Potter novels, as teenagers did for the Twilight series. Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination. None of this is high literature, but it is literature capable of resisting all our alluring distractions. This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc. We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad. That is one direction, a direction literary publishers will pursue, because that’s where the money lies.
There are two other paths open for literature, nearly diametrically opposed. The first was taken by JRR Tolkien in The Lord of the Rings. Although hugely popular, the three-book series has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating. Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it. And although readers do finish the book, in a very real sense they do not leave that universe. The fantasy genre, which Tolkien single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations. Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books. Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy. Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers. This is another direction for the book. While every author will not be a Tolkien, a few authors will work hard to create a universe so potent and broad that readers will be tempted to inhabit it. (Some argue that this is the secret of JK Rowling’s success.)
Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext. There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures. But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas. The book was written and published before the digital computer had been invented, yet even features an innovation which is reminiscent of hypertext. That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power. It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite. The text is overloaded with meaning, so much so that the mind can’t take it all in. Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant. But there is another possibility. In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed. In this, Finnegans Wake could be seen as a type of science fiction, not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works) but rather a text that pointed the way to what all texts would become, performance by example. As texts become electronic, as they melt and dissolve and link together densely, meaning multiplies exponentially. 
Every sentence, and every word in every sentence, can send you flying in almost any direction. The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.
It has been said that all of human culture could be reconstituted from Finnegans Wake. As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture. Everything will be there, all strung together. And that’s what happened to the book.
We live in a time of wonders, and, more often than not, remain oblivious to them until they fail catastrophically. On the 19th of October, 1999 we saw such a failure. After years of preparation, on that day the web-accessible version of Encyclopedia Britannica went on-line. The online version of Britannica contained the complete, unexpurgated content of the many-volume print edition, and it was freely available, at no cost to its users.
I was not the only person who dropped by on the 19th to sample Britannica’s wares. Several million others joined me – all at once. The Encyclopedia’s few servers suddenly succumbed to the overload of traffic – the servers crashed, the network connections crashed, everything crashed. When the folks at Britannica conducted a forensic analysis of the failure, they learned something shocking: the site had crashed because, within its first hours, it had attracted nearly fifty million visitors.
The Web had never seen anything like that before. Yes, there were search engines such as Yahoo! and AltaVista (and even Google), but destination websites never attracted that kind of traffic. Britannica, it seemed, had tapped into a long-standing desire for high-quality factual information. As the gold-standard reference work in the English language, Britannica needed no advertising to bring traffic to its web servers – all it need do was open its doors. Suddenly, everyone doing research, or writing a paper, or just plain interested in learning more about something tried to force themselves through Britannica’s too narrow doorway.
Encyclopedia Britannica ordered some more servers, and installed a bigger pipe to the Internet, and within a few weeks was back in business. Immediately Britannica became one of the most-trafficked sites on the Web, as people came through in search of factual certainty. Yet for all of that traffic, Britannica somehow managed to lose money.
The specifics of this elude my understanding. The economics of the Web are very simple: eyeballs equal money. The more eyeballs you have, the more money you earn. That's as true for Google as for Britannica. Yet, somehow, despite having one of the busiest websites in the world, Britannica lost money. For that reason, just a few months after it freely opened its doors to the public, Britannica hid itself behind a "paywall", asking seven dollars a month as a fee to access its inner riches. Immediately, traffic to Britannica dropped to perhaps a hundredth of its former numbers. Britannica did not convert many of its visitors to paying customers: there may be a strong desire for factual information, but even so, most people did not consider it worth paying for. Instead, individuals continued to search for a freely available, high quality source of factual information.
Into this vacuum Wikipedia was born. The encyclopedia that anyone can edit has always been freely available, and, because of its use of the Creative Commons license, can be freely copied. Wikipedia was the modern birth of “crowdsourcing”, the idea that vast numbers of anonymous individuals can labor together (at a distance) on a common project. Wikipedia’s openness in every respect – transparent edits, transparent governance, transparent goals – encouraged participation. People were invited to come by and sample the high-quality factual information on offer – and were encouraged to leave their own offerings. The high-quality facts encouraged visitors; some visitors would leave their own contributions, high-quality facts which would encourage more visitors, and so, in a “virtuous cycle”, Wikipedia grew as large as, then far larger than Encyclopedia Britannica.
Today, we don’t even give a thought to Britannica. It may be the gold-standard reference work in the English language, but no one cares. Wikipedia is good enough, accurate enough (although Wikipedia was never intended to be a competitor to Britannica, by 2005 Nature was doing comparative testing of article accuracy) and is much more widely available. Britannica has had its market eaten up by Wikipedia, a market it dominated for two hundred years. It wasn’t the server crash that doomed Britannica; when the business minds at Britannica tried to crash through into profitability, that’s when they crashed into the paywall they themselves established. Watch carefully: over the next decade we’ll see the somewhat drawn out death of Britannica as it becomes ever less relevant in a Wikipedia-dominated landscape.
Just a few weeks ago, the European Union launched a new website, Europeana. Europeana is a repository, a collection of the cultural heritage of Europe, made freely available to everyone in the world via the Web. From Descartes to Darwin to Debussy, Europeana hopes to become the online cultural showcase of European thought.
The creators of Europeana scoured Europe’s cultural institutions for items to be digitized and placed within its own collection. Many of these institutions resisted their requests – they didn’t see any demand for these items coming from online communities. As it turns out, these institutions couldn’t have been more wrong. Europeana launched on the 20th of November, and, like Britannica before it, almost immediately crashed. The servers overloaded as visitors from throughout the EU came in to look at the collection. Europeana has been taken offline for a few months, as the EU buys more servers and fatter pipes to connect it all to the Internet. Sometime in 2009 it will relaunch, and, if its brief popularity is any indication, we can expect Europeana to become another important online resource, like Wikipedia.
All three of these examples prove that there is an almost insatiable interest in factual information made available online, whether the dry articles of Wikipedia or the more bouncy cultural artifacts of Europeana. It’s also clear that arbitrarily restricting access to factual information simply directs the flow around the institution restricting access. Britannica could be earning over a hundred million dollars a year from advertising revenue – that’s what Wikipedia is projected to be able to earn, just from banner advertisements, if it ever accepted advertising. But Britannica chose to lock itself away from its audience. That is the one unpardonable sin in the network era: under no circumstances do you take yourself off the network. We all have to sink or swim, crash through or crash, in this common sea of openness.
I only hope that the European museums who have donated works to Europeana don’t suddenly grow possessive when the true popularity of their works becomes a proven fact. That will be messy, and will only hurt the institutions. Perhaps they’ll heed the lesson of Britannica; but it seems as though many of our institutions are mired in older ways of thinking, where selfishness and protecting the collection are seen as cardinal virtues. There’s a new logic operating: the more something is shared, the more valuable it becomes.
II: The Universal Library
Just a few weeks ago, Google took this idea to new heights. In a landmark settlement of a long-running copyright dispute with book publishers in the United States, Google agreed to pay a license fee to those publishers for their copyrights – even for books out of print. In return, the publishers are allowing Google to index, search and display all of the books they hold under copyright. Google already provides the full text of many books whose copyright has expired – its efforts scanning whole libraries at Harvard and Stanford have given Google access to many such texts. Each of these texts is indexed and searchable – just as with the books under copyright – but, in this case, the full text is available through Google’s book reader tool. For works under copyright but out of print, Google is now acting as the sales agent, translating document searches into book sales for the publishers, who may now see huge “long tail” revenues generated from their catalogues.
Since Google is available from every computer connected to the Internet (given that it is available on most mobile handsets, it’s available to nearly every one of the four billion mobile subscribers on the planet), this new library – at least seven million volumes – has become available everywhere. The library has become coextensive with the Internet.
This was an early dream both of the pioneers of personal computing and, later, of the Web. When CD-ROM was introduced, twenty years ago, it was hailed as the “new papyrus,” capable of storing vast amounts of information in a richly hyperlinked format. As the limits of CD-ROM became apparent, the Web became the repository of the hopes of all the archivists and bibliophiles who dreamed of a new Library of Alexandria, a universal library with every text in every tongue freely available to all.
We have now gotten as close to that ideal as copyright law will allow; everything is becoming available, though perhaps not as freely as a librarian might like. (For libraries, Google has established subscription-based fees for access to books covered by copyright.) Within another few years, every book within arm’s length of Google (and Google has many, many arms) will be scanned, indexed and accessible through books.google.com. This library can be brought to bear everywhere anyone sits down before a networked screen. This library can serve billions, simultaneously, yet never exhaust its supply of texts.
What does this mean for the library as we have known it? Has Google suddenly obsolesced the idea of a library as a building stuffed with books? Is there any point in going into the stacks to find a book, when that same book is equally accessible from your laptop? Obviously, books are a better form factor than our laptops – five hundred years of human interface design have given us a format which is admirably well-adapted to our needs – but in most cases, accessibility trumps ease-of-use. If I can have all of the world’s books online, that easily bests the few I can access within any given library.
In a very real sense, Google is obsolescing the library, or rather, one of the features of the library, the feature we most identify with the library: book storage. Those books are now stored on servers, scattered in multiple, redundant copies throughout the world, and can be called up anywhere, at any time, from any screen. The library has been obsolesced because it has become universal; the stacks have gone virtual, sitting behind every screen. Because the idea of the library has become so successful, so universal, it no longer means anything at all. We are all within the library.
III: The Necessary Army
With the triumph of the universal library, we must now ask: What of the librarians? If librarians were simply the keepers-of-the-books, we would expect them to fade away into an obsolescence similar to that of the physical library. And though this is the popular perception of the librarian, it is perhaps the least interesting of the tasks a librarian performs (although often the most visible).
The central task of the librarian – if I can be so bold as to state something categorically – is to bring order to chaos. The librarian takes a raw pile of information and makes it useful. How that happens differs from situation to situation, but all of it falls under the rubric of library science. At its most visible, the book cataloging systems used in all libraries represent the librarian’s best efforts to keep an overwhelming amount of information well-managed and well-ordered. A good cataloging system makes a library easy to use, whatever its size, however many volumes are available through its stacks.
It’s interesting to note that books.google.com uses Google’s text-search-based interface. Based on my own investigations, you can’t type in a Library of Congress catalog number and get a list of books under that subject area. Google seems to have abandoned – or ignored – library science in its own book project. I can’t tell you why this is; I can only tell you that it looks very foolish and naïve. It may be that Google’s army of PhDs does not include many library scientists. Otherwise, why would it have made such a beginner’s mistake? It smells of an amateur effort from a firm not known for amateurism.
It’s here that we can see the shape of the future, both in the immediate and the longer term. People believe that because we’re done with the library, we’re done with library science. They could not be more wrong. In fact, because the library is universal, library science now needs to be a universal skill set, more broadly taught than at any time before. We have become a data-centric culture, and are presently drowning in data. It’s difficult enough for us to keep our collections of music and movies well organized; how can we propose to deal with collections a hundred thousand times larger?
This is not just idle speculation; we are rapidly becoming a data-generating species. Where just a few years ago we might have generated only a small amount of data on a given day or in a given week, these days we generate data almost continuously. Consider: every text message sent, every email received, every snap of a camera or camera phone, every slip of video shared amongst friends. It all adds up, and it all needs to be managed and stored and indexed and retrieved with some degree of ease. Otherwise, in a few years’ time the recent past will have disappeared into the fog of unsearchability. In order to have a connection to our data selves of the past, we are all going to need to become library scientists.
All of which puts you in a key position for the transformation already underway. You get to be the “life coaches” for our digital lifestyle, because, as these digital artifacts start to weigh us down (like Jacob Marley’s lockboxes), you will provide the guidance that will free us from these weights. Now that we’ve got it, it’s up to you to tell us how we find it. Now that we’ve captured it, it’s up to you to tell us how we index it.
We have already taken some steps along this journey: much of the digital media we create can now be “tagged”, that is, assigned keywords which provide context and semantic value for the media. We each create “clouds” of our own tags which evolve into “folksonomies”, or home-made taxonomies of meaning. Folksonomies and tagging are useful, but we lack the common language needed to make our digital treasures universally useful. If I tag a photograph with my own tags, that means the photograph is more useful to me; but it is not necessarily more broadly useful. Without a common, public taxonomy (a cataloging system), tagging systems will not scale into universality. That universality has value, because it allows us to extend our searches, our view, and our capability.
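To make that scaling problem concrete, here is a minimal sketch – with entirely hypothetical tags, photos and mappings – of why a private folksonomy stops at its owner, and why a common, public taxonomy is what lets two people’s tag clouds become mutually useful:

```python
# Personal tag clouds: each person's photo id -> their own free-form tags.
# All names and tags here are invented for illustration.
alice = {"photo1": {"holiday", "beach", "qld"}}
bob = {"photo2": {"vacation", "seaside"}}

# A shared, public taxonomy maps idiosyncratic tags to canonical terms.
taxonomy = {
    "holiday": "travel", "vacation": "travel",
    "beach": "coast", "seaside": "coast",
}

def canonical(tags):
    """Translate a personal tag set into the common vocabulary;
    tags with no mapping pass through unchanged."""
    return {taxonomy.get(t, t) for t in tags}

# The raw, private tags share nothing at all...
print(alice["photo1"] & bob["photo2"])  # empty set
# ...but under the common taxonomy the overlap appears.
print(canonical(alice["photo1"]) & canonical(bob["photo2"]))
```

The design point is the second intersection: without the shared `taxonomy` table, Alice’s “holiday” and Bob’s “vacation” never meet, which is exactly the sense in which private tagging fails to scale into universality.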
I could go on and on, but the basic point is this: wherever data is being created, that’s the opportunity for library science in the 21st century. Since data is being created almost absolutely everywhere, the opportunities for library science are similarly broad. It’s up to you to show us how it’s done, lest we drown in our own creations.
Some of this won’t come to pass until you move out of the libraries and into the streets. Library scientists have to prove their worth; most people don’t understand that they’re slowly drowning in a sea of their own information. This means you have to demonstrate other ways of working that are self-evident in their effectiveness. The proof of your value will be obvious. It’s up to you to throw the rest of us a life-preserver; once we’ve caught it, once we’ve caught on, your future will be assured.
The dilemma that confronts us is that for the next several years, people will be questioning the value of libraries; if books are available everywhere, why pay the upkeep on a building? Yet the value of a library is not the books inside, but the expertise in managing data. That can happen inside of a library; it has to happen somewhere. Libraries could well evolve into the resource the public uses to help manage their digital existence. Librarians will become partners in information management, indispensable and highly valued.
In a time of such radical and rapid change, it’s difficult to know exactly where things are headed. We know that books are headed online, and that libraries will follow. But we still don’t know the fate of librarians. I believe that the transition to a digital civilization will founder without a lot of fundamental input from librarians. We are each becoming archivists of our lives, but few of us have training in how to manage an archive. You are the ones who have that knowledge. Consider: the more something is shared, the more valuable it becomes. The more you share your knowledge, the more invaluable you become. That’s the future that waits for you.
Finally, consider the examples of Britannica and Europeana. The demand for those well-curated collections of information far exceeded even the wildest expectations of their creators. Something similar lies in store for you. When you announce yourselves to the broader public as the individuals empowered to help us manage our digital lives, you’ll doubtless find yourselves overwhelmed with individuals who are seeking to benefit from your expertise. What’s more, to deal with the demand, I expect Library Science to become one of the hot subjects of university curricula of the 21st century. We need you, and we need a lot more of you, if we ever hope to make sense of the wonderful wealth of data we’re creating.
If a picture paints a thousand words, you’ve just absorbed a million, the equivalent of one-and-a-half Bibles. That’s the way it is, these days. Nothing is small, nothing discrete, nothing bite-sized. Instead, we get the fire hose, 24 x 7, a world in which connection and community have become so colonized by intensity and amplification that nearly nothing feels average anymore.
Is this what we wanted? It’s become difficult to remember the before-time, how it was prior to an era of hyperconnectivity. We’ve spent the last fifteen years working out the most excellent ways to establish, strengthen and multiply the connections between ourselves. The job is nearly done, but now, as we put down our tools and pause to catch our breath, here comes the question we’ve dreaded all along…
Why. Why this?
I gave this question no thought at all as I blithely added friends to Twitter, shot past the limits of Dunbar’s Number, through the ridiculous, and then outward, approaching the sheer insanity of 1200 so-called “friends” whose tweets now scroll by so quickly that I can’t focus on anyone saying anything, because the motion blur is such that by the time I think to answer in reply, the tweet in question has scrolled off the end of the world.
This is ludicrous, and can not continue. But this is vital and can not be forgotten. And this is the paradox of the first decade of the 21st century: what we want – what we think we need – is making us crazy.
Some of this craziness is biological.
Eleven million years of evolution, back to Proconsul, the ancestor of all the hominids, have crafted us into quintessentially social creatures. We are human to the degree we are in relationship with our peers. We grew big forebrains, to hold banks of the chattering classes inside our own heads, so that we could engage these simulations of relationships in never-ending conversation. We never talk to ourselves, really. We engage these internal others in our thoughts, endlessly rehearsing and reliving all of the social moments which comprise the most memorable parts of life.
It’s crowded in there. It’s meant to be. And this has only made it worse.
No man is an island. Man is only man when he is part of a community. But we have limits. Homo sapiens sapiens spent two hundred thousand years exploring the resources afforded by a bit more than a liter of neural tissue. The brain has physical limits (we have to pass through the birth canal without killing our mothers), so our internal communities top out at Dunbar’s magic Number of 150, plus or minus a few.
Dunbar’s Number defines the crucial threshold between a community and a mob. Communities are made up of memorable and internalized individuals; mobs are unique in their lack of distinction. Communities can be held in one’s head, can be tended and soothed and encouraged and cajoled.
Four years ago, when I began my research into sharing and social networks, I asked a basic question: Will we find some way to transcend this biological limit, break free of the tyranny of cranial capacity, grow beyond the limits of Dunbar’s Number?
After all, we have the technology. We can hyperconnect in so many ways, through so many media, across the entire range of sensory modalities, it is as if the material world, which we have fashioned into our own image, wants nothing more than to boost our capacity for relationship.
And now we have two forces in opposition, both originating in the mind. Our old mind hews closely to the community and Dunbar’s Number. Our new mind seeks the power of the mob, and the amplification of numbers beyond imagination. This is the central paradox of the early 21st century, this is the rift which will never close. On one side we are civil, and civilized. On the other we are awesome, terrible, and terrifying. And everything we’ve done in the last fifteen years has simply pushed us closer to the abyss of the awesome.
We can not reasonably put down these new weapons of communication, even as they grind communities beneath them like so many old and brittle bones. We can not turn the dial of history backward. We are what we are, and already we have a good sense of what we are becoming. It may not be pretty – it may not even feel human – but this is things as they are.
When the historians of this age write their stories, a hundred years from now, they will talk about amplification as the defining feature of this entire era, the three hundred year span from industrial revolution to the emergence of the hyperconnected mob. In the beginning, the steam engine amplified the power of human muscle – making both human slavery and animal power redundant. In the end, our technologies of communication amplified our innate social capabilities – capabilities which natural selection has consistently favored across those eleven million years. Above and beyond all of our other natural gifts, those humans who communicate most effectively stand the greatest chance of passing their genes along to subsequent generations. It’s as simple as that. We talk our partners into bed, and always have.
The steam engine transformed the natural world into a largely artificial environment; the amplification of our muscles made us masters of the physical world. Now, the technologies of hyperconnectivity are translating the natural world, ruled by Dunbar’s Number, into one ruled by the dominating influence of the maddening crowd.
We are not prepared for this. We have no biological defense mechanism. We are all going to have to get used to a constant state of being which resembles nothing so much as a stack overflow, a consistent social incontinence, as we struggle to retain some aspects of selfhood amidst the constantly eroding pressure of the hyperconnected mob.
Given this, and given that many of us here today are already in the midst of this, it seems to me that the most useful tool any of us could have, moving forward into this future, is a social contextualizer. This prosthesis – which might live in our mobiles, or our nettops, or our Bluetooth headsets – will fill our limited minds with the details of our social interactions.
This tool will make explicit that long, Jacob Marley-like train of lockboxes that are our interactions in the techno-social sphere. Thus, when I introduce myself to you for the first or the fifteen hundredth time, you can be instantly brought up to date on why I am relevant, why I matter. When all else gets stripped away, each relationship has a core of salience which can be captured (roughly), and served up every time we might meet.
I expect that this prosthesis will come along sooner rather than later, and that it will rival Google in importance. Google took too much data and made it roughly searchable. This prosthesis will take too much connectivity and make it roughly serviceable. Given that we are primarily social beings, I expect it to be a greater innovation, and more broadly disruptive.
And this prosthesis has a precedent: at Xerox PARC they have been looking into a ‘human memory prosthesis’ for sufferers of senile dementia, a device which constantly jogs human memory as to task, place, and people. The world that we’re making for ourselves, every time we connect, is a place where we are all (in some relative sense) demented. Without this tool we will be entirely lost. We’re already slipping beneath the waves. We need this soon. We need this now.
I hope you’ll get inventive.
Now that we have comfortably settled into the central paradox of our current era, with a world that is working through every available means to increase our connectivity, and a brain that is suddenly overloaded and sinking beneath the demands of the sum total of these connections, we need to ask that question: Exactly what is hyperconnectivity good for? What new thing does that bring us?
The easy answer is the obvious one: crowdsourcing. The action of a few million hyperconnected individuals resulted in a massive and massively influential work: Wikipedia. But the examples only begin there. They range much further afield.
Uni students have been sharing their unvarnished assessments of their instructors and lecturers. Ratemyprofessors.com has become the bête noire of the academy, because researchers who can’t teach find they have no one signing up for their courses, while the best lecturers, with the highest ratings, suddenly find themselves swarmed with offers for better teaching positions at more prestigious universities. A simple and easily implemented system of crowdsourced reviews has quietly undone all of the work of the tenure boards of the academy.
It won’t be long until everything else follows. Restaurant reviews – that’s done. What about reviews of doctors? Lawyers? Indian chiefs? Politicians? ISPs? (Oh, wait, we have that with Whirlpool.) Anything you can think of. Anything you might need. All of it will have been so extensively reviewed by such a large mob that you will know nearly everything that can be known before you sign on that dotted line.
All of this means that every time we gather together in our hyperconnected mobs to crowdsource some particular task, we become better informed, we become more powerful. Which means it becomes more likely that the hyperconnected mob will come together again around some other task suited to crowdsourcing, and will become even more powerful. That system of positive feedbacks – which we are already quite in the midst of – is fashioning a new polity, a rewritten social contract, which is making the institutions of the 19th and 20th centuries – that is, the industrial era – seem as antiquated and quaint as the feudal systems which they replaced.
It is not that these institutions are dying, but rather, they now face worthy competitors. Democracy, as an example, works well in communities, but can fail epically when it scales to mobs. Crowdsourced knowledge requires a mob, but that knowledge, once it has been collected, can be shared within a community, to hyperempower that community. This tug-of-war between communities and crowds is setting all of our institutions, old and new, vibrating like taut strings.
We already have a name for this small-pieces-loosely-joined form of social organization: it’s known as anarcho-syndicalism. Anarcho-Syndicalism emerged from the labor movements that grew in numbers and power toward the end of the 19th century. Its basic idea is simply that people will choose to cooperate more often than they choose to compete, and this cooperation can form the basis for a social, political and economic contract wherein the people manage themselves.
A system with no hierarchy, no bosses, no secrets, no politics. (Well, maybe that last one is asking too much.) Anarcho-syndicalism takes as a given that all men are created equal, and therefore each have a say in what they choose to do.
Somewhere back before Australia became a nation, anarcho-syndicalist trade unions like the Industrial Workers of the World (or, more commonly, the ‘Wobblies’) fought armies of mercenaries in the streets of the major industrial cities of the world, trying to get the upper hand in the battle between labor and capital. They failed because capital could outmaneuver labor in the 19th century. Today the situation is precisely reversed. Capital is slow. Knowledge is fast, the quicksilver that enlivens all our activities.
I come before you today wearing my true political colors – literally. I did not pick a red jumper and black pants by some accident or wardrobe malfunction. These are the colors of anarcho-syndicalism. And that is the new System of the World.
You don’t have to believe me. You can dismiss my political posturing as sheer radicalism. But I ask you to cast your mind further than this stage this afternoon, and look out on a world which is permanently and instantaneously hyperconnected, and I ask you – how could things go any other way? Every day one of us invents a new way to tie us together or share what we know; as that invention is used, it is copied by those who see it being used.
When we imitate the successful behaviors of our hyperconnected peers, this ‘hypermimesis’ means that we are all already in a giant collective. It’s not a hive mind, and it’s not an overmind. It’s something weirdly in-between. Connected we are smarter by far than we are as individuals, but this connection conditions and constrains us, even as it liberates us. No gift comes for free.
I assert, on the weight of a growing mountain of evidence, that anarcho-syndicalism is the place where the community meets the crowd; it is the environment where this social prosthesis meets that radical hyperempowerment of capabilities.
Let me give you one example, happening right now. The classroom walls are disintegrating (and thank heaven for that), punctured by hyperconnectivity, as the outside world comes rushing in to meet the student, and the student leaves the classroom behind for the school of the world. The student doesn’t need to be in the classroom anymore, nor does the false rigor of the classroom need to be drilled into the student. There is such a hyperabundance of instruction and information available that students need a mentor more than a teacher, a guide through the wilderness rather than a penitentiary to prevent their journey.
Now the students, and their parents – and the teachers and instructors and administrators – need to find a new way to work together, a communion of needs married to a community of gifts. The school is transforming into an anarcho-syndicalist collective, where everyone works together as peers, comes together in a “more perfect union”, to educate. There is no more school-as-a-place-you-go-to-get-your-book-learning. School is a state of being, an act of communion.
If this is happening to education, can medicine, and law, and politics be so very far behind? Of course not. But, unlike the elites of education, these other forces will resist and resist and resist all change, until such time as they have no choice but to surrender to mobs which are smarter, faster and more flexible than they are. In twenty years’ time, all these institutions will be all but unrecognizable.
All of this is light-years away from how our institutions have been designed. Those institutions – all institutions – are feeling the strain of informational overload. More than that, they’re now suffering the death of a thousand cuts, as the various polities serviced by each of these institutions actually outperform them.
You walk into your doctor’s office knowing more about your condition than your doctor. You understand the implications of your contract better than your lawyer. You know more about a subject than your instructor. That’s just the way it is, in the era of hyperconnectivity.
So we must band together. And we already have. We have come together, drawn by our interests, put our shoulders to the wheel, and moved the Earth upon its axis. Most specifically, those of you in this theatre with me this arvo have made the world move, because the Web is the fulcrum for this entire transformation. In less than two decades we’ve gone from physicists’ plaything to rewriting the rules of civilization.
But try not to think about that too much. It could go to your head.
III. THE OTHER.
Back in July, just after Vodafone had announced its meager data plans for iPhone 3G, I wrote a short essay for Ross Dawson’s Future of Media blog. I griped and bitched and spat the dummy, summing things up with this line:
“It’s time to show the carriers we can do this ourselves.”
I recommended that we start the ‘Future Australian Carrier’, or FAUC, and proceeded to invite all of my readers to get FAUCed. A harmless little incitement to action. What could possibly go wrong?
Within a day’s time a FAUC Facebook group had been started – without my input – and I was invited to join. Over the next two weeks about four hundred people joined that group, individuals who had simply had enough grief from their carriers and were looking for something better. After that, although there was some lively discussion about a possible logo, and some research into how MVNOs actually worked, nothing happened.
About a month later, individuals began to ping me, both on Facebook and via Twitter, asking, “What happened with that carrier you were going to start, Mark? Hmm?” As if somehow, I had signed on the dotted line to be chief executive, cheerleader, nose-wiper and bottle-washer for FAUC.
All of this caught me by surprise, because I certainly hadn’t signed up to create anything. I’d floated an idea, nothing more. Yet everyone was looking to me to somehow bring this new thing into being.
After I’d been hit up a few times, I started to understand where the epic !FAIL! had occurred. And the failure wasn’t really mine. You see, I’ve come to realize a sad and disgusting little fact about all of us: We need and we need and we need.
We need others to gather the news we read. We need others to provide the broadband we so greedily lap up. We need others to govern us. And god forbid we should be asked to shoulder some of the burden. We’ll fire off a thousand excuses about how we’re so time-poor even the cat hasn’t been fed in a week.
So, sure, four hundred people might sign up to a Facebook group to indicate their need for a better mobile carrier, but would any of them think of stepping forward to spearhead its organization, its cash-raising, or its leasing agreements? No. That’s all too much hard work. All any of these people needed was cheap mobile broadband.
Well, cheap don’t come cheaply.
Of course, this happens everywhere up and down the commercial chain of being. QANTAS and Telstra outsource work to southern Asia because they can’t be bothered to pay for local help, because their stockholders can’t be bothered to take a small cut in their quarterly dividends.
There’s no difference in the act itself, just in its scale. And this isn’t even raw economics. This is a case of being penny-wise and pound-foolish. Carve some profit today, spend a fortune tomorrow to recover. We see it over and over and over again (most recently and most expensively on Wall Street), but somehow the point never makes it through our thick skulls. It’s probably because we human beings find it much easier to imagine three months into the future than three years. That’s a cognitive feature which helps if you’re on the African savannah, but sucks if you’re sitting in an Australian boardroom.
So this is the other thing. The ugly thing that no one wants to look at, because to look at it involves an admission of laziness. Well folks, let me be the first one here to admit it: I’m lazy. I’m too lazy to administer my damn Qmail server, so I use Gmail. I’m too lazy to set up WebDAV, so I use Google Docs. I’m too lazy to keep my devices synced, so I use MobileMe. And I’m too lazy to start my own carrier, so instead I pay a small fortune each month to Vodafone, for lousy service.
And yes, we’re all so very, very busy. I understand this. Every investment of time is a tradeoff. Yet we seem to defer, every time, to let someone else do it for us.
And is this wise? The more I see of cloud computing, the more I am convinced that it has become a single-point-of-failure for data communications. The decade-and-a-half that I spent as a network engineer tells me that. Don’t trust the cloud. Don’t trust redundancy. Trust no one. Keep your data in the cloud if you must, but for goodness’ sake, keep another copy locally. And another copy on the other side of the world. And another under your mattress.
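If you want the “trust no one” rule in its most literal form, here is a minimal sketch – paths and names entirely hypothetical, no particular cloud API assumed – of the discipline I mean: every copy gets replicated and then verified by content hash, rather than trusted:

```python
# Sketch: replicate a file to several independent destinations and
# verify each copy against a content hash, instead of trusting any
# single provider's redundancy. All paths here are illustrative.
import hashlib
import shutil
from pathlib import Path

def checksum(path: Path) -> str:
    """Content hash of a file, so a copy can be verified, not assumed."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def replicate(source: Path, destinations: list[Path]) -> bool:
    """Copy source to every destination, then confirm every copy
    matches the original byte-for-byte. Returns True only if all do."""
    original = checksum(source)
    for dest in destinations:
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, dest)  # preserves timestamps as well
    return all(checksum(d) == original for d in destinations)

# Usage: one copy "in the cloud", one local, one offsite -- any single
# failure still leaves verified survivors.
```

The point of the design is the final verification pass: redundancy you haven’t checked is exactly the kind of redundancy I’m telling you not to trust.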
I’m telling you things I shouldn’t have to tell you. I’m telling you things that you already know. But the other, this laziness, it’s built into our culture. Socially, we have two states of being: community and crowd. A community can collaborate to bring a new mobile carrier into being. A crowd can only gripe about their carrier. And now, as the strict lines between community and crowd get increasingly confused because of the upswing in hyperconnectivity, we behave like crowds when we really ought to be organizing like a community.
And this, at last, is the other thing: the message I really want to leave you with. You people, here in this auditorium today, you are the masters of the world. Not your bosses, not your shareholders, not your users. You. You folks, right here and right now. The keys to the kingdom of hyperconnectivity have been given to you. You can contour, shape and control that chaotic meeting point between community and crowd. That is what you do every time you craft an interface, or write a script. Your work helps people self-organize. Your work can engage us at our laziest, and turn us into happy worker bees. It can be done. Wikipedia has shown the way.
And now, as everything hierarchical and well-ordered dissolves into the grey goo which is the other thing, you have to ask yourself, “Who does this serve?”
At the end of the day, you’re answerable to yourself. No one else is going to do the heavy lifting for you. So when you think up an idea or dream up a design, consider this: Will it help people think for themselves? Will it help people meet their own needs? Or will it simply continue to infantilize us, until we become a planet of dummy-spitting, whinging wankers?
It’s a question I ask myself, too, a question that’s shaping the decisions I make for myself. I want to make things that empower people, so I’ve decided to take some time to work with Andy Coffey, and re-think the book for the 21st century. Yes, that sounds ridiculous and ambitious and quixotic, but it’s also a development whose time is long overdue. If it succeeds at all, we will provide a publishing platform for people to share their long-form ideas. Everything about it will be open source and freely available to use, to copy, and to hack, because I already know that my community is smarter than I am.
And it’s a question I have answered for myself in another way. This is my third annual appearance before you at Web Directions South. It will be the last time for some time. You people are my community: where I knew none of you back in 2006, I consider many of you friends in 2008. Yet, when I talk to you like this, I get the uncomfortable feeling that my community has become a crowd. So, for the next few years, let’s have someone else do the closing keynote. I want to be with my peeps, in the audience, and on the Twitter backchannel, taking the piss and trading ideas.
The future – for all of us – is the battle over the boundary between the community and the crowd. I am choosing to embrace the community. It seems the right thing to do. And as I walk off-stage here, this afternoon, I want you to remember that each of you holds the keys to the kingdom. Our community is yours to shape as you will. Everything that you do is translated into how we operate as a culture, as a society, as a civilization. It can be a coming together, or it can be a breaking apart. And it’s up to you.
I have always loved to write. As far back as I possessed the capability to scribble a coherent narrative onto a piece of paper, I’ve written stories. I remember writing a short story in third or fourth grade, about astronauts on the first voyage to Mars. Many words about the launch, a few words about the journey, then a quick, mysterious conclusion once they landed. It all ended rather badly, I recall, with just a last call for help coming across the twenty-minute delayed airwaves, before all went silent.
In my senior year of high school, when I took “advanced placement” English, I had a teacher who was both English and the mother of one of my good friends. In a small class – only about ten of us – she unmercifully drilled the rules and structure of the essay into us, marking off points for every misspelled word. As I have always spelled atrociously, I had to make up for it by scoring very highly on composition skills. (Thankfully, computers do our spelling for us now, which shows you the banality of the task – better automated than done by a person.) I learned to avoid the passive voice, learned to litter my texts with commas – to better approximate the cadence of the “inner voice” – and wrote a thirty-page research paper on T.S. Eliot’s “The Waste Land”, which, I now realize, I understood not at all.
At MIT, I had the good fortune to have Frank Conroy as my lecturer in a short story writing workshop. MIT, hardly known as a bastion of the humanities – except insofar as economics, the dismal science, falls under that umbrella – did have the money and the reputation to attract some of the very best writers and thinkers in the United States. William Irwin Thompson, whom I regard as one of the fundamental influences in my thinking, taught at MIT in the 1960s, if only so he could spend the next thirty years writing piercing critiques of managerial civilization. Frank Conroy, by the time I’d met him, had already received broad critical acclaim for his novel-memoir Stop-Time, which earned a nomination for the National Book Award. A classic archetype of the humanities professor, with tweed jacket and pullover sweater and an unruly shock of graying hair and nicotine stains on his teeth, Conroy loved writing, and passed that love along to his students. At MIT, where most writing took place in LISP or FORTRAN, not English, that represented a Sisyphean task. Students enrolled in humanities courses not for any love of Proust or Picasso or Prokofiev, but because they had graduation requirements to fulfill. Resented as interruptions in the “real” work of mastering nature, the humanities continue to thrive at MIT despite persistent institutional neglect. None of this seemed to bother Conroy; he took the shy freshmen who attended his lectures and drew them out, encouraging them to explore the world inside their heads on the written page.
That year, I won the Freshman Fiction Award at MIT – my only moment of academic distinction during my curtailed tenure there. Frank Conroy deserves the credit for that. He taught me to avoid the passive voice: “Look to Orwell. Read Nineteen Eighty-Four. You can go pages before you read a single sentence in the passive voice.” He taught me that the true stories, the best stories, come from experience. Write what you know, as clearly and capably as you can. Show, don’t tell; let the story expose itself.
I failed academically at MIT, missing many, many lectures – too depressed, some days, to get out of bed until after sunset. Nonetheless, at the end of term, writing prize in hand, I visited Conroy in his office, to thank him. He seemed genuinely surprised and touched. “I’m just sorry we didn’t have more time together,” he remarked, gently upbraiding me for my ever-more-frequent absences. “I’m leaving at the end of term.” Conroy had just received an appointment to head the Literature Program at the National Endowment for the Arts, a perch from which he would nurture an entire generation of American writers. That was the last time I ever saw him.
That last sentence, in the passive voice, marks the first you’ve read so far. Frank Conroy taught me well.
Fourteen years later, following the invention of VRML, I received an offer from a technical publishing company, New Riders Publishing, to write the very first book on VRML. I’d never thought I’d have the opportunity to write a book on any subject; in the years since dropping out of MIT, I’d become a professional software engineer. With only a few small exceptions, all of my writing took place in assembly language or ‘C’. I poured my intellect into code, banging bits, breathing life into programs. But a book about a subject near and dear to me, a subject that I (arguably) knew better than any other person, that seemed tailor-made. I accepted, without really knowing what would happen next.
I procrastinated. And procrastinated. Something about facing not just a single sheet of blank paper, but two hundred of them, freaked me out. My publisher, growing worried, finally sent me an email which simply read:
Your house is burning down.
Meaning, I suppose, that unless I delivered a manuscript, more or less immediately, that’d be an end to it. No book, no deal, no nothing. I flew to my father’s house, just outside of Boston, sat down at my laptop, and cranked out the manuscript – all 350 pages – in just 31 days.
I know that’s considered extraordinarily fast, but I’ve always written quickly. Words either come or they do not, and I can gauge my own engagement with the subject by how quickly they come. (For example, I’ve written the last thousand-or-so words in an hour’s time. That’s just about my usual rate when I’m writing.) Writing cannot be forced. If it takes days to write a single paragraph, I’ve learned to recognize that I’m simply not yet ready to write. Though never fickle, my muse won’t be hurried. But, when I’m ready to write, it becomes almost impossible to avoid. The words create a strange pressure within me, wanting to pound their way out of my head and onto the page. Over the years, that pressure has driven me to produce several thousand pages of written works: books, scholarly articles, opinions and commentaries, and many, many essays.
The essay is my preferred form. It feels appropriate and very natural. From the French for “to attempt” (essayer), an essay allows the author to mix the personal and subjective with the actual and authoritative. Joan Didion – perhaps the greatest American essayist of the 20th century (and, so far, the greatest of the 21st) – combined her own neurotic and apocalyptic visions of a culture in collapse with the observational techniques of a city desk reporter to produce Slouching Towards Bethlehem, perhaps the definitive assessment of the ’60s counterculture in San Francisco. William Irwin Thompson combined his own neurotic and apocalyptic visions of a culture in collapse with the observational techniques of an ethnographer to produce Getting Back to Things at MIT, arguably the definitive humanistic critique of late Industrial Era civilization. (As a student at MIT, I received a reprint of that essay no fewer than three times – on the first day of three different humanities classes.) Hunter S. Thompson, though normally thought of as a journalist, wrote as an essayist: personal, poignant, and angry. And neurotic and apocalyptic.
Neurosis, we’ve learned, consists of a state of anxious awareness. Neurotics, intensely aware of the world around them, fear it may suddenly strike against them. Apparently, this has survival value: in times of chaos, the neurotic is the seeing man in the kingdom of the blind, and lives long enough to pass his neurotic genes along to another neurotic generation. Neurotics are nearly always apocalyptic in their thinking; the interior landscape of imminent doom, amplified across the perceptions of the psyche, becomes the visions of St. John of Patmos, a current of literature that flows on down through the ages to the Quetzalcoatl prophecies of Daniel Pinchbeck.
I am a neurotic, and my penchant toward the apocalyptic, well documented on YouTube and through various other media freely downloadable on the Internet, leaves little to the imagination. That I have gone quiet about various apocalyptic scenarios (for instance, I have said nothing about “2012” since a talk given at Burning Man in 2006) does not mean that I no longer entertain them. I read my various blogs, each of which, in its own particular way, echoes my apocalyptic turn of mind. I can fantasize about an oil crash, or an economic crash, or a crash of civilizational over-complexity (as New Scientist did, just a few weeks ago), or dream of a sudden, machinic singularity. I can scare myself, grinning into the funhouse mirrors of my neurotic mind, and, in so doing, come back with some ideas, which, when clothed in the appropriate language, seem not so much scary as entertaining and enlightening. Neurosis as creative strength.
But it does not do to scare the horses. Although my fellow neurotics want to hear the rising winds of chaos battering at the flimsy walls of human culture, I do not want to be a prophet. Instead, reason prevails throughout my work. Although The Playful World closes on what could be read as a fairly apocalyptic note, a world where the tide of history reverses, and parents learn the new language of the world from their children (a vision which, I will note here, appears to be coming to pass), those last few pages present a vista broad enough to allow a multitude of different readings. I do not intend to scare, and if you feel your heart beating faster as you close the pages of that book, that tells you more about you than about me. I simply painted as honest a picture as I knew how.
My next book – the current book – will definitively end on an apocalyptic note. I wrestled with this, for many months, until I accepted that if I tell the story in any other way, it will not feel true. The transformations in human behavior, cultural organization, and our sudden rise into hyperempowerment mean that things will be growing increasingly chaotic for some years to come. This does not necessarily mean we will be doomed to an endless “War of all against all,” as prophesied by Hobbes. Forces will rise to oppose the forces of chaos; this may well result in even more chaos, but I consider it equally likely that the dynamic opposition of well-matched hyperempowered polities will result in a new form of social stability – one which looks nothing like anything we’re familiar with today. Either way, that is apocalypse, because, whichever outcome, everything utterly changes.
I have been working toward the expression of this idea for quite some time. Looking back on Becoming Transhuman, a feature-length film/performance piece I created for MINDSTATES 2001, I can see some of the themes of The Human Network in their embryonic form. This idea has been with me a while, but only now have I learned the language necessary to express it in terms comprehensible to a broad audience of people who do not share my own neurotic tendencies. The film is not the book, but points directly toward the book. The times have caught up with my own apocalyptic visions. And I have found the words which will allow me to shout “Fire!” in a crowded theatre – without starting a riot.
The Human Network opens with a basic assertion: the more something is shared, the more valuable it becomes. I can demonstrate the truth of this statement, and will do so repeatedly throughout the first several chapters. I know full well that Cory Doctorow and Charles Stross and countless other writers have put the full texts of their work online, and that this has not cannibalized their sales, but increased them. I bought Stross’ Accelerando after I downloaded the entire text, read the first chapter on my computer, and realized I needed to have a printed copy of my own. I know this works. But can I convince any potential publisher to release The Human Network freely online at publication?
Publishing, hardly the most cutting-edge of industries, has mostly been immune to the rise of social media. Yes, Harry Potter and the Deathly Hallows showed up on file-sharing sites a few days before its international release, but that didn’t impact sales at all, despite the wails of complaint from Bloomsbury Publishing. A freely available electronic copy does not seem to interfere with physical sales of printed books. Whether most publishers know this, or care to know it, remains an open question. But how can I sign any publishing deal which constrains my work in ways which, given the points I make in the text, I consider both out-of-step with the times and actually detrimental to the long-term value of the work?
Yesterday, I posted the “Overview” section of the book proposal to this blog. That, in itself, was a remarkably bold act. Book proposals are regarded as “business confidential” material by all parties to a book deal – the author, the literary agent, and the publisher. The ideas contained within the proposal – which reflect the ideas explored in the book – are meant to be kept close to the chest, until the publisher’s marketing machinery cranks up the noise before an impending release. In this sense, book marketing is a carefully scripted, but utterly false drama: “Look at this new exciting thing!” A proposal revealed undercuts this sense of drama, even as it potentially builds up an audience interested in the book. I may have shot myself in the foot by posting this portion of the proposal. I may have made it difficult, if not impossible, to get a book deal. I knew this full well, and posted it anyway.
Now things get thornier. The next sections of the proposal – which I am meant to be writing today – are synopses of the various chapters of the book. They’ll all be short, perhaps a page in length, but will explore the ideas and the narrative structure of each chapter, noting how each builds on the chapter before, giving an interested publisher a good sense of how I’ll build the argument and carry it through to a successful conclusion. This is necessary for a publisher to read, but do I want to reveal it to my audience?
While I do firmly believe in transparency, I instinctively recoil from publicly providing a Cliffs Notes version of my text, which someone could scan through and feel as though they’d absorbed the key ideas in my work. This would not be true, because books always take weird and interesting directions in the writing, directions that even the author remains unaware of until the words appear on the page. But some might think, “Oh yeah, I read his chapter synopses, I know what he’s on about.” Perhaps I shouldn’t care; perhaps these people wouldn’t read my book in any case, freely available or purchased at the bookstore. Perhaps I should simply be glad that some of the space in their heads has been colonized by my ideas. And given that I do believe – and will demonstrate in the book – that sharing expertise results in an aggregate rise in the level of human intelligence, I should be satisfied with this. It is enough.
So here, at the end of this very odd essay – quite unlike any of the others posted on The Human Network blog – you have seen me argue myself into a reasoned position for complete, radical transparency. Transparency incurs costs: people can (and will) steal your ideas, your customers, the food from your mouth. But, in order to steal my ideas, you must first comprehend them, and in understanding my ideas you’ll realize that this kind of theft is impossible. Stealing my ideas only makes them more valuable, and makes me, as the originator of these ideas, more influential. Instead, absorb my work, improve upon it, then share those new ideas. In this way, you too will become influential, and I will find myself borrowing from your work.
In March of 2008, someone – probably in India – bought a mobile telephone. By itself, that wouldn’t be particularly noteworthy, yet it represented a watershed: the halfway mark of humanity’s accelerating interconnection. Over 3.5 billion mobile subscribers, or one person in two, are wired into the global network. Most of these people live in the “developing” countries, where incomes average just a few dollars a day. Desperately poor by the standards of the “developed” world, why would these people waste their meager resources on something that, to most of us, seems little more than a useful toy?
In the developed world, mobile phones are nearly ubiquitous: only toddlers, the very oldest seniors, and technophobes have resisted their allure. Parents give their children mobiles with global satellite tracking features, so they can search the web to find out where their kids are – and snoop into where they’ve been. Adults use mobile telephones to smooth the frictions of social life: in the age of the mobile, one can phone ahead. No one is late anymore, just delayed. Your productive business life can follow you anywhere – into bed, on vacation, even into the middle of an argument. We enjoy – and suffer through – a life of seamless connectivity.
This is new, and it is very important.
For the nearly two hundred thousand years of human presence on Earth, our lives have been bounded by how far we could throw our voices. Yodelers once scaled Alpine mountaintops to sing to the valleys below; today, a communications satellite, perched some 22,300 miles above the equator, can reach half the planet. During the 20th century, radio transmitters (which, like yodelers, started off on mountaintops, but later migrated into orbit) transmitted one message to many receivers. We could hear and then see things that happened far away from our own ears and eyes, and know more about what happened in Washington D.C., on any given day, than what took place in the next town over. As we entered the 21st century, that comfortable (if paradoxical) relationship to the world beyond the reach of our own voices, which most of us had known for most of our lives, suddenly disintegrated. People began to talk with one another.
Nothing at all surprising about that: people have always talked with one another. Communication is arguably the defining feature of homo sapiens sapiens. We are the species that speaks. It is so much of what we are that vast sections of our brains are given over to the understanding of language. Children spend most of their first few years of life, their developing brains working overtime, intently studying every word that comes out of their parents’ mouths, learning to find meaning amidst all those strange sounds.
As a child practices her first few words, she receives encouragement and praise from her parents – who often can’t understand a word she’s saying, but nonetheless applaud every attempt. As she rises into mastery, first with a few simple words, then short phrases, then full-blown sentences, rich with meaning, she joins the “human network,” the age-old web of relationships which define humanity.
Communication shapes us in nearly every conceivable way. If we cannot communicate, we are cut off from the common life of our species, and cannot hope to survive. But, once we can communicate – with parents and peers – we begin to develop an ever-deepening web of connections with the people around us. This web, formally known as a “social network”, is so important to us that more of our brain is given over to tending and managing our social networks than to understanding language. Nearly all of our “prefrontal cortex” – the part of the brain which sits directly behind our foreheads – seems to be principally occupied with keeping us well-connected to our fellows.
Until about 10,000 years ago, we lived in tribes, groupings of several interrelated families who hunted and gathered their way across the landscapes of Africa, Asia, Europe and Australia. Tribes grew and shrank, through births and deaths, but never grew very large. A large tribe would divide into two smaller ones, along familial lines, and each would go their own way. The natural limit for tribes seems to be around 150 people – beyond that, the tribe always splinters. Why is this? That’s all the space we have in our brains. We can carry around a “mental picture” of about 150 people in our heads, but after that, we just run out of space. We can’t manage a social network any larger than that. We don’t have enough brains.
Fast-forward a hundred centuries: more than half of us now live in cities, not tribes. In our day-to-day lives we don’t feel immediately connected to a hundred and fifty other people. We have close relationships to our families, a handful of friends, and a few colleagues. We are more individual and more isolated than at any time in our common history as a species, yet the largest part of our brain tirelessly works toward building strong connections with others. Over the 20th century, we filled this vacuum with false relations: fans and stalkers, who so idolize their objects of affection (musicians, actors, politicians, etc.) that they build these false idols into their social networks. Ultimately unsatisfying, but better than a widening gyre of emptiness inside our heads.
Our ancestors in the family of man have used tools for at least two million years to increase their strength and extend their capabilities. An obsidian knife is a far better cutting tool than our teeth, and a bone needle better suited to its task than the most nimble fingertips. We domesticated aurochs (the ancestor of the ox) ten thousand years ago, using their strength to till our fields and carry our loads – and human capabilities took another huge leap forward.
Two hundred years ago, the steam engine multiplied human strength almost infinitely, and produced the Industrial Revolution. As railroads stitched their way across the planet, man could travel faster than a galloping horse; with a steam shovel, he could lift a load that all of Pharaoh’s slaves would have been crushed beneath; and with a telegraph, he could hear and be heard from one end of the Earth to the other, in a matter of moments. Technologies are amplifiers; they take some innate human capability and reinforce it, far beyond human limits, until it seems almost an entirely new thing. However alien they might seem to us, technologies are simply the funhouse mirror reflection of ourselves.
Just now – within the last ten years, or thereabouts – we have invented tools which amplify our innate desire to strengthen our human networks. Our wholly human and ancient capacity for communication and connection, so long the poor stepchild of all our technological prowess, is finally coming into its own.
This changes everything, in utterly unexpected ways.
Fishermen in India use text messages to solve a thousand-year-old problem with their fish markets, doubling their income; a teenager posts a party invitation to Facebook, and five hundred ‘friends’ show up to make trouble; repressive governments try to clamp down on dissent, only to find their latest outrage available for viewing on YouTube; a band of bloggers, undeterred by every dirty trick thrown at them by a slick bureaucracy, bring down the Attorney General of the United States. None of these singular events were in any way coordinated; no one at an imagined center was telling people to “do this” or “do that”. These things just happened, because our own capabilities as social beings in the human network are already so advanced, and so powerful, that when amplified – even the tiniest bit – we become potent almost beyond imagining.
The world’s vast swath of moderately poor people have put mobile telephones to work, dramatically increasing their ability to earn a living, using text messages to multiply the effectiveness of the human networks that we have all used, since time out of mind, to make our way in the world. That’s why a mobile phone is the new “must have” device for everyone on Earth: it’s a tool that helps the poor far more than it helps the rich, because, for the first time, they’re wired into the global human network. They already know how to use these networks – we all do – but the mobile telephone extends their reach, and amplifies their capabilities. This new “globalization” isn’t about spreading franchises of McDonald’s and Starbucks – it’s about a farmer in Kenya being able to call ahead to find out which market offers the best price for his maize crop.
Repeat that individual example a few billion times, and the startling power of the human network begins to reveal itself. We are finding new ways to communicate, connect and improve our lives, each of us carefully watching one another, each of us copying the best of what we see in the behavior of our peers, and applying it to our own lives. As our reach is extended, so is our ability to learn from one another. This global pooling of expertise – or “hyperintelligence” – leads directly to the phenomenal success of Wikipedia, an online encyclopedia created from millions of individual contributions, each contributor giving the best of what they know and, in return, enjoying the fruits of a planet full of smart people. For just a small contribution, the rewards are so disproportionate (like putting a single chip down on a roulette wheel, and getting the whole casino in return) that Wikipedia defines the first new model for human knowledge creation in at least a thousand years. Wikipedia helps us all to become smarter and more effective, because, by sharing the wealth of knowledge in each of our heads, we help one another make better decisions.
The more we learn to share through the human network, the more powerful we become, both as individuals and in groups. This has a shadow side: a text message, forwarded throughout a community of white supremacists, led to a race riot on a Sydney beach in December 2005; meanwhile, the loosely-affiliated groups who all call themselves ‘al Qaeda’ pool knowledge and resources in order to make their destabilizing acts of terror increasingly effective. Power is a two-edged sword, and most technologies can be used for good or ill.
At the same time, this new phenomenon of “hyperempowerment” – people using their newly-amplified capabilities in the human network – means that we’re not so easy to push around any more. Consumers can organize against nasty corporate behavior in moments; corporate executives nervously scan endless lists of comments on web sites, anxiously looking for signs of approaching trouble; governments regularly find their constituents running rings around them. The human network puts all of the power relationships that have dominated recent history into play; naturally, those with power are pushing back, but – as in the case of the record companies, who have tried to sue their customers into behaving legally – institutional power finds itself ever more effectively thwarted by diffuse and distributed efforts to oppose it.
The next decades of the 21st century will be dominated by the rise of the human network, as “hyper people power” rises up in unexpected, unpredicted, and sometimes unwelcome ways. The collision of our oldest skills with our newest tools points toward a radical transformation in human behavior and human culture. The energy released in this collision will empower all of us, threaten many of us, and force some of us to rethink our lives. In some ways, we are finally returning to our tribal roots; in other ways, we are, at long last, becoming a global family.
After two hundred years, during which man used machines to amplify his strength, and so shaped the world, we have finally turned that power inward, to reshape ourselves. The Human Network: Sharing, Knowledge and Power in the 21st Century tells the story of this epochal shift in civilization, in behavior, in humanity itself. In its 250 pages, it will paint the compelling and accessible picture of the tremendous changes underway, everywhere, in every nation, to every person, as we all become fully-fledged actors in the human network.
One of the things I find the most exhilarating about Australia is the relative shallowness of its social networks. Where we’re accustomed to hearing about the “six degrees of separation” which connect any two individuals on Earth, in Australia we live with social networks which are, in general, about two levels deep. If I don’t know someone, I know someone who knows that someone.
While this may be slightly less true across the population as a whole (I may not know a random individual living in Kalgoorlie, and might not know someone who knows them) it is specifically quite true within any particular professional domain. After four years living in Sydney, attending and speaking at conferences throughout the nation, I’ve met most everyone involved in the so-called “new” media, and a great majority of the individuals involved in film and television production.
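If you want to see what “two levels deep” means in concrete terms, it’s easy to sketch. In this little Python model (the names and friendships are invented, purely for illustration), a breadth-first search confirms that no one in a toy professional network sits more than two introductions away from anyone else:

```python
from collections import deque

# A toy professional network, invented for illustration: "ben" plays the
# well-connected hub that a small national scene always seems to supply.
FRIENDS = {
    "mark": {"anna", "ben"},
    "anna": {"mark", "ben"},
    "ben": {"mark", "anna", "carla", "dev"},
    "carla": {"ben"},
    "dev": {"ben"},
}

def degrees_of_separation(graph, a, b):
    """Breadth-first search: how many introductions separate a from b."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        person, hops = queue.popleft()
        if person == b:
            return hops
        for friend in graph[person]:
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return None  # not connected at all

# In this network, no pair is more than two hops apart.
widest_gap = max(degrees_of_separation(FRIENDS, a, b)
                 for a in FRIENDS for b in FRIENDS)
```

One well-connected hub is all it takes to collapse a whole scene down to two degrees – which is precisely the structure a small, gossipy country produces.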
The most consequential of these connections sit in my address book, my endless trail of email, and my ever-growing list of Facebook friends. These connections evolve into relationships as we bat messages back and forth: emails and text messages, and links to the various interesting tidbits we find, filter and forward to those we imagine will gain the most from this informational hunting & gathering. Each transmission reinforces the bond between us – or, if I’ve badly misjudged you, ruptures that bond. The more we share with each other, the stronger the bond becomes. It becomes a covert network: invisible to the casual observer, but resilient and increasingly important to each of us. This is the network that carries gossip – Australians are great gossipers – as well as insights, opportunities, and news of the most personal sort.
In a small country, even one as geographically dispersed as Australia, this means that news travels fast. This is interesting to watch, and terrifying to participate in, because someone’s outrageous behavior is shared very quickly through these networks. Consider Roy Greenslade’s comments about Andrew Jaspan, at Friday’s “Future of Journalism” conference, which made their way throughout the nation in just a few minutes, via “live” blogs and texts, getting star billing in Friday’s Crikey. While Greenslade damned Jaspan, I was trapped in studio 21 at ABC Ultimo, shooting The New Inventors, yet I found out about his comments almost the moment I walked off set. Indeed, connected as I am to individuals such as Margaret Simmons and Rosanne Bersten (both of whom were at the conference) it would have been more surprising if I hadn’t learned about it.
All of this means that we Australians are under tremendous pressure to play nice – at least in public. Bad behavior (or, in this case, a terrifyingly honest assessment of a colleague’s qualifications) so excites the network of connections that it propagates immediately. And, within our tight little professional social networks, we’re so well connected that it propagates ubiquitously. Everyone to whom Greenslade’s comments were salient heard about them within a few minutes after he uttered them. There was a perfect meeting between the message and its intended audience.
That is a new thing.
Over the past few months, I have grown increasingly enamoured with one of the newest of the "Web 2.0" toys, a site known as Twitter. Twitter originally billed itself as a "micro-blogging" service: you post messages ("tweets", in Twitter parlance) of no more than 140 characters, and these tweets are distributed to a list of "followers". Conversely, you receive the tweets created by all of the individuals whom you "follow". One of the beauties of Twitter is that it is multi-modal: you can send a tweet via text message, through a web page, or from an ever-growing range of third-party applications. Twitter's open interface makes it very easy for a bright young programmer to access its servers – which means people are now doing all sorts of interesting things with Twitter.
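Under the hood, a third-party Twitter client is just a program making HTTP requests against Twitter's servers. The sketch below is illustrative only: the endpoint path mirrors Twitter's early `statuses/update` REST call, but authentication is omitted and nothing is actually sent over the network – the function simply enforces the 140-character limit and prepares the request a client would POST.

```python
# Illustrative sketch of what a third-party Twitter client does before
# posting a tweet. The endpoint name follows Twitter's early REST API;
# authentication is omitted, and no request is actually sent.
from urllib.parse import urlencode

TWEET_LIMIT = 140  # Twitter's hard cap on message length
UPDATE_ENDPOINT = "https://twitter.com/statuses/update.json"

def build_update(message: str):
    """Validate a tweet and return the (url, form-encoded body) to POST."""
    if len(message) > TWEET_LIMIT:
        raise ValueError(
            f"tweet is {len(message)} chars; limit is {TWEET_LIMIT}")
    return UPDATE_ENDPOINT, urlencode({"status": message}).encode("utf-8")

# Prepare (but do not send) an update:
url, body = build_update("Question: roughly how many Twitter users are there?")
```

It is precisely this low barrier – a single validated POST request – that let the ecosystem of desktop clients, mobile tools, and mashups like TweetWheel flourish around the service.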
At the moment, Twitter is still in the domain of the early-adopters. Worldwide, there are only about a million Twitter users, with about 200,000 active in any week – and these folks are sending an average of three million tweets a day. That may not sound like many people, but these 200,000 “Twitteratti” are among the thought-leaders in new media. Their influence is disproportionate. They may not include the CIOs of the largest institutions in the world, but they do include the folks whom those CIOs turn to for advice. And whom do these thought-leaders turn to for advice? Twitter.
A simple example: When I sat down to write this, I had no idea how many Twitter users there are at present, so I posted the following tweet:
Question: Does anyone know how many Twitter users (roughly) there are at present? Thanks!
Within a few minutes, Stilgherrian (who writes for Crikey) responded with the following:
There are 1M+ Twitter users, with 200,000 active in any week.
Stilgherrian also passed along a link to his blog where he discusses Twitter’s statistics, and muses upon his increasing reliance on the service.
Before I asked the Twitteratti my question, I did the logical thing: I searched Google. But Google didn’t have any reasonably recent results – the most recent dated from about a year ago. No love from Google. Instead, I turned to my 250-or-so Twitter followers, and asked them. Given my own connectedness in the new media community in Australia, I have, through Twitter, access to an enormous reservoir of expertise. If I don’t know the answer to a question – and I can’t find an answer online – I do know someone, somewhere, who has an answer.
Twitter, gossipy, noisy, inane and frequently meaningless, acts as my 21st-century brain trust. With Twitter I have immediate access to a broad range of very intelligent people, whose interests and capabilities overlap mine enough that we can have an interesting conversation, but not so completely that we have nothing to share with one another. Twitter extends my native capability by giving me a high degree of continuous connectivity with individuals who complement those capabilities.
That’s a new thing, too.
William Gibson, the science fiction author and keen social observer, once wrote, "The street finds its own uses for things, uses the manufacturers never intended." The true test of the value of any technology is, "Does the street care?" In the case of Twitter, the answer is a resounding "Yes!" This personal capacity enhancement – or, as I phrase it, "hyperempowerment" – is not at all what Twitter was designed to do. It was designed to facilitate the posting of short, factual messages. Harvesting the expertise of my oh-so-expert social network is a behavior that grew out of my continued interactions with Twitter. It wasn't planned for, either by Twitter's creators or by me. It just happened. Not every Twitter user puts Twitter to this use, but some people who see what I'm doing will copy my behavior, because it is successful. (The behavior probably didn't originate with me, though I experienced a penny-drop moment when I realized I could harvest expertise from my social network this way.) It will quickly replicate, until it's a bog-standard expectation of all Twitter users.
On Monday morning, before I sat down to write, I checked the morning's email. Several had come in from individuals in the US, including one from my friend GregoryP, who spent the last week sweating through the creation of a presentation on the value of social media. As many of you know, companies often hire outside consultants, like GregoryP, when the boss needs to hear something that his or her underlings are too afraid to say themselves. Such was the situation that GregoryP walked into, with sadly familiar results. From his blog:
As for that “secret” company – it seems fairly certain to me that I won’t be working for any dot-com pure plays in the near future. As I touched on in my Twitter account, my presentation went well but the response to it was something more than awful. As far as I could tell, the generally-absent Director of the Company wasn’t briefed on who I was or why I was there, exactly – she took the opportunity to impugn my credibility and credentials and more or less acted as if I’d tried to rip her company off.
I immediately read GregoryP’s Twitter stream, to find that he had been used, abused and insulted by the MD in question.
Which was a big, big mistake.
GregoryP is not very well connected on Twitter. He's only just started using it. A fun little website, TweetWheel, shows all nine of his connections. But two of his connections – to Raven Zachary and myself – open into a much, much wider world of Twitteratti. Raven has over 600 people following his tweets, and I have over 250 followers. Both of us are widely-known, well-connected individuals. Both of us are good friends with GregoryP. And both of us are really upset at the bad treatment he received.
Here’s how GregoryP finished off that blog post:
Let’s just say it’ll be a cold day in hell before I offer any help, friendly advice or contacts to these people. I’d be more specific about who they are but I wouldn’t want to give them any more promotion than I already have.
What’s odd here – and a sign that the penny hasn’t really dropped – is that GregoryP doesn’t really understand that “promotion” isn’t so much a beneficial influence as a chilling threat to lay waste to this company’s business prospects. This MD saw GregoryP standing before her, alone and defenseless, bearing a message that she was of no mind to receive, despite the fact that her own staff set this meeting up, for her own edification.
What this unfortunate MD did not see – because she does not "get" social media – was Raven and myself, directly connected to GregoryP. Nor did she see the hundreds of people we connect directly to, nor the tens of thousands connected directly to them. She thought she was throwing her weight around. She was wrong. She was making an ass out of herself, behaving very badly in a world where bad behavior is very, very hard to hide.
All GregoryP need do, to deliver the coup de grace, is reveal the name of the company in question. As word spread – that is, nearly instantaneously – that company would find it increasingly difficult to recruit good technology consultants, programmers, and technology marketers, because we all share our experiences. Sharing our experiences improves our effectiveness, and prevents us from making bad decisions. Such as working with this as-yet-unnamed company.
The MD walked into this meeting believing she held all the cards; in fact, GregoryP is the one with his finger poised over the launch button. With just a word, he could completely ruin her business. This utter transformation in power politics – “hyperconnectivity” leading to hyperempowerment – is another brand new thing. This brand new thing is going to change everything it touches, every institution and every relationship any individual brings to those institutions. Many of those institutions will not survive, because their reputations will not be able to withstand the glare of hyperconnectivity backed by the force of hyperempowerment.
The question before us today is not, “Who is the audience?”, but rather, “Is there anyone who isn’t in the audience?” As you can now see, a single individual – anywhere – is the entire audience. Every single person is now so well-connected that anything which happens to them or in front of them reaches everyone it needs to reach, almost instantaneously.
This newest of new things has only just started to rise up and flex its muscles. The street, ever watchful, will find new uses for it, uses that corporations, governments and institutions of every stripe will find incredibly distasteful, chaotic, and impossible to manage.
I moved to San Francisco in 1991, because I wanted to work in the brand-new field of virtual reality, and San Francisco was the epicenter of all commercial development in VR. The VR community came together for meetings of the Virtual Reality Special Interest Group at San Francisco’s Exploratorium, the world-famous science museum. These meetings included public demonstrations of the latest VR technology, interviews with thought-leaders in the field, and plenty of opportunity for networking. At one of the first of those meetings I met a man who impressed me by his sheer ordinariness. He was an accountant, and although he was enthusiastic about the possibilities of VR, he wasn’t working in the field – he was simply interested in it. Still, Craig Newmark was pleasant enough, and we’d always engage in a few lines of conversation at every meeting, although I can’t remember any of these conversations very distinctly.
Newmark met a lot of people – he was an excellent networker – and fairly quickly built up a nice list of email addresses for his contacts, whom he kept in touch with through a mailing list. This list, known as "Craig's List", became a de facto bulletin board for the core web and VR communities in San Francisco. People would share information about events in town, or observations, or – more frequently – they'd offer up something for sale, like a used car or a futon or an old telly.
As more people in San Francisco were sucked into the growing set of businesses which were making money from the Web, they too started reading Craig's List, and started contributing to it. By the middle of 1995, there was too much content to be handled neatly in a mailing list, so Newmark – who, like nearly everyone else in the San Francisco Web community, had some basic web authoring skills – created a very simple site which allowed people to post their own listings directly to the Web. Newmark offered this service freely – his way of saying "thank you" to the community, and, equally important, his way of reinforcing all of the social relationships he'd built up in the last few years.
Newmark’s timing was excellent; Craigslist came online just as many, many people in San Francisco were going onto the Web, and Craigslist quickly became the community bulletin board for the city. Within a few months you could find a flat for rent, a car to drive, or a date – all in separate categories, neatly organized in the rather-ugly Web layout that characterized nearly all first-generation websites. If you had a car to sell, a flat to sublet, or you wanted a date – you went to Craigslist first. Word of mouth spread the site around, but what kept it going was the high quality of the transactions people had through the site. If you sold your bicycle through Craigslist, you’d be more likely to look there first if you wanted to buy a moped. Each successful transaction guaranteed more transactions, and more success, and so on, in a “virtuous cycle” which quickly spread beyond San Francisco to New York, Los Angeles, Seattle, and other well-connected American cities.
From the very beginning, everything on Craigslist was freely available – it cost nothing to list an item or to view listings. The only thing Newmark ever charged for was job listings – one of the most active areas on Craigslist, particularly in the heyday of the Web bubble. Job listings alone paid for all of the rest of the operational costs of Craigslist – and left Newmark with a healthy profit, which he reinvested into the business, adding capacity and expanding to other cities across America. Within a few years, Newmark had a staff of nine people, all working out of a house in San Francisco's Sunset District – which, despite its name, is nearly always foggy.
While I knew about Craigslist – it was hard not to – I didn’t use it myself until 2000, when I left my professorial housing at the University of Southern California. I was looking for a little house in the Hollywood Hills – a beautiful forested area in the middle of the city. I went onto Craigslist and soon found a handful of listings for house rentals in the Hollywood Hills, made some calls and – within about 4 hours – had found the house of my dreams, a cute little Swiss cottage that looked as though it fell out of the pages of “Heidi”. I moved in at the beginning of June 2000, and stayed there until I moved to Sydney in 2003. It was perhaps the nicest place I’d ever lived, and I found it – quickly and efficiently – on Craigslist. My landlord swore by Craigslist; he had a number of properties, scattered throughout the Hollywood Hills, and always used Craigslist to rent his properties.
In late 2003, when I first came to Australia on a consulting contract – and before I moved here permanently – I used Craigslist again, to find people interested in sub-letting my flat while I worked in Sydney. Within a few days, I had the couple who’d created Dora the Explorer – a very popular children’s television show – living in my house, while they pursued a film deal with a major studio. When I came back to Los Angeles to settle my affairs, I sold my refrigerator on Craigslist, and hired a fellow to move the landlord’s refrigerator back into my flat – on Craigslist.
In most of the United States, Craigslist is the first stop for people interested in some sort of commercial transaction. It is now the 65th busiest website in the world, the 10th busiest in the United States – putting it up there with Yahoo!, Google, YouTube, MSN and eBay – and has about nine billion page views a month. None of the pages have advertising, nor are there any charges, except for job listings (and real estate listings in New York to keep unscrupulous realtors from flooding Craigslist with duplicate postings). Although it is still privately owned, and profits are kept secret, it’s estimated that Craigslist earns as much as USD $150 million from its job listings – while, with a staff of just 24 people, it costs perhaps a few million a year to keep the whole thing up and running. Quite a success story.
But everything has a downside. Craigslist has had an extraordinary effect on the entire publishing industry in North America. Newspapers, which funded their expensive editorial operations from the “rivers of gold” – car advertisements, job listings and classified ads – have found themselves completely “hollowed out” by Craigslist. Although the migration away from print to Craigslist began slowly, it has accelerated in the last few years, to the point where most people, in most circumstances will prefer to place a free listing in Craigslist than a paid listing in a newspaper. The listing will reach more people, and will cost them nothing to do so. That is an unbeatable economic proposition – unless you’re a newspaper.
It’s estimated that upwards of one billion dollars a year in advertising revenue is being lost to the newspapers because of Craigslist. This money isn’t flowing into Craig Newmark’s pocket – or rather, only a small amount of it is. Instead, because the marginal cost of posting an ad to Craigslist is effectively zero, Newmark is simply using the disruptive quality of pervasive network access to completely undercut the newspapers, while, at the same time, providing a better experience for his customers. This is an unbeatable economic proposition, one which is making Newmark a very rich man, even while it drives the Los Angeles Times ever closer to bankruptcy.
This is not Newmark’s fault, even if it is his doing. Newmark had the virtue of being in the right place (San Francisco) at the right time (1995) with the right idea (a community bulletin board). Everything that happened after that was driven entirely by the community of Craigslist’s users. This is not to say that Newmark isn’t incredibly responsive to the needs of the Craigslist community – he is, and that responsiveness has served him well as Craigslist has grown and grown. But if Newmark hadn’t thought up this great idea, someone else would have. Nothing about Craigslist is even remotely difficult to create. A fairly ordinary web designer would be able to duplicate Craigslist’s features and functionality in less than a week’s worth of work. (But why bother? It already exists.) Newmark was servicing a need that no one even knew existed until after it had been created. Today, it seems perfectly obvious.
In a pervasively networked world, communities are fully empowered to create the resources they need to manage their lives. This act of creation happens completely outside of the existing systems of commerce (and copyright) that have formed the bulwarks of industrial age commerce. If an entire business sector gets crushed out of existence as a result, it’s barely even noticed by the community. This incredible empowerment – which I term “hyperempowerment” – is going to be one of the dominant features of public life in the 21st century. We have, as individuals and as communities, been gifted with incredible new powers – really, almost mutant ‘super powers’. We use them to achieve our own ends, without recognizing that we’ve just laid a city to waste.
Craigslist has not taken off in Australia. There are Craigslist sites for the “five capital cities” of Australia, but they’re only very infrequently visited. And, because they are only infrequently visited, they haven’t been able to build up enough content or user loyalty to create the virtuous cycle which has made Craigslist such a success in the United States. Why is this? It could be that the Trading Post has already got such a hold on the mindset of Australians that it’s the first place they think to place a listing. The Trading Post’s fees are low (fifty cents for a single non-car item), and it’s widely recognized, reaches a large community, etc. So that may be one reason.
Still, organizations like Fairfax and NEWS are scared to death of Craigslist. Back in 2004, Fairfax Digital launched Cracker.com.au, which provides free listings for everything except cars and jobs, which point back into the various paid advertising Fairfax websites. Australian newspaper publishers have already consigned classified advertising to the dustbin of history; they’re just waiting for the axe to fall. When it does, the Trading Post – among the most valuable of Telstra/Sensis properties – will be almost entirely worthless. Telstra’s stockholders will scream, but the Australian public at large won’t care – they’ll be better served by a freely available resource which they’ve created and which they use to improve their business relations within Australia.
Case Two: Listings
In order to preserve business confidentiality, I won’t mention the name of my first Australian client, but they’re a well-known firm, publishers of traveler’s guides. The travel business, when I came to it in early 2006, was nearly unchanged from its form of the last fifty years: you send a writer to a far-away place, where they experience the delights and horrors of life, returning home to put it all into a manuscript which is edited, fact-checked, copy-edited, typeset, published and distributed. Book publishing is a famously human-intensive process – it takes an average of eighteen months for a book from a mainstream publisher to reach the marketplace, because each of these steps takes time, effort and a lot of dollars. Nevertheless, a travel guide might need to be updated only twice a decade, and with global distribution it has always been fairly easy to recover the investment.
When I first met with my client, they wanted to know what might figure into the future of publishing. It turns out they knew the answer better than I did: they quickly pointed me to a new website, TripAdvisor.com. Although it is a for-profit website – earning money from bookings made through it – the various reviews and travel information provided on TripAdvisor.com are “user generated content,” that is, provided by folks who use TripAdvisor.com. Thus, a listing for a particular hotel will contain many reviews from people who have actually stayed at the hotel, each of whom have their own peccadilloes, needs, and interests. Reading through a handful of the reviews for any given hotel will give you a fairly rounded idea of what the establishment is really like.
This model of content creation and distribution is the exact opposite of the time-honored model practiced by travel publishers. Instead of an authoritative reviewer, the reviewing task is “crowdsourced” – given over to the community of users – to handle. The theory is that with enough reviews, some cogent body of opinion will emerge. While this seems fanciful on the face of it, it’s been proven time and again that this is an entirely successful model of knowledge production. Wikipedia, for example, has built an entire and entirely authoritative encyclopedia from user contributions – a body of knowledge far larger and at least as accurate as its nearest competitor, Encyclopaedia Britannica.
It’s still common for businesses to distrust user generated content. Movie studios nicknamed it “loser generated content”, even as their audiences turned from the latest bloated blockbuster toward YouTube. Britannica pooh-poohed Wikipedia, until an article in Nature, that bastion of scientific reporting, indicated that, on average, a Wikipedia article was nearly as accurate as a given article in Britannica. (This report came out in December 2005. Today, it’s likely an article in Wikipedia would be more accurate than an article in Britannica.) In short, businesses reject the “wisdom of crowds” at their peril.
We’ve only just discovered that a well-networked body politic has access to deep reservoirs of very specific knowledge; in some peculiar way, we are all boffins. We might be science boffins, or knitting boffins, or gearheads, or simply know everything that’s ever been said about Stoner Rock. It doesn’t matter. We all have passions, and now that we have a way of sharing these passions with the world-at-large, this “collective intelligence” far outclasses any professional organization seeking to serve up little slices of knowledge. This is a general challenge confronting all businesses and institutions in the 21st century. It’s quite commonplace today for a patient to walk into a doctor’s surgery knowing more about the specifics of an illness than the doctor does; this “Wikimedicine” is disparaged by medical professionals – but the truth is that an energized and well-networked community generally does serve its members better than any particular professional elite.
So what to do about travel publishing in the era of TripAdvisor.com, WikiTravel (another source of user-generated tourist information), and so on? How can a business possibly hope to compete with the community it hopes to profitably serve? When the question is put like this, it seems insoluble. But that simply indicates that the premise is flawed. This is not an us-versus-them situation, and here’s the key: the community, any community, respects expertise that doesn’t attempt to put on the airs of absolute authority. That travel publisher has built up an enormous reservoir of goodwill and brand recognition, and, simply by changing its attitude, could find a profitable way to work with the community. Publishers are no longer treated like Moses, striding down from Mount Sinai, commandments in hand. Publishing is a conversation, a deep engagement with the community of interest, where all parties are working as hard as they can to improve the knowledge and effectiveness of the community as a whole.
That simple transition from shoveling books out the door to building knowledge with a community has far-reaching consequences. The business must refashion its own editorial processes and sensibilities around the community. Some of the job of winnowing the wheat from the chaff must be handed to the community, because there’s far too much for the editors to handle on their own. Yet the editors must be able to identify the best work of the community, and give that work pride of place, in order to improve the perceived value of their role within the community.
Does this mean that the travel guide book is dead? Unlike a website, a book is neither dynamic nor flexible. But neither does a book need batteries or an internet connection. Books have evolved through half a millennium of use into something that we find incredibly useful – even when resources are available online, we often prefer to use books. They are comfortable and very portable.
The book itself may be changing. It may not be something that is mass produced in lots of tens of thousands; rather, it may be individually printed for a community member, drawn from their own needs and interests. It represents their particular position and involvement, and is thus utterly personal. The technology for single-run publishing is now widespread; it isn’t terribly expensive to print a single copy of a book. When that book can reflect the best editorial efforts of a brand known for high-quality travel publications plus the very best of the reviews and tips offered by an ever-growing community of travelers, it becomes something greater than the sum of its parts, a document in progress, an on-going evolution toward greater utility. It is an encapsulation of a conversation at a particular moment in time, necessarily incomplete, but, for that reason, intensely valuable.
Conversation is the mode not just for business communications, but for all business in the 21st century. Businesses which cannot seize on the benefits of communication with the communities they serve will simply be swept aside (like newspapers) by communities in conversation. It is better to be in front of that wave, leading the way, than to drown in the riptide. But this is not an easy transition to make. It involves a fundamental rethinking of business practices and economic models. It’s a choice that will confront every business, everywhere, sometime in the next few years.
Case Three: Delisted
My final case study involves a recent client of mine, a very large university in New South Wales. I was invited in by the Director of Communications, to consult on a top-down redesign of the university’s web presence. After considerable effort and expenditure, the university had learned that their website was more-or-less unusable, particularly when compared against its competitors. It took users too many clicks to find the information they wanted, and that information wasn’t collated well, forcing visitors to traverse the site over and over to find the information they might want on a particular program of study. The new design would streamline the site, consolidate resources, and help prospective students quickly locate the information they would need to make their educational decisions.
That was all well and good, but a cursory investigation of web usage at the university indicated a larger and more fundamental problem: students had simply stopped using the online resources provided by the university, beyond the bare minimum needed to register for classes. The university had failed to keep up with innovations in the Web, falling dramatically out-of-step with its student population, who are all deeply engaged in emailing, social networking, blogging, photo sharing, link sharing, video sharing, and crowdsourcing. Even more significantly, the faculty of the university had set up many unauthorized web sites – using university computing resources – to provide web services that the university had not been able to offer. Both students and faculty had “left the farm” in search of the richer pastures found outside the carefully maintained walls of university computing. This collapse in utility has led to a “vicious cycle,” for the less the student or faculty member uses university resources, the less relevant they become, moving in a downward spiral which eventually sees all of the important knowledge creation processes of the university happening outside its bounds.
As the relevant information about the university (except what the university says about itself) escapes the confines of university resources, another serious consequence emerges: search engines no longer put the university at the top of search queries, simply because the most relevant information about the university is no longer hosted by the university. The organization has lost control of the conversation because it neglected to stay engaged in that conversation, tracking where and how its students and faculty were using the tools at hand to engage themselves in the processes of learning and knowledge formation. A Google search on a particular programme at the university could turn up a student’s assessment of the programme as the most relevant result, rather than the university’s authorized page.
This is a bigger problem than the navigability of a website, because it directly challenges the university’s authority to speak for itself. In the United States, the website RateMyProfessors.com has become the bane of all educational institutions, because students log onto the site and provide (reasonably) accurate information about the pedagogical capabilities of their instructors. An instructor who is a great researcher but a lousy teacher is quickly identified on this site, and students steer clear, having learned from their peers the pitfalls of a bad decision. On the other hand, students flock to lectures by the best lecturers, and these professors become hot items, either promoted to stay in place, or lured away by strong counter-offers. The collective intelligence of the community is running the show now, and that voice will only become stronger as better tools are developed to put it to work.
What could I offer as a solution for my client? All I could do was prescribe some bitter medicine. Yes, I told them, go forward with the website redesign – it is both necessary and useful. But I advised them to use that redesign as a starting point for a complete rethink of the services offered by the university. Students should be able to blog, share media, collaborate and create knowledge within the confines of the university – and it should be easier to do that there than anywhere else. Only when the grass is greener in the paddock will the university be able to bring the students and faculty back onto the farm.
Furthermore, I advised the university to create the space for conversation within the university. Yes, some of it will be defamatory, or vile, or just unpleasant to hear. But the alternative – that this conversation happens elsewhere, outside of your ability to monitor and respond to it – would eventually prove catastrophic. Educational institutions everywhere – and all other institutions – are facing similar choices: do they ignore their constituencies or engage with them? Once engaged, how does that change the structure and power flows within their institutions? Can these institutions reorganize themselves, so that they become more permeable, pliable and responsive to the communities which they serve?
Once again, these are not easy questions to answer. They touch on the fundamental nature of institutions of all varieties. A commercial organization has to confront these same questions, though the specifics will vary from organization to organization. The larger an organization grows, the louder the cry for conversation, and the more pressing its need. The largest institutions in Australia are the most vulnerable to this sudden change in attitudes, because it is here that sudden self-organizations within the body politic are most likely to rise and challenge them.
As you can see, the same themes appear and reappear in each of these three case studies. In each case some industry sector or institution confronts a pervasively networked public which can out-think, out-maneuver and massively out-compete an institution which formed in an era before the rise of the network. The balance of power has shifted decisively into the hands of the networked public.
The natural reaction of institutions of all stripes is to resist these changes; institutions are inherently conservative, clinging to what has worked in the past, even when the past is no longer any guide to the future. Let me be very clear on this point: resistance is futile, and worse, the longer you resist, the stronger the force you will confront. If you attempt to dam the tide of change, you will only ensure that the ensuing deluge is that much greater. The pressure is rising; Australia is already pervasively networked, with nearly every adult owning a mobile phone, with massive and growing broadband penetration, and with an increasing awareness that communities can self-organize to serve their own needs.
Something’s got to give. And it’s not going to be the public. They can’t be whipped or cowed or forced back into antique behaviors which no longer make sense to them. Instead, it is up to you, as business leaders, to embrace the public, engaging them in a continuous conversation that will utterly transform the way you do business.
No business is ever guaranteed success, but unless you embrace conversation as the essential business practice of the 21st century, you will find someone else, more flexible and more open, stealing your business away. It might be a competitor, or it might be your customers themselves, fed up with the old ways of doing business, and developing new ways to meet their own needs. Either way, everything is about to change.
Those of you reading this blog – and I know who you are, as I have extensive server logs and data analysis tools – may have noticed that its title recently changed from “hyperpeople” to “the human network”. This was done for two reasons. First, my personal blog is also called “hyperpeople”, and I wanted to disambiguate the two, if only for my own sanity. Second, because the book – and yes, I have finally decided that there will be a book – will be titled The Human Network: Sharing, Knowledge and Power in the 21st Century.
Now all I need to do is get a proposal written, approved by my agent, purchased by a publisher, generate a manuscript, go through an extensive editing, copy-editing and layout process, convince the publisher’s marketing arm that this book will sell beyond their wildest dreams, arrange for press, publicity and a book tour, and sit back and let the tens of dollars roll in. Then we’ll be cooking with gas…