Blue Skies

I: Cloud People

I want to open this afternoon’s talk with a story about my friend Kate Carruthers.  Kate is a business strategist, currently working at Hyro, over in Surry Hills.  In November, while on a business trip to Far North Queensland, Kate pulled out her American Express credit card to pay for a taxi fare.  Her card was declined.  Kate paid with another card and thought little of it until the next time she tried to use the card – this time to pay for something rather pricier, and more important – and found her card declined once again.

As it turned out, American Express had cut Kate’s credit line in half, but hadn’t bothered to inform her of this until perhaps a day or two before, via post.  So here’s Kate, far away from home, with a crook credit card.  Thank goodness she had another card with her, or it could have been quite a problem.  When she contacted American Express to discuss that credit line change – on a Friday evening – she discovered that this ‘consumer’ company kept banker’s hours in its credit division.  That, for Kate, was the last straw.  She began to post a series of messages to Twitter:

“I can’t believe how rude Amex have been to me; cut credit limit by 50% without notice; declined my card while in QLD even though acct paid”

“since Amex just treated me like total sh*t I just posted a chq for the balance of my account & will close acct on Monday”

“Amex is hardly accepted anywhere anyhow so I hardly use it now & after their recent treatment I’m outta there”

“luckily for me I have more than enough to just pay the sucker out & never use Amex again”

“have both a gold credit card & gold charge card with amex until monday when I plan to close both after their crap behaviour”

One after another, Kate sent this stream of messages out to her Twitter followers.  All of her Twitter followers.  Kate’s been on Twitter for a long time – well over three years – and she’s accumulated a lot of followers.  Currently, she has over 8300 followers, although at the time she had her American Express meltdown, the number was closer to 7500.

Let’s step back and examine this for a moment.  Kate is, in most respects, a perfectly ordinary (though whip-smart) human being.  Yet she now has this ‘cloud’ of connections, all around her, all the time, through Twitter.  These 8300 people are at least vaguely aware of whatever she chooses to share in her tweets.  They care enough to listen, even if they are not always listening very closely.  A smaller number of individuals (perhaps a few hundred, people like me) listen more closely.  Nearly all the time we’re near a computer or a mobile, we keep an eye on Kate.  (Not that she needs it.  She’s thoroughly grown up.  But if she ever got into a spot of trouble or needed a bit of help, we’d be on it immediately.)

This kind of connectivity is unprecedented in human history.  We came from villages where perhaps a hundred of us lived close enough together that there were no secrets.  We moved to cities where the power of numbers gave us all a degree of anonymity, but atomized us into disconnected individuals, lacking the social support of a community.  Now we come full circle.  This is the realization of the ‘Global Village’ that Marshall McLuhan talked about fifty years ago.  At the time, McLuhan thought of television as a retribalizing force.  It wasn’t.  But Facebook and Twitter and the mobiles each of us carry with us during all our waking hours?  These are the new retribalizing forces, because they keep us continuously connected with one another, allowing us to manage connections in ever-greater numbers.

Anything Kate says, no matter how mundane, is now widely known.  But it’s more than that.  Twitter is text, but it is also links that can point to images, or videos, or songs, or whatever you can digitize and upload to the Web.  Kate need simply drop a URL into a tweet and suddenly nearly ten thousand people are aware of it.  If they like it, they will send it along (‘re-tweet’ is the technical term), and it will spread out quickly, like waves on a pond.

But Twitter isn’t a one-way street.  Kate is ‘following’ 7250 individuals; that is, she’s receiving tweets from them.  That sounds like a nearly impossible task: how can you pay attention to what that many people have to say?  It’d be like trying to listen to every conversation at Central Station (or Flinders Street Station) at peak hour.  Madness.  And yet, it is possible.  Tools have been created that allow you to keep a finger on the pulse of the madness, to stick a toe into the raging torrent of commentary.

Why would you want to do this?  It’s not something that you need to do (or even want to do) all the time, but there are particular moments – crisis times – when Twitter becomes something else altogether.  After an earthquake or other great natural disaster, after some pivotal (or trivial) political event, after some stunning discovery.  The 5650 people I follow are my connection to all of that.  My connection is broad enough that someone, somewhere in my network is nearly always among the first to know something, and among the first to share what they know.  Which means that I too, if I am paying attention, am among the first to know.

Businesses have been built on this kind of access.  An entire sector of the financial services industry, from Dow Jones to Bloomberg, has thrived because it provides subscribers with information before others have it – information that can be used on a trading floor.  This kind of information comes freely to the very well-connected.  This kind of information can be put to work to make you more successful as an individual, in your business, or in whatever hobbies you might pursue.  And it’s always there.  All you need do is plug into it.

When you do plug into it, once you’ve gotten over the initial confusion, and you’ve dedicated the proper time and tending to your network, so that it grows organically and enthusiastically, you will find yourself with something amazingly flexible and powerful.  Case in point: in December I found myself in Canberra for a few days.  Where to eat dinner in a town that shuts down at 5 pm?  I asked Twitter, and forty-five minutes later I was enjoying some of the best seafood laksa I’ve had in Australia.  A few days later, in the Barossa, I asked Twitter which wineries I should visit – and the top five recommendations were very good indeed.  These may seem like trivial instances – though they’re the difference between a good holiday and a lackluster one – but what they demonstrate is that Twitter has allowed me to plug into all of the expertise of all of the thousands of people I am connected to.  Human brainpower, multiplied by 5650 makes me smarter, faster, and much, much more effective.  Why would I want to live any other way?  Twitter can be inane, it can be annoying, it can be profane and confusing and chaotic, but I can’t imagine life without it, just as I can’t imagine life without the Web or without my mobile.  The idea that I am continuously connected and listening to a vast number of other people – even as they listen to me – has gone from shocking to comfortable in just over three years.

Kate and I are just the leading edge.  Where we have gone, all of the rest of you will soon follow.  We are all building up our networks, one person at a time.  A child born in 2010 will spend their lifetime building up a social network.  They’ll never lose track of any individual they meet and establish a connection with.  That connection will persist unless purposely destroyed.  Think of the number of people you meet throughout your lives, who you establish some connection with, even if only for a few hours.  That number would easily reach into the thousands for every one of us.  Kate and I are not freaks, we’re simply using the bleeding edge of a technology that will be almost invisible and not really worth mentioning by 2020.

All of this means that the network is even more alluring than it was a few years ago, and will become ever more alluring with the explosive growth in social networks.  We are just at the beginning of learning how to use these new social networks.  First we kept track of friends and family.  Then we moved on to business associates.  Now we’re using them to learn, to train ourselves and train others, to explore, to explain, to help and to ask for help.  They are becoming a new social fabric which will knit us together into an unfamiliar closeness.  This is already creating some interesting frictions for us.  We like being connected, but we also treasure the moments when we disconnect, when we can’t be reached, when our time and our thoughts are our own.  We preach focus to our children, but find our time and attention increasingly divided by devices that demand service: email, Web, phone calls, texts, Twitter, Facebook, all of it brand new, and all of it seemingly so important that if we ignore any of them we immediately feel the cost.  I love getting away from it all.  I hate the backlog of email that greets me when I return.  Connecting comes with a cost.  But it’s becoming increasingly impossible to imagine life without it.

II: Eyjafjallajökull

I recently read a most interesting blog post.  Chase Saunders, a software architect and entrepreneur in Maine (not too far from where I was born), had a bit of a brainwave and decided to share it with the rest of the world.  But you may not like it.  Saunders begins with: “For me to get really mad at a company, it takes more than a lousy product or service: it’s the powerlessness I feel when customer service won’t even try to make things right.  This happens to me about once a year.”  Given the number of businesses we all interact with in any given year – both as consumers and as client businesses – this figure is far from unusual.  There will be times when we get poor value for money, or poor service, or a poor response time, or what have you.  The world is a cruel place.  It’s what happens after that cruelty which is important: how does the business deal with an upset customer?  If they fail the upset customer, that’s when problems can really get out of control.

In times past, an upset customer could cancel their account, taking their business elsewhere.  Bad, but recoverable.  These days, however, customers have more capability, precisely because of their connectivity.  And this is where things start to go decidedly pear-shaped.  Saunders gets to the core of his idea:

Let’s say you buy a defective part from ACME Widgets, Inc. and they refuse to refund or replace it.  You’re mad, and you want the world to know about this awful widget.  So you pop over to AdRevenge and you pay them a small amount. Say $3.  If the company is handing out bad widgets, maybe some other people have already done this… we’ll suppose that before you got there, one guy donated $1 and another lady also donated $1.  So now we have 3 people who have paid a total of $5 to warn other potential customers about this sketchy company…the 3 vengeful donations will go to the purchase of negative search engine advertising.  The ads are automatically booked and purchased by the website…

And there it is.  Your customers – your angry customers – have found an effective way to band together and warn every other potential customer just how badly you suck, and will do it every time your name gets typed into a search engine box.  And they’ll do it whether or not their complaints are justified.  In fact, your competitors could even game the system, stuffing it up with lots of false complaints.  It will quickly become complete, ugly chaos.
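The mechanics Saunders describes – small donations pooling against a company until they trigger an automated ad purchase – can be sketched in a few lines of code.  To be clear, ‘AdRevenge’ is a thought experiment, not a real service; the class name, the $5 booking threshold, and the `donate` method below are all invented for illustration.

```python
# Hypothetical sketch of the AdRevenge pooling mechanism described above.
# Nothing here is a real service or API; the threshold and names are
# invented to illustrate Saunders' thought experiment.

from collections import defaultdict

AD_BOOKING_THRESHOLD = 5.00  # pooled dollars that trigger an automated ad buy


class AdRevenge:
    """Pools small 'vengeful donations' against named companies."""

    def __init__(self):
        self.pools = defaultdict(float)  # company name -> pooled dollars
        self.booked = []                 # companies with negative ads booked

    def donate(self, company, amount):
        """Add a donation; book negative search ads once the pool is large enough."""
        self.pools[company] += amount
        if self.pools[company] >= AD_BOOKING_THRESHOLD and company not in self.booked:
            # Stand-in for the automatic purchase of negative search advertising.
            self.booked.append(company)
        return self.pools[company]


site = AdRevenge()
site.donate("ACME Widgets", 1.00)          # one guy donated $1
site.donate("ACME Widgets", 1.00)          # another lady also donated $1
total = site.donate("ACME Widgets", 3.00)  # you arrive, angry, with $3
print(total, site.booked)
```

Note how little machinery is required: the whole threat rests on aggregation, not sophistication, which is why banning any one implementation accomplishes so little.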

You’re probably all donning your legal hats, and thinking about words like ‘libel’ and ‘defamation’.  Put all of that out of your mind.  The Internet is extraterritorial, and effectively ungovernable, despite all of the neat attempts of governments from China to Iran to Australia to stuff it back into some sort of box.  Ban AdRevenge somewhere, it pops up somewhere else – just as long as there’s a demand for it.  Other countries – perhaps Iceland or Sweden, and certainly the United States – don’t have the same libel laws as Australia, yet their bits freely enter the nation over the Internet.  There is no way to stop AdRevenge or something very much like AdRevenge from happening.  No way at all.  Resign yourself to this, and embrace it, because until you do you won’t be able to move on, into a new type of relationship with your customers.

Which brings us back to our beginning, and a very angry Kate Carruthers.  Here she is, on a Friday night in Far North Queensland, spilling quite a bit of bile out onto Twitter.  Every one of the 7500 people who read her tweets will bear her experience in mind the next time they decide whether they will do any business with American Express.  This is damage, probably great damage, to the reputation of American Express – damage that could have been avoided, or at least remediated, before Kate ‘went nuclear’.

But where was American Express when all of this was going on?  While Kate expressed her extreme dissatisfaction with American Express, its own marketing arm was busily cooking up a scheme to harness Twitter.  Its Open Forum Pulse website shows you tweets from small businesses around the world.  Ironic, isn’t it?  American Express builds a website to show us what others are saying on Twitter, all the while ignoring what’s being said about it.  So the fire rages, uncontrolled, while American Express fiddles.

There are other examples.  On Twitter, one of my friends lauded the new VAustralia Premium Economy service to the skies, while VAustralia ran some silly marketing campaign that had four blokes sending three thousand tweets over two days in Los Angeles.  Sure, I want to tune into that stream of dreck and drivel.  That’s exactly what I’m looking for in the age of information overload: more crap.

This is it, the fundamental disconnect, the very heart of the matter.  We all need to do a whole lot less talking, and a whole lot more listening.  That’s true for each of us as individuals: we’re so well-connected now that by the time we do grow into a few thousand connections we’d be wiser listening than speaking, most of the time.  But this is particularly true for businesses, which make their living dealing with customers.  The relationship between businesses and their customers has historically been characterized by a ‘throw it over the wall’ attitude.  There is no wall, anywhere.  The customer is sitting right beside you, with a megaphone pointed squarely into your ear.

If we were military planners, we’d call this ‘asymmetric warfare’.  Instead, we should just give it the name it rightfully deserves: 21st-century business.  It’s a battlefield out there, but if you come prepared for a 20th-century conflict – massive armies and big guns – you’ll be overrun by the fleet-footed and omnipresent guerilla warfare your customers will wage against you – if you don’t listen to them.  Like volcanic ash, it may not present a solid wall to prevent your progress.  But it will jam up your engines, and stop you from getting off the ground.

Listening is not a job.  There will be no ‘Chief Listening Officer’, charged with keeping an ear to the ground, wondering if the natives are becoming restless, ready to sound the alarm when a situation threatens to go nuclear.  There is simply too much to listen to, happening everywhere, all at once.  Any single point which presumed to do the listening for an entire organization – whether an individual or a department – will simply be overwhelmed, drowning in the flow of data.  Listening is not a job: it is an attitude.  Every employee, from the most recently hired through to the Chief Executive, must learn to listen.  Listen to what is being said internally (therein lies the path to true business success) and learn to listen to what others, outside the boundaries of the organization, are saying about you.

Employees already regularly check into their various social networks.  Right now we think of that as ‘slacking off’, not something that we classify as work.  But if we stretch the definition just a bit, and begin to recognize that the organization we work for is, itself, part of our social network, things become clearer.  Someone can legitimately spend time on Facebook, looking for and responding to issues as they arise.  Someone can be plugged into Twitter, giving it continuous partial attention all day long, monitoring and soothing customer relationships.  And not just someone.  Everyone.  This is a shared responsibility.  Working for the organization means being involved with and connected to the organization’s customers, past, present and future.  Without that connection, problems will inevitably arise, will inevitably amplify, will inevitably result in ‘nuclear events’.  Any organization (or government, or religion) can only withstand so many nuclear events before it begins to disintegrate.  So this isn’t a matter of choice.  This is a basic defensive posture.  An insurance policy, of sorts, protecting you against those you have no choice but to do business with.

Yet this is not all about defense.  Listening creates opportunity.  I get some of my best ideas – such as that AdRevenge article – because I am constantly listening to others’ good ideas.  Your customers might grumble, but they also praise you for a job well done.  That positive relationship should be honored – and reinforced.  As you reinforce the positive, you create a virtuous cycle of interactions which becomes terrifically difficult to disrupt.  When that’s gone on long enough, and broadly enough, you have effectively raised up your own army – in the post-modern, guerilla sense of the word – who will go out there and fight for you and your brand when the haters and trolls and chaos-makers bear down upon you.  These people are connected to you, and will connect to one another because of the passion they share around your products and your business.  This is another network, an important network, an offensive network, and you need both defensive and offensive strategies to succeed on this playing field.

Just as we as individuals are growing into hyperconnectivity, so our businesses must inevitably follow.  Hyperconnected individuals working with disconnected businesses is a perfect recipe for confusion and disaster.  Like must meet with like before the real business of the 21st-century can begin.

III: Services With a Smile

Moving from the abstract to the concrete, let’s consider the types of products and services required in our densely hyperconnected world.  First and foremost, we are growing into a pressing, almost fanatical need for continuous connectivity.  Wherever we are – even in airplanes – we must be connected.  The quality of that connection – its speed, reliability, and cost – are important co-factors to consider, and it is not always the cheapest connection which serves the customer best.  I pay a premium for my broadband connection because I can send the CEO of my ISP a text any time my link goes down – and my trouble tickets are sorted very rapidly!  Conversely, I went with a lower-cost carrier for my mobile service, and I am paying the price, with missed calls, failed data connections, and crashes on my iPhone.

As connectivity becomes more important, reliability crowds out other factors.  You can offer a premium quality service at a premium price and people will adopt it, for the same reason they will pay more for a reliable car, or for electricity from a reliable supplier, or for food that they’re sure will be wholesome.  Connectivity has become too vital to threaten.  This means there’s room for healthy competition, as providers offer different levels of service at different price points, competing on quality, so that everyone gets the level of service they can afford.  But uptime always will be paramount.

What service, exactly, is on offer?  Connectivity comes in at least two flavors: mobile and broadband.  These are not mutually exclusive.  When we’re stationary we use broadband; when we’re in motion we use mobile services.  The transition between these two networks should be as invisible and seamless as possible – as pioneered by Apple’s iPhone.

At home, in the office, at the café or library, in fact, in almost any structure, customers should have access to wireless broadband.  This is one area where Australia noticeably trails the rest of the world.  The tariff structure for Internet traffic has led Australians to be unusually conservative with their bits, because there is a specific cost incurred for each bit sent or received.  While this means that ISPs should always have the funding to build out their networks to handle increases in capacity, it has also meant that users protect their networks from use in order to keep costs down.  This fundamental dilemma has subjected wireless broadband in Australia to a subtle strangulation.  We do not have the ubiquitous free wireless access that many other countries – in particular, the United States – have on offer, and this consequently alters our imagination of the possibilities for ubiquitous networking.

Tariffs are now low enough that customers ought to be encouraged to offer wireless networking to the broader public.  There are some security concerns that need to be addressed to make this safe for all parties, but these are easily dealt with.  There is no fundamental barrier to pervasive wireless broadband.  It does not compete with mobile data services.  Rather, as wireless broadband becomes more ubiquitous, people come to rely on continuous connectivity ever more.  Mobile data demand will grow in lockstep as more wireless broadband is offered.  Investment in wireless broadband is the best way to ensure that mobile data services continue to grow.

Mobile data services are best characterized principally by speed and availability.  Beyond a certain point – perhaps a megabit per second – speed is not an overwhelming lure on a mobile handset.  It’s nice but not necessary.  At that point, it’s much more about provisioning: how will my carrier handle peak hour in Flinders Street Station (or Central Station)?  Will my calls drop?  Will I be able to access my cloud-based calendar so that I can grab a map and a phone number to make dinner reservations?  If a customer finds themselves continually frustrated in these activities, one of two things will happen: either the mobile will go back into the pocket, more or less permanently, or the customer will change carriers.  Since the customer’s family, friends and business associates will not be putting their own mobiles back into their pockets, it is unlikely that any customer will do so for any length of time, irrespective of the quality of their mobile service.  If the carrier will not provision, the customers must go elsewhere.

Provisioning is expensive.  But it is also the only sure way to retain your customers.  A customer will put up with poor customer service if they know they have reliable service.  A customer will put up with a higher monthly spend if they have a service they know they can depend upon in all circumstances.  And a customer will quickly leave a carrier who cannot be relied upon.  I’ve learned that lesson myself.  Expect it to be repeated, millions of times over, in the years to come, as carriers, regrettably and avoidably, find that their provisioning is inadequate to support their customers.

Wireless is wonderful, and we think of it as a maintenance-free technology, at least from the customer’s point of view.  Yet this is rarely so.  Last month I listened to a talk by Genevieve Bell, Intel Fellow and Lead Anthropologist at the chipmaker.  Her job is to spend time in the field – across Europe and the developing world – observing  how people really use technology when it escapes into the wild.  Several years ago she spent some time in Singapore, studying how pervasive wireless broadband works in the dense urban landscape of the city-state.  In any of Singapore’s apartment towers – which are everywhere – nearly everyone has access to very high speed wired broadband (perhaps 50 megabits per second) – which is then connected to a wireless router to distribute the broadband throughout the apartment.  But wireless is no great respecter of walls.  Even in my own flat in Surry Hills I can see nine wireless networks from my laptop, including my own.  In a Singapore tower block, the number is probably nearer to twenty or thirty.

Genevieve visited a family who had recently purchased a wireless printer.  They were dissatisfied with it, pronouncing it ‘possessed’.  “What do you mean?” she inquired.  Well, they explained, it doesn’t print what they tell it to print.  But it does print other things.  Things they never asked for.  The family called for a grandfather to come over and practice his arts of feng shui, hoping to rid the printer of its evil spirits.  The printer, now repositioned to a more auspicious spot, still misbehaved.  A few days later, a knock came on the door.  Outside stood a neighbor, a sheaf of paper in his hands, saying, “I believe these are yours…?”

The neighbor had also recently purchased a wireless printer, and it seems that these two printers had automatically registered themselves on each other’s networks.  Automatic configuration makes wireless networks a pleasure to use, but it also makes for botched configurations and flaky communication.  Most of this is so far outside the skill set of the average consumer that these problems will never be properly remedied.  The customer might make a support call, and maybe – just maybe the problem will be solved.  Or, the problem will persist, and the customer will simply give up.  Even with a support call, wireless networks are often so complex that the problem can’t be wholly solved.

As wireless networks grow more pervasive, Genevieve Bell recommends that providers offer a high-quality hand-holding and diagnostic service to their customers.  They need to offer a ‘tune up’ service that will travel to the customer once a year to make sure everything is running well.  Consumers need to be educated that wireless networks do not come for free.  Like anything else, they require maintenance, and the consumer should come to expect that it will cost them something, every year, to keep it all up and running.  In this, a wireless network is no different than a swimming pool or a lawn.  There is a future for this kind of service: if you don’t offer it, your competitors soon will.

Finally, let me close with what the world looks like when all of these services are working perfectly.  Lately, I’ve become a big fan of Foursquare, a ‘location-based social network’.  Using the GPS on my iPhone, Foursquare allows me to ‘check in’ when I go to a restaurant, a store, or almost anywhere else.  Once I’ve checked in, I can make a recommendation – a ‘tip’ in Foursquare lingo – or simply look through the tips provided by those who have been there before me.  This list of tips is quickly growing longer, more substantial, and more useful.  I can walk into a bar that I’ve never been to before and know exactly which cocktail I want to order.  I know which table at the restaurant offers the quietest corner for a romantic date.  I know which salesperson to talk to for a good deal on that mobile handset.  And so on.  I have immediate and continuous information in depth, and I put that information to work, right now, to make my life better.

The world of hyperconnectivity isn’t some hypothetical place we’ll never see.  We are living in it now.  The seeds of the future are planted in the present.  But the shape of the future is determined by our actions today.  It is possible to blunt and slow Australia’s progress into this world with bad decisions and bad services.  But it is also possible to thrust the nation into global leadership if we can embrace the inevitable trend toward hyperconnectivity, and harness it.  It has already transformed our lives.  It will transform our businesses, our schools, and our government.  You are the carriers of that change.  Your actions will bring this new world into being.

What Ever Happened to the Book?

For Ted Nelson

I: Centrifugal Force

We live in the age of networks.  Wherever we are, five billion of us are continuously and ubiquitously connected.  That’s everyone over the age of twelve who earns more than about two dollars a day.  The network has us all plugged into it.  Yet this is only the more recent, and more explicit network.  Networks are far older than this most modern incarnation; they are the foundation of how we think.  That’s true at the most concrete level: our nervous system is a vast neural network.  It’s also true at a more abstract level: our thinking is a network of connections and associations.  This is necessarily reflected in the way we write.

I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982.  Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts.  Like it or not, we implicitly reference other texts with every word we write.  It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts.  It’s the secret to our success.  Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth.  He never built it.  But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.

As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed qualities as a medium which had hitherto been unsuspected.  While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure.  Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone.  Do we answer?  Do we click and follow?  A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost.  The linear text is constantly weighed down with a secondary, ‘centrifugal’ force, trying to tear the reader away from the inertia of the text, and on into another space.  The more heavily linked a particular hypertext document is, the greater this pressure.

Consider two different documents that might be served up in a Web browser.  One of them is an article from the New York Times Magazine.  It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links.  Many of these links point back to other New York Times articles.  This article stands alone.  It is a hyperdocument, but it has not embraced the capabilities of the medium.  It has not been seduced.  It is a spinster, of sorts, confident in its purity and haughty in its isolation.  This article is hardly alone.  Nearly all articles I could point to from any professional news source display the same characteristics of separateness and resistance to connecting with the medium they employ.  We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized.  Every link presents an escape route, and a potential loss of income.  Hence, links are kept to a minimum, the losses staunched.  Disappointingly, this has become a model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium.  The tone has been set.

On the other hand, consider an average article in Wikipedia.  It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links.  Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they’re unfamiliar with, or explore some tangential bit of knowledge, but there also will be plenty of links that face out, into the rest of the Web.  This is a hyperdocument which has embraced the nature of the medium, which is not afraid that linkage will lure its readers away.  Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention.  Without this competition of intentions, Wikipedia is itself an example of another variety of purity, the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.
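The contrast between these two kinds of hyperdocument can actually be measured.  A crude sketch, using only Python’s standard library: count the links in a page and divide by its word count.  The two HTML snippets below are invented stand-ins for the sparsely-linked news article and the densely-linked Wikipedia article, not real pages.

```python
# A rough measure of a hyperdocument's 'centrifugal force':
# links per hundred words of text.  Sample documents are invented.

import re
from html.parser import HTMLParser


class LinkCounter(HTMLParser):
    """Collects link count and visible text from an HTML fragment."""

    def __init__(self):
        super().__init__()
        self.links = 0
        self.text = []

    def handle_starttag(self, tag, attrs):
        # Count only anchors that actually point somewhere.
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.links += 1

    def handle_data(self, data):
        self.text.append(data)


def link_density(html):
    """Return (links, words, links per 100 words) for an HTML fragment."""
    parser = LinkCounter()
    parser.feed(html)
    words = len(re.findall(r"\w+", " ".join(parser.text)))
    return parser.links, words, 100.0 * parser.links / max(words, 1)


# A long article with a single link, versus a short, heavily-linked one.
news = "<p>" + ("word " * 200) + '<a href="/related">related</a></p>'
wiki = "<p>" + ('see <a href="/term">term</a> here ' * 20) + "</p>"

print(link_density(news))
print(link_density(wiki))
```

On these toy inputs the ‘news’ page carries well under one link per hundred words, while the ‘wiki’ page carries dozens – a numerical expression of how much harder the second document pulls its reader outward.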

Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug.  Landing somewhere with a paucity of links does not constrain your ability to move non-linearly.  If nothing else, the browser’s ‘Back’ button continually offers that opportunity, as do all of your bookmarks, the links that lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site.  In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads.  In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it.  This is the fundamental tension of all hypertext, but here heightened and amplified because it is resisted and forbidden.  It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.

This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents.  It is most obvious in the way we now absorb news.  Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper.  Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on.  We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way.  The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole.  Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper ‘vehicle’.

The newspaper as we have known it has been shredded.  This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext.  We are the ones who feel the lure of the link; no machine can do that.  Newspapers made the brave decision to situate themselves as islands within a sea of hypertext.  Though they might believe themselves singular, they are not the only islands in the sea.  And we all have boats.  That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.

The lure of the link has a two-fold effect on our behavior.  With its centrifugal force, it is constantly pulling us away from wherever we are.  It also presents us with an opportunity cost.  When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment.  If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it.  Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this?  Does my need and interest outweigh all of the other demands upon my attention?  Can I focus?

In most circumstances, we will decline the challenge.  Whatever it is, it is not salient enough, not alluring enough.  It is not so much that we fear commitment as we feel the pressing weight of our other commitments.  We have other places to spend our limited attention.  This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”.  It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.

The emergence of the ‘tl;dr’ phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span.  Attention spans are not shortening: ten-year-olds will still drop everything to read a nine-hundred-page fantasy novel for eight days.   Instead, attention has entered an era of hypercompetitive development.  Twenty years ago only a few media clamored for our attention.  Now, everything from video games to chatroulette to real-time Twitter feeds to text messages demand our attention.  Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, all figuring into the calculation we make when we decide to go all in or hold back.

The most obvious effect of this hypercompetitive development of attention is the shortening of the text.  Under the tyranny of ‘tl;dr’ three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment.  More and more, our diet of text comes in these ‘bite-sized’ chunks.  Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything.  The truth is more complex.  Our diet will continue to consist of a mixture of short and long-form texts.  In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form.  It is digestible.  But it need not be vacuous.  Countless specialty blogs deliver highly-concentrated texts to audiences who need no introduction to the subject material.  They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit.  Here, the phenomenon of ‘tl;dr’ reveals its Achilles’ heel:  the shorter the text, the less invested you are.  You give way more easily to centrifugal force.  You are more likely to navigate away.

There is a cost incurred both for substance and the lack thereof.  Such are the dilemmas of hypertext.

II:  Schwarzschild Radius

It appears inarguable that 2010 is the Year of the Electronic Book.  The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market.  Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package.  Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world.  The electronic book is an inevitability.

At this point a question needs to be asked: what’s so electronic about an electronic book?  If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar.  Too familiar.  This is not an electronic book.  This is ‘publishing in light’.  I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning.  An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters.  It doesn’t even necessarily begin with that translation.  Instead, first consider the text qua text.  What is it?  Who is it speaking to?  What is it speaking about?

These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light.  That act of murder would give us less than we had before, because the published in light texts essentially disavow the medium within which they are situated.  They are less useful than typeset texts, purposely stripped of their utility to be shoehorned into a new medium.  This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader.  Nor does it make the electronic book an intrinsically alluring object.  That’s an interesting point to consider, because hypertext is intrinsically alluring.  The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who had any relationship to the text.  If an electronic book does not offer a new relationship to the text, then what precisely is the point?  Portability?  Ubiquity?  These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring.  This is the visible difference between a book that has been printed in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is impossibly alluring.  At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.

Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium.  But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe.  Today’s publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium.  For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.

What does the electronic book look like?  Does it differ at all from the hyperdocuments we are familiar with today?  In fifteen years of design experimentation, we’ve learned a lot of ways to present, abstract and play with text.  All of these are immediately applicable to the electronic book.  The electronic book should represent the best of what 2010 has to offer and move forward from that point into regions unexplored.  The printed volume took nearly fifty years to evolve into its familiar hand-sized editions.  Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book.  We shouldn’t try to constrain our idea of what an electronic book can be based upon what the book has been.  Over the next few years, our innovations will surprise us.  We won’t really know what the electronic book looks like until we’ve had plenty of time to play with it.

The electronic book will not be immune from the centrifugal force which is inherent to the medium.  Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument.  Yet we come to books with a sense of commitment.  We want to finish them.  But what, exactly, do we want to finish?  The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does.  So does an electronic book have a beginning and an end?  Or is it simply a densely clustered set of texts with a well-defined path traversing them?  From the vantage point of 2010 this may seem like a faintly ridiculous question.  I doubt that will be the case in 2020, when perhaps half of our new books are electronic books.  The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book.  There is no way that the electronic book can remain apart, indifferent and pure.  It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader.  More of a gradient than a boundary.

It remains unclear how any such construction can constitute an economically successful entity.  Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe.  The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen.  This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear.  Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.

We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books.  (For those not familiar with astrophysics, the Schwarzschild Radius is the boundary to a black hole.  Once you’re on the wrong side you’re doomed to fall all the way in.)  On one side – our side – things look much as they do today.  Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical.  On the other side, electronic books rapidly become almost completely unrecognizable.  It’s not just the financial model which disintegrates.  As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform.  The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words.  Each of these fragments points directly at its antecedent and descendant, or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each.  The primacy of the single narrative can not withstand the centrifugal force of the medium, any more than the newspaper or the magazine could.  Texts will present themselves as intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both.  This will completely confound our expectations of linearity in the text.

We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward, a fault more of our nervous systems than our browsers.  We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on.  This is the greatest threat to the book, that it simply expands beyond our ability to focus upon it.  Our consciousness can entertain a universe of thought, but it can not entertain the entire universe at once.  Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it.  With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason, significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.

What ever happened to the book?  It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever.  This is not an ending, any more than birth is an ending.  But it is a transition, at least as profound and comprehensive as the invention of moveable type.  It’s our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book.  Transitions are chaotic, but they are also fecund.  The seeds of the new grow in the humus of the old.  (And if it all seems sudden and sinister, I’ll simply note that Nietzsche said that a new era nearly always looks demonic to the age it obsolesces.)

III:  Finnegans Wiki

So what of Aristotle?  What does this mean for the narrative?  It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts.  But what about stories?  From time out of mind we have listened to stories told by the campfire.  The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale.  For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.

Will we lose all of this?  Can narratives stand up against the centrifugal forces of hypertext?  Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form.  The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind.  There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot.  A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity. Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force.  We occasionally stay up all night reading a book that we ‘couldn’t put down’, precisely because of this momentum.  It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.

This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books.  Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet.  How can any text hope to stand against that?

And yet, some do.  Children unplugged to read each of the increasingly-lengthy Harry Potter novels, as teenagers did for the Twilight series.  Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination.  None of this is high literature, but it is literature capable of resisting all our alluring distractions.  This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc.  We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad.  That is one direction, a direction literary publishers will pursue, because that’s where the money lies.

There are two other paths open for literature, nearly diametrically opposed.  The first was taken by JRR Tolkien in The Lord of the Rings.  Although hugely popular, the three-book series has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating.  Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it.  And although readers do finish the book, in a very real sense they do not leave that universe.  The fantasy genre, which Tolkien single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations.  Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books.  Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy.  Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers.  This is another direction for the book.  While every author will not be a Tolkien, a few authors will work hard to create a universe so potent and broad that readers will be tempted to inhabit it.  (Some argue that this is the secret of JK Rowling’s success.)

Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext.  There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures.  But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas.  The book was written and published before the digital computer had been invented, yet even features an innovation which is reminiscent of hypertext.  That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power.  It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite.  The text is overloaded with meaning, so much so that the mind can’t take it all in.  Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant.  But there is another possibility.  In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed.  In this, Finnegans Wake could be seen as a type of science fiction, not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works) but rather a text that pointed the way to what all texts would become, performance by example.  As texts become electronic, as they melt and dissolve and link together densely, meaning multiplies exponentially.  Every sentence, and every word in every sentence, can send you flying in almost any direction.  The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.

It has been said that all of human culture could be reconstituted from Finnegans Wake.  As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture.  Everything will be there, all strung together.  And that’s what happened to the book.

The Alexandrine Dilemma

I: Crash Through or Crash

We live in a time of wonders, and, more often than not, remain oblivious to them until they fail catastrophically. On the 19th of October, 1999 we saw such a failure. After years of preparation, on that day the web-accessible version of Encyclopedia Britannica went on-line. The online version of Britannica contained the complete, unexpurgated content of the many-volume print edition, and it was freely available, at no cost to its users.

I was not the only person who dropped by on the 19th to sample Britannica’s wares. Several million others joined me – all at once. The Encyclopedia’s few servers suddenly succumbed to the overload of traffic – the servers crashed, the network connections crashed, everything crashed. When the folks at Britannica conducted a forensic analysis of the failure, they learned something shocking: the site had crashed because, within its first hours, it had attracted nearly fifty million visitors.

The Web had never seen anything like that before. Yes, there were search engines such as Yahoo! and AltaVista (and even Google), but destination websites never attracted that kind of traffic. Britannica, it seemed, had tapped into a long-standing desire for high-quality factual information. As the gold-standard reference work in the English language, Britannica needed no advertising to bring traffic to its web servers – all it need do was open its doors. Suddenly, everyone doing research, or writing a paper, or just plain interested in learning more about something tried to force themselves through Britannica’s too narrow doorway.

Encyclopedia Britannica ordered some more servers, and installed a bigger pipe to the Internet, and within a few weeks was back in business. Immediately Britannica became one of the most-trafficked sites on the Web, as people came through in search of factual certainty. Yet for all of that traffic, Britannica somehow managed to lose money.

The specifics of this elude my understanding. The economics of the Web are very simple: eyeballs equals money. The more eyeballs you have, the more money you earn. That’s as true for Google as for Britannica. Yet, somehow, despite having one of the busiest websites in the world, Britannica lost money. For that reason, just a few months after it freely opened its doors to the public, Britannica hid itself behind a “paywall”, asking seven dollars a month as a fee to access its inner riches. Immediately, traffic to Britannica dropped to perhaps a hundredth of its former numbers. Britannica did not convert many of its visitors to paying customers: there may be a strong desire for factual information, but even so, most people did not consider it worth paying for. Instead, individuals continued to search for a freely available, high quality source of factual information.

Into this vacuum Wikipedia was born. The encyclopedia that anyone can edit has always been freely available, and, because of its use of the Creative Commons license, can be freely copied. Wikipedia was the modern birth of “crowdsourcing”, the idea that vast numbers of anonymous individuals can labor together (at a distance) on a common project. Wikipedia’s openness in every respect – transparent edits, transparent governance, transparent goals – encouraged participation. People were invited to come by and sample the high-quality factual information on offer – and were encouraged to leave their own offerings. The high-quality facts encouraged visitors; some visitors would leave their own contributions, high-quality facts which would encourage more visitors, and so, in a “virtuous cycle”, Wikipedia grew as large as, then far larger than Encyclopedia Britannica.

Today, we don’t even give a thought to Britannica. It may be the gold-standard reference work in the English language, but no one cares. Wikipedia is good enough, accurate enough (although Wikipedia was never intended to be a competitor to Britannica, by 2005 Nature was doing comparative testing of article accuracy) and is much more widely available. Britannica has had its market eaten up by Wikipedia, a market it dominated for two hundred years. It wasn’t the server crash that doomed Britannica; when the business minds at Britannica tried to crash through into profitability, that’s when they crashed into the paywall they themselves established. Watch carefully: over the next decade we’ll see the somewhat drawn out death of Britannica as it becomes ever less relevant in a Wikipedia-dominated landscape.

Just a few weeks ago, the European Union launched a new website, Europeana. Europeana is a repository, a collection of cultural heritage of Europe, made freely available to everyone in the world via the Web. From Descartes to Darwin to Debussy, Europeana hopes to become the online cultural showcase of European thought.

The creators of Europeana scoured Europe’s cultural institutions for items to be digitized and placed within its own collection. Many of these institutions resisted their requests – they didn’t see any demand for these items coming from online communities. As it turns out, these institutions couldn’t have been more wrong. Europeana launched on the 20th of November, and, like Britannica before it, almost immediately crashed. The servers overloaded as visitors from throughout the EU came in to look at the collection. Europeana has been taken offline for a few months, as the EU buys more servers and fatter pipes to connect it all to the Internet. Sometime in 2009 it will relaunch, and, if its brief popularity is any indication, we can expect Europeana to become another important online resource, like Wikipedia.

All three of these examples prove that there is an almost insatiable interest in factual information made available online, whether the dry articles of Wikipedia or the more bouncy cultural artifacts of Europeana. It’s also clear that arbitrarily restricting access to factual information simply directs the flow around the institution restricting access. Britannica could be earning over a hundred million dollars a year from advertising revenue – that’s what it is projected that Wikipedia could earn, just from banner advertisements, if it ever accepted advertising. But Britannica chose to lock itself away from its audience. That is the one unpardonable sin in the network era: under no circumstances do you take yourself off the network. We all have to sink or swim, crash through or crash, in this common sea of openness.

I only hope that the European museums who have donated works to Europeana don’t suddenly grow possessive when the true popularity of their works becomes a proven fact. That will be messy, and will only hurt the institutions. Perhaps they’ll heed the lesson of Britannica; but it seems as though many of our institutions are mired in older ways of thinking, where selfishness and protecting the collection are seen as cardinal virtues. There’s a new logic operating: the more something is shared, the more valuable it becomes.

II: The Universal Library

Just a few weeks ago, Google took this idea to new heights. In a landmark settlement of a long-running copyright dispute with book publishers in the United States, Google agreed to pay a license fee to those publishers for their copyrights – even for books out of print. In return, the publishers are allowing Google to index, search and display all of the books they hold under copyright. Google already provides the full text of many books which have an expired copyright – its efforts scanning whole libraries at Harvard and Stanford have given Google access to many such texts. Each of these texts is indexed and searchable – just as with the books under copyright, but, in this case, the full text is available through Google’s book reader tool. For works under copyright but out-of-print, Google is now acting as the sales agent, translating document searches into book sales for the publishers, who may now see huge “long tail” revenues generated from their catalogues.

Since Google is available from every computer connected to the Internet (given that it is available on most mobile handsets, it’s available to nearly every one of the four billion mobile subscribers on the planet), this new library – at least seven million volumes – has become available everywhere. The library has become coextensive with the Internet.

This was an early dream both of the pioneers of personal computing, and, later, of the Web. When CD-ROM was introduced, twenty years ago, it was hailed as the “new papyrus,” capable of storing vast amounts of information in a richly hyperlinked format. As the limits of CD-ROM became apparent, the Web became the repository of the hopes of all the archivists and bibliophiles who dreamed of a new Library of Alexandria, a universal library with every text in every tongue freely available to all.

We have now gotten as close to that ideal as copyright law will allow; everything is becoming available, though perhaps not as freely as a librarian might like. (For libraries, Google has established subscription-based fees for access to books covered by copyright.) Within another few years, every book within arm’s length of Google (and Google has many, many arms) will be scanned, indexed and accessible through books.google.com. This library can be brought to bear everywhere anyone sits down before a networked screen. This library can serve billions, simultaneously, yet never exhaust its supply of texts.

What does this mean for the library as we have known it? Has Google suddenly obsolesced the idea of a library as a building stuffed with books? Is there any point in going into the stacks to find a book, when that same book is equally accessible from your laptop? Obviously, books are a better form factor than our laptops – five hundred years of human interface design have given us a format which is admirably well-adapted to our needs – but in most cases, accessibility trumps ease-of-use. If I can have all of the world’s books online, that easily bests the few I can access within any given library.

In a very real sense, Google is obsolescing the library, or rather, one of the features of the library, the feature we most identify with the library: book storage. Those books are now stored on servers, scattered in multiple, redundant copies throughout the world, and can be called up anywhere, at any time, from any screen. The library has been obsolesced because it has become universal; the stacks have gone virtual, sitting behind every screen. Because the idea of the library has become so successful, so universal, it no longer means anything at all. We are all within the library.

III: The Necessary Army

With the triumph of the universal library, we must now ask: What of the librarians? If librarians were simply the keepers-of-the-books, we would expect them to fade away into an obsolescence similar to that of the physical library. And though this is the popular perception of the librarian, in fact it is perhaps the least interesting of the tasks a librarian performs (although often the most visible).

The central task of the librarian – if I can be so bold as to state something categorically – is to bring order to chaos. The librarian takes a raw pile of information and makes it useful. How that happens differs from situation to situation, but all of it falls under the rubric of library science. At its most visible, the book cataloging systems used in all libraries represent the librarian’s best efforts to keep an overwhelming amount of information well-managed and well-ordered. A good cataloging system makes a library easy to use, whatever its size, however many volumes are available through its stacks.

It’s interesting to note that books.google.com uses Google’s text search-based interface. Based on my own investigations, you can’t type in a Library of Congress catalog number and get a list of books under that subject area. Google seems to have abandoned – or ignored – library science in its own book project. I can’t tell you why this is; I can only tell you that it looks very foolish and naïve. It may be that Google’s army of PhDs does not include many library scientists. Otherwise, why would they have made such a beginner’s mistake? It smells of an amateur effort from a firm which is not known for amateurism.

It’s here that we can see the shape of the future, both in the immediate and longer term. People believe that because we’re done with the library, we’re done with library science. They could not be more wrong. In fact, because the library is universal, library science now needs to be a universal skill set, more broadly taught than at any time previous to this. We have become a data-centric culture, and are presently drowning in data. It’s difficult enough for us to keep our collections of music and movies well organized; how can we propose to deal with collections that are a hundred thousand times larger?

This is not just idle speculation; we are rapidly becoming a data-generating species. Where just a few years ago we might generate only a small amount of data on a given day or in a given week, these days we generate data almost continuously. Consider: every text message sent, every email received, every snap of a camera or camera phone, every slip of video shared amongst friends. It all adds up, and it all needs to be managed and stored and indexed and retrieved with some degree of ease. Otherwise, in a few years’ time the recent past will have disappeared into the fog of unsearchability. In order to have a connection to our data selves of the past, we are all going to need to become library scientists.
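To make “indexed and retrieved” concrete, here is a minimal sketch of the idea underlying any searchable personal archive – an inverted index that maps each word to the items containing it. The documents and identifiers below are invented for illustration; a real system would add stemming, ranking, and far more besides.

```python
# Minimal sketch of an inverted index over a toy collection of
# personal "digital artifacts" (all items here are invented).
from collections import defaultdict

documents = {
    "msg1": "meeting notes from sydney trip",
    "msg2": "photos from the sydney harbour",
    "msg3": "notes on library science",
}

# Build the index: word -> set of document ids containing that word.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def search(word):
    """Return the ids of all documents containing the word."""
    return sorted(index.get(word, set()))

print(search("sydney"))  # ['msg1', 'msg2']
print(search("notes"))   # ['msg1', 'msg3']
```

Without something like this running over everything we create, each message and photo is findable only by scrolling through the past by hand.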

All of which puts you in a key position for the transformation already underway. You get to be the “life coaches” for our digital lifestyle, because, as these digital artifacts start to weigh us down (like Jacob Marley’s lockboxes), you will provide the guidance that will free us from these weights. Now that we’ve got it, it’s up to you to tell us how we find it. Now that we’ve captured it, it’s up to you to tell us how we index it.

We have already taken some steps along this journey: much of the digital media we create can now be “tagged”, that is, assigned keywords which provide context and semantic value for the media. We each create “clouds” of our own tags which evolve into “folksonomies”, or home-made taxonomies of meaning. Folksonomies and tagging are useful, but we lack the common language needed to make our digital treasures universally useful. If I tag a photograph with my own tags, that means the photograph is more useful to me; but it is not necessarily more broadly useful. Without a common, public taxonomy (a cataloging system), tagging systems will not scale into universality. That universality has value, because it allows us to extend our searches, our view, and our capability.

I could go on and on, but the basic point is this: wherever data is being created, that’s the opportunity for library science in the 21st century. Since data is being created almost absolutely everywhere, the opportunities for library science are similarly broad. It’s up to you to show us how it’s done, lest we drown in our own creations.

Some of this won’t come to pass until you move out of the libraries and into the streets. Library scientists have to prove their worth; most people don’t understand that they’re slowly drowning in a sea of their own information. This means you have to demonstrate other ways of working that are self-evident in their effectiveness. The proof of your value will be obvious. It’s up to you to throw the rest of us a life-preserver; once we’ve caught it, once we’ve caught on, your future will be assured.

The dilemma that confronts us is that for the next several years, people will be questioning the value of libraries; if books are available everywhere, why pay the upkeep on a building? Yet the value of a library is not the books inside, but the expertise in managing data. That can happen inside of a library; it has to happen somewhere. Libraries could well evolve into the resource the public uses to help manage their digital existence. Librarians will become partners in information management, indispensable and highly valued.

In a time of such radical and rapid change, it’s difficult to know exactly where things are headed. We know that books are headed online, and that libraries will follow. But we still don’t know the fate of librarians. I believe that the transition to a digital civilization will founder without a lot of fundamental input from librarians. We are each becoming archivists of our lives, but few of us have training in how to manage an archive. You are the ones who have that knowledge. Consider: the more something is shared, the more valuable it becomes. The more you share your knowledge, the more invaluable you become. That’s the future that waits for you.

Finally, consider the examples of Britannica and Europeana. The demand for those well-curated collections of information far exceeded even the wildest expectations of their creators. Something similar lies in store for you. When you announce yourselves to the broader public as the individuals empowered to help us manage our digital lives, you’ll doubtless find yourselves overwhelmed with individuals who are seeking to benefit from your expertise. What’s more, to deal with the demand, I expect Library Science to become one of the hot subjects of university curricula of the 21st century. We need you, and we need a lot more of you, if we ever hope to make sense of the wonderful wealth of data we’re creating.