Blue Skies

I: Cloud People

I want to open this afternoon’s talk with a story about my friend Kate Carruthers.  Kate is a business strategist, currently working at Hyro, over in Surry Hills.  In November, while on a business trip to Far North Queensland, Kate pulled out her American Express credit card to pay for a taxi fare.  Her card was declined.  Kate paid with another card and thought little of it until the next time she tried to use the card – this time to pay for something rather pricier, and more important – and found her card declined once again.

As it turned out, American Express had cut Kate’s credit line in half, but hadn’t bothered to inform her of this until perhaps a day or two before, via post.  So here’s Kate, far away from home, with a crook credit card.  Thank goodness she had another card with her, or it could have been quite a problem.  When she contacted American Express to discuss that credit line change – on a Friday evening – she discovered that this ‘consumer’ company kept banker’s hours in its credit division.  That, for Kate, was the last straw.  She began to post a series of messages to Twitter:

“I can’t believe how rude Amex have been to me; cut credit limit by 50% without notice; declined my card while in QLD even though acct paid”

“since Amex just treated me like total sh*t I just posted a chq for the balance of my account & will close acct on Monday”

“Amex is hardly accepted anywhere anyhow so I hardly use it now & after their recent treatment I’m outta there”

“luckily for me I have more than enough to just pay the sucker out & never use Amex again”

“have both a gold credit card & gold charge card with amex until monday when I plan to close both after their crap behaviour”

One after another, Kate sent this stream of messages out to her Twitter followers.  All of her Twitter followers.  Kate’s been on Twitter for a long time – well over three years – and she’s accumulated a lot of followers.  Currently, she has over 8300 followers, although at the time she had her American Express meltdown, the number was closer to 7500.

Let’s step back and examine this for a moment.  Kate is, in most respects, a perfectly ordinary (though whip-smart) human being.  Yet she now has this ‘cloud’ of connections, all around her, all the time, through Twitter.  These 8300 people are at least vaguely aware of whatever she chooses to share in her tweets.  They care enough to listen, even if they are not always listening very closely.  A smaller number of individuals (perhaps a few hundred, people like me) listen more closely.  Nearly all the time we’re near a computer or a mobile, we keep an eye on Kate.  (Not that she needs it.  She’s thoroughly grown up.  But if she ever got into a spot of trouble or needed a bit of help, we’d be on it immediately.)

This kind of connectivity is unprecedented in human history.  We came from villages where perhaps a hundred of us lived close enough together that there were no secrets.  We moved to cities where the power of numbers gave us all a degree of anonymity, but atomized us into disconnected individuals, lacking the social support of a community.  Now we come full circle.  This is the realization of the ‘Global Village’ that Marshall McLuhan talked about fifty years ago.  At the time McLuhan thought of television as a retribalizing force.  It wasn’t.  But Facebook and Twitter and the mobiles each of us carry with us during all our waking hours?  These are the new retribalizing forces, because they keep us continuously connected with one another, allowing us to manage connections in ever-greater numbers.

Anything Kate says, no matter how mundane, is now widely known.  But it’s more than that.  Twitter is text, but it is also links that can point to images, or videos, or songs, or whatever you can digitize and upload to the Web.  Kate need simply drop a URL into a tweet and suddenly nearly ten thousand people are aware of it.  If they like it, they will send it along (‘re-tweet’ is the technical term), and it will spread out quickly, like waves on a pond.

But Twitter isn’t a one-way street.  Kate is ‘following’ 7250 individuals; that is, she’s receiving tweets from them.  That sounds like a nearly impossible task: how can you pay attention to what that many people have to say?  It’d be like trying to listen to every conversation at Central Station (or Flinders Street Station) at peak hour.  Madness.  And yet, it is possible.  Tools have been created that allow you to keep a pulse on the madness, to stick a toe into the raging torrent of commentary.

Why would you want to do this?  It’s not something that you need to do (or even want to do) all the time, but there are particular moments – crisis times – when Twitter becomes something else altogether.  After an earthquake or other great natural disaster, after some pivotal (or trivial) political event, after some stunning discovery.  The 5650 people I follow are my connection to all of that.  My connection is broad enough that someone, somewhere in my network is nearly always among the first to know something, and among the first to share what they know.  Which means that I too, if I am paying attention, am among the first to know.

Businesses have been built on this kind of access.  An entire sector of the financial services industry, from Dow Jones to Bloomberg, has thrived because it provides subscribers with information before others have it – information that can be used on a trading floor.  This kind of information comes freely to the very well-connected.  This kind of information can be put to work to make you more successful as an individual, in your business, or in whatever hobbies you might pursue.  And it’s always there.  All you need do is plug into it.

When you do plug into it, once you’ve gotten over the initial confusion, and you’ve dedicated the proper time and tending to your network, so that it grows organically and enthusiastically, you will find yourself with something amazingly flexible and powerful.  Case in point: in December I found myself in Canberra for a few days.  Where to eat dinner in a town that shuts down at 5 pm?  I asked Twitter, and forty-five minutes later I was enjoying some of the best seafood laksa I’ve had in Australia.  A few days later, in the Barossa, I asked Twitter which wineries I should visit – and the top five recommendations were very good indeed.  These may seem like trivial instances – though they’re the difference between a good holiday and a lackluster one – but what they demonstrate is that Twitter has allowed me to plug into all of the expertise of all of the thousands of people I am connected to.  Human brainpower, multiplied by 5650, makes me smarter, faster, and much, much more effective.  Why would I want to live any other way?  Twitter can be inane, it can be annoying, it can be profane and confusing and chaotic, but I can’t imagine life without it, just as I can’t imagine life without the Web or without my mobile.  The idea that I am continuously connected and listening to a vast number of other people – even as they listen to me – has gone from shocking to comfortable in just over three years.

Kate and I are just the leading edge.  Where we have gone, all of the rest of you will soon follow.  We are all building up our networks, one person at a time.  A child born in 2010 will spend their lifetime building up a social network.  They’ll never lose track of any individual they meet and establish a connection with.  That connection will persist unless purposely destroyed.  Think of the number of people you meet throughout your life, who you establish some connection with, even if only for a few hours.  That number would easily reach into the thousands for every one of us.  Kate and I are not freaks; we’re simply using the bleeding edge of a technology that will be almost invisible and not really worth mentioning by 2020.

All of this means that the network is even more alluring than it was a few years ago, and will become ever more alluring with the explosive growth in social networks.  We are just at the beginning of learning how to use these new social networks.  First we kept track of friends and family.  Then we moved on to business associates.  Now we’re using them to learn, to train ourselves and train others, to explore, to explain, to help and to ask for help.  They are becoming a new social fabric which will knit us together into an unfamiliar closeness.  This is already creating some interesting frictions for us.  We like being connected, but we also treasure the moments when we disconnect, when we can’t be reached, when our time and our thoughts are our own.  We preach focus to our children, but find our time and attention increasingly divided by devices that demand service: email, Web, phone calls, texts, Twitter, Facebook, all of it brand new, and all of it seemingly so important that if we ignore any of them we immediately feel the cost.  I love getting away from it all.  I hate the backlog of email that greets me when I return.  Connecting comes with a cost.  But it’s becoming increasingly impossible to imagine life without it.

II: Eyjafjallajökull

I recently read a most interesting blog post.  Chase Saunders, a software architect and entrepreneur in Maine (not too far from where I was born), had a bit of a brainwave and decided to share it with the rest of the world.  But you may not like it.  Saunders begins with: “For me to get really mad at a company, it takes more than a lousy product or service: it’s the powerlessness I feel when customer service won’t even try to make things right.  This happens to me about once a year.”  Given the number of businesses we all interact with in any given year – both as consumers and as client businesses – this figure is far from unusual.  There will be times when we get poor value for money, or poor service, or a poor response time, or what have you.  The world is a cruel place.  It’s what happens after that cruelty which is important: how does the business deal with an upset customer?  If they fail the upset customer, that’s when problems can really get out of control.

In times past, an upset customer could cancel their account, taking their business elsewhere.  Bad, but recoverable.  These days, however, customers have more capability, precisely because of their connectivity.  And this is where things start to go decidedly pear-shaped.  Saunders gets to the core of his idea:

Let’s say you buy a defective part from ACME Widgets, Inc. and they refuse to refund or replace it.  You’re mad, and you want the world to know about this awful widget.  So you pop over to AdRevenge and you pay them a small amount. Say $3.  If the company is handing out bad widgets, maybe some other people have already done this… we’ll suppose that before you got there, one guy donated $1 and another lady also donated $1.  So now we have 3 people who have paid a total of $5 to warn other potential customers about this sketchy company…the 3 vengeful donations will go to the purchase of negative search engine advertising.  The ads are automatically booked and purchased by the website…

And there it is.  Your customers – your angry customers – have found an effective way to band together and warn every other potential customer just how badly you suck, and will do it every time your name gets typed into a search engine box.  And they’ll do it whether or not their complaints are justified.  In fact, your competitors could even game the system, stuffing it up with lots of false complaints.  It will quickly become complete, ugly chaos.
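Though AdRevenge is only a thought experiment, the mechanics Saunders describes are simple enough to sketch in a few lines of code.  Here is a minimal toy version in Python – the $5 ad price, the function names, and the figures are my assumptions for illustration, not anything Saunders specified:

    # A toy of Saunders' AdRevenge idea: vengeful donations pool up per
    # company, and each time a pool covers the price of one negative
    # search ad, an ad purchase is (notionally) booked.
    from collections import defaultdict

    AD_PRICE = 5.00                 # assumed cost of one negative search ad

    pools = defaultdict(float)      # company -> pooled donations
    ads_booked = defaultdict(int)   # company -> negative ads purchased

    def donate(company: str, amount: float) -> None:
        """Add a donation; book ads whenever the pool covers one."""
        pools[company] += amount
        while pools[company] >= AD_PRICE:
            pools[company] -= AD_PRICE
            ads_booked[company] += 1
            print(f"Booked negative ad #{ads_booked[company]} against {company}")

    donate("ACME Widgets, Inc.", 1.00)   # the earlier angry customer
    donate("ACME Widgets, Inc.", 1.00)   # the other one
    donate("ACME Widgets, Inc.", 3.00)   # you arrive: the pool hits $5

The point of the sketch is how little machinery is required: a running total per company and an automated ad-buying call.  Everything else is supplied by angry customers.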

You’re probably all donning your legal hats, and thinking about words like ‘libel’ and ‘defamation’.  Put all of that out of your mind.  The Internet is extraterritorial, and effectively ungovernable, despite all of the neat attempts of governments from China to Iran to Australia to stuff it back into some sort of box.  Ban AdRevenge somewhere, it pops up somewhere else – just as long as there’s a demand for it.  Other countries – perhaps Iceland or Sweden, and certainly the United States – don’t have the same libel laws as Australia, yet their bits freely enter the nation over the Internet.  There is no way to stop AdRevenge or something very much like AdRevenge from happening.  No way at all.  Resign yourself to this, and embrace it, because until you do you won’t be able to move on, into a new type of relationship with your customers.

Which brings us back to our beginning, and a very angry Kate Carruthers.  Here she is, on a Friday night in Far North Queensland, spilling quite a bit of bile out onto Twitter.  Every one of the 7500 people who read her tweets will bear her experience in mind the next time they decide whether they will do any business with American Express.  This is damage, probably great damage to the reputation of American Express, damage that could have been avoided, or at least remediated before Kate ‘went nuclear’.

But where was American Express when all of this was going on?  While Kate expressed her extreme dissatisfaction with American Express, its own marketing arm was busily cooking up a scheme to harness Twitter.  Its Open Forum Pulse website shows you tweets from small businesses around the world.  Ironic, isn’t it?  American Express builds a website to show us what others are saying on Twitter, all the while ignoring what’s being said about it.  So the fire rages, uncontrolled, while American Express fiddles.

There are other examples.  On Twitter, one of my friends lauded the new VAustralia Premium Economy service to the skies, while VAustralia ran some silly marketing campaign that had four blokes sending three thousand tweets over two days in Los Angeles.  Sure, I want to tune into that stream of dreck and drivel.  That’s exactly what I’m looking for in the age of information overload: more crap.

This is it, the fundamental disconnect, the very heart of the matter.  We all need to do a whole lot less talking, and a whole lot more listening.  That’s true for each of us as individuals: we’re so well-connected now that by the time we do grow into a few thousand connections we’d be wiser listening than speaking, most of the time.  But this is particularly true for businesses, which make their living dealing with customers.  The relationship between businesses and their customers has historically been characterized by a ‘throw it over the wall’ attitude.  There is no wall, anywhere.  The customer is sitting right beside you, with a megaphone pointed squarely into your ear.

If we were military planners, we’d call this ‘asymmetric warfare’.  Instead, we should just give it the name it rightfully deserves: 21st-century business.  It’s a battlefield out there, but if you come prepared for a 20th-century conflict – massive armies and big guns – you’ll be overrun by the fleet-footed and omnipresent guerilla warfare your customers will wage against you – if you don’t listen to them.  Like volcanic ash, it may not present a solid wall to prevent your progress.  But it will jam up your engines, and stop you from getting off the ground.

Listening is not a job.  There will be no ‘Chief Listening Officer’, charged with keeping an ear to the ground, wondering if the natives are becoming restless, ready to sound the alarm when a situation threatens to go nuclear.  There is simply too much to listen to, happening everywhere, all at once.  Any single point which presumed to do the listening for an entire organization – whether an individual or a department – will simply be overwhelmed, drowning in the flow of data.  Listening is not a job: it is an attitude.  Every employee from the most recently hired through to the Chief Executive must learn to listen.  Listen to what is being said internally (therein lies the path to true business success) and learn to listen to what others, outside the boundaries of the organization, are saying about you.

Employees already regularly check into their various social networks.  Right now we think of that as ‘slacking off’, not something that we classify as work.  But if we stretch the definition just a bit, and begin to recognize that the organization we work for is, itself, part of our social network, things become clearer.  Someone can legitimately spend time on Facebook, looking for and responding to issues as they arise.  Someone can be plugged into Twitter, giving it continuous partial attention all day long, monitoring and soothing customer relationships.  And not just someone.  Everyone.  This is a shared responsibility.  Working for the organization means being involved with and connected to the organization’s customers, past, present and future.  Without that connection, problems will inevitably arise, will inevitably amplify, will inevitably result in ‘nuclear events’.  Any organization (or government, or religion) can only withstand so many nuclear events before it begins to disintegrate.  So this isn’t a matter of choice.  This is a basic defensive posture.  An insurance policy, of sorts, protecting you against those you have no choice but to do business with.

Yet this is not all about defense.  Listening creates opportunity.  I get some of my best ideas – such as that AdRevenge article – because I am constantly listening to others’ good ideas.  Your customers might grumble, but they also praise you for a job well done.  That positive relationship should be honored – and reinforced.  As you reinforce the positive, you create a virtuous cycle of interactions which becomes terrifically difficult to disrupt.  When that’s gone on long enough, and broadly enough, you have effectively raised up your own army – in the post-modern, guerilla sense of the word – who will go out there and fight for you and your brand when the haters and trolls and chaos-makers bear down upon you.  These people are connected to you, and will connect to one another because of the passion they share around your products and your business.  This is another network, an important network, an offensive network, and you need both defensive and offensive strategies to succeed on this playing field.

Just as we as individuals are growing into hyperconnectivity, so our businesses must inevitably follow.  Hyperconnected individuals working with disconnected businesses is a perfect recipe for confusion and disaster.  Like must meet with like before the real business of the 21st century can begin.

III: Services With a Smile

Moving from the abstract to the concrete, let’s consider the types of products and services required in our densely hyperconnected world.  First and foremost, we are growing into a pressing, almost fanatical need for continuous connectivity.  Wherever we are – even in airplanes – we must be connected.  The qualities of that connection – speed, reliability, and cost – are important co-factors to consider, and it is not always the cheapest connection which serves the customer best.  I pay a premium for my broadband connection because I can send the CEO of my ISP a text any time my link goes down – and my trouble tickets are sorted very rapidly!  Conversely, I went with a lower-cost carrier for my mobile service, and I am paying the price, with missed calls, failed data connections, and crashes on my iPhone.

As connectivity becomes more important, reliability crowds out other factors.  You can offer a premium quality service at a premium price and people will adopt it, for the same reason they will pay more for a reliable car, or for electricity from a reliable supplier, or for food that they’re sure will be wholesome.  Connectivity has become too vital to threaten.  This means there’s room for healthy competition, as providers offer different levels of service at different price points, competing on quality, so that everyone gets the level of service they can afford.  But uptime always will be paramount.

What service, exactly, is on offer?  Connectivity comes in at least two flavors: mobile and broadband.  These are not mutually exclusive.  When we’re stationary we use broadband; when we’re in motion we use mobile services.  The transition between these two networks should be as invisible and seamless as possible – as pioneered by Apple’s iPhone.

At home, in the office, at the café or library, in fact, in almost any structure, customers should have access to wireless broadband.  This is one area where Australia noticeably trails the rest of the world.  The tariff structure for Internet traffic has led Australians to be unusually conservative with their bits, because there is a specific cost incurred for each bit sent or received.  While this means that ISPs should always have the funding to build out their networks to handle increases in capacity, it has also meant that users protect their networks from use in order to keep costs down.  This fundamental dilemma has subjected wireless broadband in Australia to a subtle strangulation.  We do not have the ubiquitous free wireless access that many other countries – in particular, the United States – have on offer, and this consequently alters our imagination of the possibilities for ubiquitous networking.

Tariffs are now low enough that customers ought to be encouraged to offer wireless networking to the broader public.  There are some security concerns that need to be addressed to make this safe for all parties, but these are easily dealt with.  There is no fundamental barrier to pervasive wireless broadband.  It does not compete with mobile data services.  Rather, as wireless broadband becomes more ubiquitous, people come to rely on continuous connectivity ever more.  Mobile data demand will grow in lockstep as more wireless broadband is offered.  Investment in wireless broadband is the best way to ensure that mobile data services continue to grow.

Mobile data services are characterized principally by speed and availability.  Beyond a certain point – perhaps a megabit per second – speed is not an overwhelming lure on a mobile handset.  It’s nice but not necessary.  At that point, it’s much more about provisioning: how will my carrier handle peak hour in Flinders Street Station (or Central Station)?  Will my calls drop?  Will I be able to access my cloud-based calendar so that I can grab a map and a phone number to make dinner reservations?  If a customer finds themselves continually frustrated in these activities, one of two things will happen: either the mobile will go back into the pocket, more or less permanently, or the customer will change carriers.  Since the customer’s family, friends and business associates will not be putting their own mobiles back into their pockets, it is unlikely that any customer will do so for any length of time, irrespective of the quality of their mobile service.  If the carrier will not provision, the customers must go elsewhere.

Provisioning is expensive.  But it is also the only sure way to retain your customers.  A customer will put up with poor customer service if they know they have reliable service.  A customer will put up with a higher monthly spend if they have a service they know they can depend upon in all circumstances.  And a customer will quickly leave a carrier who can not be relied upon.  I’ve learned that lesson myself.  Expect it to be repeated, millions of times over, in the years to come, as carriers, regrettably and avoidably, find that their provisioning is inadequate to support their customers.

Wireless is wonderful, and we think of it as a maintenance-free technology, at least from the customer’s point of view.  Yet this is rarely so.  Last month I listened to a talk by Genevieve Bell, Intel Fellow and Lead Anthropologist at the chipmaker.  Her job is to spend time in the field – across Europe and the developing world – observing  how people really use technology when it escapes into the wild.  Several years ago she spent some time in Singapore, studying how pervasive wireless broadband works in the dense urban landscape of the city-state.  In any of Singapore’s apartment towers – which are everywhere – nearly everyone has access to very high speed wired broadband (perhaps 50 megabits per second) – which is then connected to a wireless router to distribute the broadband throughout the apartment.  But wireless is no great respecter of walls.  Even in my own flat in Surry Hills I can see nine wireless networks from my laptop, including my own.  In a Singapore tower block, the number is probably nearer to twenty or thirty.

Genevieve visited a family who had recently purchased a wireless printer.  They were dissatisfied with it, pronouncing it ‘possessed’.  “What do you mean?” she inquired.  Well, they explained, it doesn’t print what they tell it to print.  But it does print other things.  Things they never asked for.  The family called for a grandfather to come over and practice his arts of feng shui, hoping to rid the printer of its evil spirits.  The printer, now repositioned to a more auspicious spot, still misbehaved.  A few days later, a knock came on the door.  Outside stood a neighbor, a sheaf of paper in his hands, saying, “I believe these are yours…?”

The neighbor had also recently purchased a wireless printer, and it seems that these two printers had automatically registered themselves on each other’s networks.  Automatic configuration makes wireless networks a pleasure to use, but it also makes for botched configurations and flaky communication.  Most of this is so far outside the skill set of the average consumer that these problems will never be properly remedied.  The customer might make a support call, and maybe – just maybe – the problem will be solved.  Or, the problem will persist, and the customer will simply give up.  Even with a support call, wireless networks are often so complex that the problem can’t be wholly solved.

As wireless networks grow more pervasive, Genevieve Bell recommends that providers offer a high-quality hand-holding and diagnostic service to their customers.  They need to offer a ‘tune up’ service that will travel to the customer once a year to make sure everything is running well.  Consumers need to be educated that wireless networks do not come for free.  Like anything else, they require maintenance, and the consumer should come to expect that it will cost them something, every year, to keep it all up and running.  In this, a wireless network is no different than a swimming pool or a lawn.  There is a future for this kind of service: if you don’t offer it, your competitors soon will.

Finally, let me close with what the world looks like when all of these services are working perfectly.  Lately, I’ve become a big fan of Foursquare, a ‘location-based social network’.  Using the GPS on my iPhone, Foursquare allows me to ‘check in’ when I go to a restaurant, a store, or almost anywhere else.  Once I’ve checked in, I can make a recommendation – a ‘tip’ in Foursquare lingo – or simply look through the tips provided by those who have been there before me.  This list of tips is quickly growing longer, more substantial, and more useful.  I can walk into a bar that I’ve never been to before and know exactly which cocktail I want to order.  I know which table at the restaurant offers the quietest corner for a romantic date.  I know which salesperson to talk to for a good deal on that mobile handset.  And so on.  I have immediate and continuous information in depth, and I put that information to work, right now, to make my life better.

The world of hyperconnectivity isn’t some hypothetical place we’ll never see.  We are living in it now.  The seeds of the future are planted in the present.  But the shape of the future is determined by our actions today.  It is possible to blunt and slow Australia’s progress into this world with bad decisions and bad services.  But it is also possible to thrust the nation into global leadership if we can embrace the inevitable trend toward hyperconnectivity, and harness it.  It has already transformed our lives.  It will transform our businesses, our schools, and our government.  You are the carriers of that change.  Your actions will bring this new world into being.

What Ever Happened to the Book?

For Ted Nelson

I: Centrifugal Force

We live in the age of networks.  Wherever we are, five billion of us are continuously and ubiquitously connected.  That’s everyone over the age of twelve who earns more than about two dollars a day.  The network has us all plugged into it.  Yet this is only the more recent, and more explicit network.  Networks are far older than this most modern incarnation; they are the foundation of how we think.  That’s true at the most concrete level: our nervous system is a vast neural network.  It’s also true at a more abstract level: our thinking is a network of connections and associations.  This is necessarily reflected in the way we write.

I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982.  Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts.  Like it or not, we implicitly reference other texts with every word we write.  It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts.  It’s the secret to our success.  Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth.  He never got it built.  But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.

As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed qualities as a medium which had hitherto been unsuspected.  While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure.  Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone.  Do we answer?  Do we click and follow?  A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost.  The linear text is constantly weighed down with a secondary, ‘centrifugal’ force, trying to tear the reader away from the inertia of the text, and on into another space.  The more heavily linked a particular hypertext document is, the greater this pressure.

Consider two different documents that might be served up in a Web browser.  One of them is an article from the New York Times Magazine.  It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links.  Many of these links point back to other New York Times articles.  This article stands alone.  It is a hyperdocument, but it has not embraced the capabilities of the medium.  It has not been seduced.  It is a spinster, of sorts, confident in its purity and haughty in its isolation.  This article is hardly alone.  Nearly all articles I could point to from any professional news source display the same characteristics of separateness and resistance to connecting with the medium they employ.  We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized.  Every link presents an escape route, and a potential loss of income.  Hence, links are kept to a minimum, the losses staunched.  Disappointingly, this has become a model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium.  The tone has been set.

On the other hand, consider an average article in Wikipedia.  It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links.  Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they’re unfamiliar with, or explore some tangential bit of knowledge, but there also will be plenty of links that face out, into the rest of the Web.  This is a hyperdocument which has embraced the nature of the medium, which is not afraid of luring readers away under the pressure of linkage.  Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention.  Without this competition of intentions, Wikipedia is itself an example of another variety of purity, the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.
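The difference between these two kinds of hyperdocument is not just a feeling; it can be counted.  Here is a minimal sketch in Python, using only the standard library, that classifies a page’s links as inward-facing or outward-facing; the page fragment and the URLs are placeholders of my own, not measurements of any real site:

    # Crudely measures a page's 'centrifugal force': how many links keep
    # the reader on the same host, and how many point out into the Web.
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    class LinkCounter(HTMLParser):
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.host = urlparse(base_url).netloc
            self.internal = 0
            self.outbound = 0

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            href = dict(attrs).get("href")
            if href is None:
                return
            # Resolve relative links against the page's own address.
            if urlparse(urljoin(self.base_url, href)).netloc == self.host:
                self.internal += 1
            else:
                self.outbound += 1

    sample = ('<p>See <a href="/archive/hypertext.html">our earlier piece</a> '
              'and <a href="http://en.wikipedia.org/wiki/Hypertext">Hypertext'
              '</a> on Wikipedia.</p>')

    counter = LinkCounter("http://www.example-news.com/2010/article.html")
    counter.feed(sample)
    print(counter.internal, "internal,", counter.outbound, "outbound")

Run over a newspaper article and then over a Wikipedia entry, the two ratios make the spinster and the seductress immediately visible.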

Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug.  Landing somewhere with a paucity of links doesn’t constrain your ability to move non-linearly.  If nothing else, the browser’s ‘Back’ button continually offers that opportunity, as do all of your bookmarks, the links that lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site.  In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads.  In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it.  This is the fundamental tension of all hypertext, but here heightened and amplified because it is resisted and forbidden.  It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.

This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents.  It is most obvious in the way we now absorb news.  Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper.  Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on.  We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way.  The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole.  Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper ‘vehicle’.

The newspaper as we have known it has been shredded.  This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext.  We are the ones who feel the lure of the link; no machine can do that.  Newspapers made the brave decision to situate themselves as islands within a sea of hypertext.  Though they might believe themselves singular, they are not the only islands in the sea.  And we all have boats.  That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.

The lure of the link has a two-fold effect on our behavior.  With its centrifugal force, it is constantly pulling us away from wherever we are.  It also presents us with an opportunity cost.  When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment.  If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it.  Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this?  Does my need and interest outweigh all of the other demands upon my attention?  Can I focus?

In most circumstances, we will decline the challenge.  Whatever it is, it is not salient enough, not alluring enough.  It is not so much that we fear commitment as we feel the pressing weight of our other commitments.  We have other places to spend our limited attention.  This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”.  It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.

The emergence of the ‘tl;dr’ phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span.  Attention spans are not shortening: ten-year-olds will still drop everything to read a nine-hundred page fantasy novel for eight days.  Instead, attention has entered an era of hypercompetitive development.  Twenty years ago only a few media clamored for our attention.  Now, everything from video games to chatroulette to real-time Twitter feeds to text messages demand our attention.  Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, all figuring into the calculation we make when we decide to go all in or hold back.

The most obvious effect of this hypercompetitive development of attention is the shortening of the text.  Under the tyranny of ‘tl;dr’ three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment.  More and more, our diet of text comes in these ‘bite-sized’ chunks.  Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything.  The truth is more complex.  Our diet will continue to consist of a mixture of short and long-form texts.  In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form.  It is digestible.  But it need not be vacuous.  Countless specialty blogs deliver highly-concentrated texts to audiences who need no introduction to the subject material.  They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit.  Here, the phenomenon of ‘tl;dr’ reveals its Achilles’ heel: the shorter the text, the less invested you are.  You give way more easily to centrifugal force.  You are more likely to navigate away.

There is a cost incurred both for substance and the lack thereof.  Such are the dilemmas of hypertext.

II:  Schwarzschild Radius

It appears inarguable that 2010 is the Year of the Electronic Book.  The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market.  Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package.  Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world.  The electronic book is an inevitability.

At this point a question needs to be asked: what’s so electronic about an electronic book?  If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar.  Too familiar.  This is not an electronic book.  This is ‘publishing in light’.  I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning.  An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters.  It doesn’t even necessarily begin with that translation.  Instead, first consider the text qua text.  What is it?  Who is it speaking to?  What is it speaking about?

These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light.  That act of murder would give us less than we had before, because the published in light texts essentially disavow the medium within which they are situated.  They are less useful than typeset texts, purposely stripped of their utility to be shoehorned into a new medium.  This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader.  Nor does it make the electronic book an intrinsically alluring object.  That’s an interesting point to consider, because hypertext is intrinsically alluring.  The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who had any relationship to the text.  If an electronic book does not offer a new relationship to the text, then what precisely is the point?  Portability?  Ubiquity?  These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring.  This is the visible difference between a book that has been printed in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is impossibly alluring.  At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.

Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium.  But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe.  Today’s publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium.  For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.

What does the electronic book look like?  Does it differ at all from the hyperdocuments we are familiar with today?  In fifteen years of design experimentation, we’ve learned a lot of ways to present, abstract and play with text.  All of these are immediately applicable to the electronic book.  The electronic book should represent the best of what 2010 has to offer and move forward from that point into regions unexplored.  The printed volume took nearly fifty years to evolve into its familiar hand-sized editions.  Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book.  We shouldn’t try to constrain our idea of what an electronic book can be based upon what the book has been.  Over the next few years, our innovations will surprise us.  We won’t really know what electronic books look like until we’ve had plenty of time to play with them.

The electronic book will not be immune from the centrifugal force which is inherent to the medium.  Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument.  Yet we come to books with a sense of commitment.  We want to finish them.  But what, exactly, do we want to finish?  The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does.  So does an electronic book have a beginning and an end?  Or is it simply a densely clustered set of texts with a well-defined path traversing them?  From the vantage point of 2010 this may seem like a faintly ridiculous question.  I doubt that will be the case in 2020, when perhaps half of our new books are electronic books.  The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book.  There is no way that the electronic book can remain apart, indifferent and pure.  It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader.  More of a gradient than a boundary.

It remains unclear how any such construction can constitute an economically successful entity.  Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe.  The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen.  This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear.  Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.
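Transclusion is easier to see in code than in prose.  The sketch below is a toy – the {{document:start-end}} reference syntax is invented for this illustration, and Xanadu itself was vastly more ambitious – but it shows the essential move: the quoted span stays in its source document, and is pulled in by reference each time the quoting document is assembled:

    # Toy transclusion: a document quotes a span of another document by
    # reference, and the reader's copy is assembled on demand, so the
    # source (and, in Xanadu's vision, its copyright and royalties) is
    # always known.
    import re

    library = {
        "orwell": "It was a bright cold day in April, and the clocks "
                  "were striking thirteen.",
        "essay":  "Orwell opens with a jolt – {{orwell:0-33}} – and the "
                  "clocks misbehave at once.",
    }

    REF = re.compile(r"\{\{(\w+):(\d+)-(\d+)\}\}")

    def resolve(doc_id):
        """Assemble a document, pulling each transcluded span from its source."""
        def pull(match):
            src = match.group(1)
            start, end = int(match.group(2)), int(match.group(3))
            return resolve(src)[start:end]   # sources may themselves transclude
        return REF.sub(pull, library[doc_id])

    print(resolve("essay"))

An HTML link, by contrast, points at a document but never reaches into it; quotation on the Web means copying, which is exactly the slippery slope the publishers fear.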

We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books.  (For those not familiar with astrophysics, the Schwarzschild Radius is the boundary to a black hole.  Once you’re on the wrong side you’re doomed to fall all the way in.)  On one side – our side – things look much as they do today.  Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical.  On the other side, electronic books rapidly become almost completely unrecognizable.  It’s not just the financial model which disintegrates.  As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform.  The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words.  Each of these fragments points directly at its antecedent and descendant, or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each.  The primacy of the single narrative can not withstand the centrifugal force of the medium, any more than the newspaper or the magazine could.  Texts will present themselves as intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both.  This will completely confound our expectations of linearity in the text.

We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward, a fault more of our nervous systems than our browsers.  We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on.  This is the greatest threat to the book, that it simply expands beyond our ability to focus upon it.  Our consciousness can entertain a universe of thought, but it can not entertain the entire universe at once.  Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it.  With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason, significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.

What ever happened to the book?  It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever.  This is not an ending, any more than birth is an ending.  But it is a transition, at least as profound and comprehensive as the invention of moveable type.  It’s our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book.  Transitions are chaotic, but they are also fecund.  The seeds of the new grow in the humus of the old.  (And if it all seems sudden and sinister, I’ll simply note that Nietzsche said that a new era nearly always looks demonic to the age it obsolesces.)

III:  Finnegans Wiki

So what of Aristotle?  What does this mean for the narrative?  It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts.  But what about stories?  From time out of mind we have listened to stories told by the campfire.  The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale.  For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.

Will we lose all of this?  Can narratives stand up against the centrifugal forces of hypertext?  Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form.  The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind.  There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot.  A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity. Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force.  We occasionally stay up all night reading a book that we ‘couldn’t put down’, precisely because of this momentum.  It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.

This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books.  Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet.  How can any text hope to stand against that?

And yet, some do.  Children unplugged to read each of the increasingly-lengthy Harry Potter novels, as teenagers did for the Twilight series.  Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination.  None of this is high literature, but it is literature capable of resisting all our alluring distractions.  This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc.  We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad.  That is one direction, a direction literary publishers will pursue, because that’s where the money lies.

There are two other paths open for literature, nearly diametrically opposed.  The first was taken by JRR Tolkien in The Lord of the Rings.  Although hugely popular, the three-book series has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating.  Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it.  And although readers do finish the book, in a very real sense they do not leave that universe.  The fantasy genre, which Tolkien single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations.  Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books.  Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy.  Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers.  This is another direction for the book.  While every author will not be a Tolkien, a few authors will work hard to create a universe so potent and broad that readers will be tempted to inhabit it.  (Some argue that this is the secret of JK Rowling’s success.)

Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext.  There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures.  But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas.  The book was written and published before the digital computer had been invented, yet even features an innovation which is reminiscent of hypertext.  That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power.  It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite.  The text is overloaded with meaning, so much so that the mind can’t take it all in.  Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant.  But there is another possibility.  In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed.  In this, Finnegans Wake could be seen as a type of science fiction, not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works) but rather a text that pointed the way to what all texts would become, performance by example.  As texts become electronic, as they melt and dissolve and  link together densely, meaning multiplies exponentially.  Every sentence, and every word in every sentence, can send you flying in almost any direction.  The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.

It has been said that all of human culture could be reconstituted from Finnegans Wake.  As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture.  Everything will be there, all strung together.  And that’s what happened to the book.

Dense and Thick

I: The Golden Age

In October of 1993 I bought myself a used SPARCstation.  I’d just come off of a consulting gig at Apple, and, flush with cash, wanted to learn UNIX systems administration.  I also had some ideas about coding networking protocols for shared virtual worlds.  Soon after I got the SPARCstation installed in my lounge room – complete with its thirty-kilo monster of a monitor – I grabbed a modem, connected it to the RS-232 port, configured SLIP, and dialed out onto the Internet.  Once online I used FTP, logged into SUNSITE and downloaded the newly released NCSA Mosaic, a graphical browser for the World Wide Web.

I’d first seen Mosaic running on an SGI workstation at the 1993 SIGGRAPH conference.  I knew what hypertext was – I’d built a Macintosh-based hypertext system back in 1986 – so I could see what Mosaic was doing, but there wasn’t much there.  Not enough content to make it really interesting.  The same problem that had bedeviled all hypertext systems since Douglas Engelbart’s first demo, back in 1968.  Without sufficient content, hypertext systems are fundamentally uninteresting.  Even HyperCard, Apple’s early experiment in hypertext, never really moved beyond the toy stage.  To make hypertext interesting, it must be broadly connected – beyond a document, beyond a hard drive.  Either everything is connected, or everything is useless.

In the three months between my first click on NCSA Mosaic and when I fired it up in my lounge room, a lot of people had come to the Web party.  The master list of Websites – maintained by CERN, the birthplace of the Web – kept growing.  Over the course of the last week of October 1993, I visited every single one of those Websites.  Then I was done.  I had surfed the entire World Wide Web.  I was even able to keep up, as new sites were added.

This gives you a sense of the size of the Web universe in those very early days.  Before the explosive ‘inflation’ of 1994 and 1995, the Web was a tiny, tidy place filled mostly with academic websites.  Yet even so, the Web had the capacity to suck you in.  I’d find something that interested me – astronomy, perhaps, or philosophy – and with a click-click-click find myself deep within something that spoke to me directly.  This, I believe, is the core of the Web experience, an experience now so many years behind us that we tend to overlook it.  At its essence, the Web is personally seductive.

I realized the universal truth of this statement on a cold night in early 1994, when I dragged my SPARCstation and boat-anchor monitor across town to a house party.  This party, a monthly event known as Anon Salon, was notorious for attracting the more intellectual and artistic crowd in San Francisco.  People would come to perform, create, demonstrate, and spectate.  I decided I would show these people this new-fangled thing I’d become obsessed with.  So, that evening, as the front door opened and another person entered, I’d sidle alongside them and ask, “So, what are you interested in?”  They’d mention their current hobby – gardening or vaudeville or whatever it might be – and I’d use the brand-new Yahoo! category index to look up a web page on the subject.  They’d be delighted, and begin to explore.  At no point did I say, “This is the World Wide Web.”  Nor did I use the word ‘hypertext’.  I let the intrinsic seductiveness of the Web snare them, one by one.

Of course, a few years later, San Francisco became the epicenter of the Web revolution.  Was I responsible for that?  I’d like to think so, but I reckon San Francisco was a bit of a nexus.  I wasn’t the only one exploring the Web.  That night at Anon Salon I met Jonathan Steuer, who walked on up and said, “Mosaic, hmm?  How about you type in ‘www.hotwired.com’?”  Steuer was part of the crew at work, just a few blocks away, bringing WIRED magazine online.  Everyone working on the Web shared the same fervor – an almost evangelical belief that the Web changes everything.  I didn’t have to tell Steuer, and he didn’t have to tell me.  We knew.  And we knew that if we simply shared the Web – not the technology, not its potential, but its real, seductive human face – we’d be done.

That’s pretty much how it worked out: the Web exploded from the second half of 1994, because it appeared to every single person who encountered it as the object of their desire.  It was, and is, all things to all people.  This makes it the perfect love machine – nothing can confirm your prejudices better than the Web.  It also makes the Web a very pretty hate machine.  It is the reflector and amplifier of all things human.  We were completely unprepared, and for that reason the Web has utterly overwhelmed us.  There is no going back.  If every website suddenly crashed, we would find another way to recreate the universal infinite hypertextual connection.

In the process of overwhelming us – in fact, part of the process itself – the Web has hoovered up the entire space of human culture; anything that can be digitized has been sucked into the Web.  Of course, this presents all sorts of thorny problems for individuals who claim copyright over cultural products, but they are, in essence, swimming against the tide.  The rest, everything that marks us as definably human, everything that is artifice, has, over the last fifteen years, been neatly and completely sucked into the space of infinite connection.  The project is not complete – it will never be complete – but it is substantially underway, and more will simply be more: it will not represent a qualitative difference.  We have already arrived at a new space, where human culture is now instantaneously and pervasively accessible to any of the four and a half billion network-connected individuals on the planet.

This, then, is the Golden Age, a time of rosy dawns and bright beginnings, when everything seems possible.  But this age is drawing to a close.  Two recent developments will, in retrospect, be seen as the beginning of the end.  The first of these is the transformation of the oldest medium into the newest.  The book is coextensive with history, with the largest part of what we regard as human culture.  Until five hundred and fifty years ago, books were handwritten, rare and precious.  Moveable type made books a mass medium, and lit the spark of modernity.  But the book, unlike nearly every other medium, has resisted its own digitization.  This year the defenses of the book have been breached, and ones and zeroes are rushing in.  Over the next decade perhaps half or more of all books will ephemeralize, disappearing into the ether, never to return to physical form.  That will seal the transformation of the human cultural project.

On the other hand, the arrival of the Web-as-appliance means it is now leaving the rarefied space of computers and mobiles-as-computers, and will now be seen as something as mundane as a book or a dinner plate.  Apple’s iPad is the first device of an entirely new class which treats the Web as an appliance, as something that is pervasively just there when needed, and put down when not.  The genius of Apple’s design is its extreme simplicity – too simple, I might add, for most of us.  It presents the Web as a surface, nothing more.  iPad is a portal into the human universe, stripped of everything that is a computer.  It is emphatically not a computer.  Now, we can discuss the relative merits of Apple’s design decisions – and we will, for some years to come.  But the basic strength of the iPad’s simplistic design will influence what the Web is about to become.

eBooks and the iPad bookend the Golden Age; together they represent the complete translation of the human universe into a universally and ubiquitously accessible form.  But the human universe is not the whole universe.  We tend to forget this as we stare into the alluring and seductive navel of our ever-more-present culture.  But the real world remains, and loses none of its importance even as the flashing lights of culture grow brighter and more hypnotic.

II: The Silver Age

Human beings have the peculiar capability to endow material objects with inner meaning.  We know this as one of the basic characteristics of humanness.  From the time a child anthropomorphizes a favorite doll or wooden train, we imbue the material world with the attributes of our own consciousness.  Soon enough we learn to discriminate between the animate and the inanimate, but we never surrender our continual attribution of meaning to the material world.  Things are never purely what they appear to be, instead we overlay our own meanings and associations onto every object in the world.  This process actually provides the mechanism by which the world comes to make sense to us.  If we could not overload the material world with meaning, we could not come to know it or manipulate it.

This layer of meaning is most often implicit; only in works of ‘art’ does the meaning crowd into the definition of the material itself.  But none of us can look at a thing and be completely innocent about its hidden meanings.  They constantly nip at the edges of our consciousness, unless, Zen-like, we practice an ‘emptiness of mind’, and attempt to encounter the material in an immediate, moment-to-moment awareness.  For those of us not in such a blessed state, the material world has a subconscious component.  Everything means something.  Everything is surrounded by a penumbra of meaning, associations that may be universal (an apple can invoke the Fall of Man, or Newton’s Laws of Gravity), or something entirely specific.  Through all of human history the interiority of the material world has remained hidden except in such moments as when we choose to allude to it.  It is always there, but rarely spoken of.  That is about to change.

One of the most significant, yet least understood implications of a planet where everyone is ubiquitously connected to the network via the mobile is that it brings the depth of the network ubiquitously to the individual.  You are – amazingly – connected to the other five billion individuals who carry mobiles, and you are also connected to everything that’s been hoovered into cyberspace over the past fifteen years.  That connection did not become entirely apparent until last year, as the first mobiles appeared with both GPS and compass capabilities.  Suddenly, it became possible to point through the camera on a mobile, and – using the location and orientation of the device – search through the network.

This technique has become known as ‘Augmented Reality’, or AR, and it promises to be one of the great growth areas in technology over the next decade – but perhaps not for the reasons the leaders of the field currently envision.  The strength of AR is not what it brings to the big things – the buildings and monuments – but what it brings to the smallest and most common objects in the material world.  At present, AR is flashy, but not at all useful.  It’s about to make a transition.  It will no longer be spectacular, but we’ll wonder how we lived without it.
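To make this concrete, here is a minimal sketch of the mechanic in Python.  Everything in it is an invented stand-in – the point-of-interest list, the names, the field of view – but the geometry is the heart of the trick: the device knows where it stands and which way it faces, and asks the network what lies along that bearing.

    import math

    # An invented, in-memory stand-in for the network's knowledge of the world.
    POIS = {
        "Sydney Opera House": (-33.8568, 151.2153),
        "Sydney Harbour Bridge": (-33.8523, 151.2108),
        "Taronga Zoo": (-33.8430, 151.2413),
    }

    def bearing_to(lat1, lon1, lat2, lon2):
        # Initial compass bearing, in degrees, from point 1 to point 2.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return math.degrees(math.atan2(y, x)) % 360

    def pointing_at(lat, lon, heading, field_of_view=30.0):
        # Return whatever sits within the camera's cone of view, squarest first.
        hits = []
        for name, (plat, plon) in POIS.items():
            offset = abs((bearing_to(lat, lon, plat, plon) - heading + 180) % 360 - 180)
            if offset <= field_of_view / 2:
                hits.append((offset, name))
        return [name for _, name in sorted(hits)]

    # Standing at Circular Quay with the camera aimed roughly north-east:
    print(pointing_at(-33.8610, 151.2105, heading=45))

Point the lens, and the list comes back ordered by how squarely each object sits in the frame.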

Let me illustrate the nature of this transition with three examples drawn from my own experience.  These three ‘thought experiments’ trace the different axes along which the world is moving from implicit meaning to a state where the implicit has become explicit.  Once meaning is exposed, it can be manipulated: this is something unexpected, and unexpectedly powerful.

Example One:  The Book

Last year I read a wonderful book.  The Rest is Noise: Listening to the Twentieth Century, by Alex Ross, is a thorough and thoroughly enjoyable history of music in the 20th century.  By music, Ross means what we would commonly call ‘classical’ music, even though the Classical period ended some two hundred years ago.  That’s not as stuffy as it sounds: George Gershwin and Aaron Copland are both major figures in 20th century music, though their works have always been classed as ‘popular’.

Ross’ book has a companion website, therestisnoise.com, which offers up chapter-by-chapter samples of the composers whose lives and exploits he explores in the text.  When I wrote The Playful World, back in 2000, and built a companion website to augment the text, it was considered quite revolutionary, but this is all pretty much standard for better books these days.

As I said earlier, the book is on the edge of ephemeralization.  It wants to be digitized, because it has always been a message, encoded.  When I dreamed up this example, I thought it would be very straightforward: you’d walk into your bookstore, point your smartphone at a book that caught your fancy, and instantly you’d find out what your friends thought of it, what their friends thought of it, what the reviewers thought of it, and so on.  You’d be able to make a well-briefed decision on whether this book is the right book for you.  Simple.  In fact, Google Labs has already shown a basic example of this kind of technology in a demo running on Android.
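A sketch of what that briefing might look like under the bonnet, with the barcode scan, the social graph and the review store all reduced to invented, in-memory stand-ins for what would really be calls across the network:

    # Invented stand-ins for a social graph and a review service.
    MY_FRIENDS = {"kate", "jonathan"}
    REVIEWS = {
        "the-rest-is-noise": [
            ("kate", 5, "A revelation."),
            ("jonathan", 4, "Dense but rewarding."),
            ("a_stranger", 2, "Not for me."),
        ],
    }

    def briefing(book_id):
        # Everything my cloud knows about this book, friends first.
        reviews = REVIEWS.get(book_id, [])
        friends = [r for r in reviews if r[0] in MY_FRIENDS]
        others = [r for r in reviews if r[0] not in MY_FRIENDS]
        average = sum(r[1] for r in friends) / len(friends) if friends else None
        return {"friends": friends, "everyone_else": others, "friend_average": average}

    # Point the phone at the book; the scan resolves to an identifier.
    print(briefing("the-rest-is-noise"))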

But that’s not what a book is anymore.  Yes, it’s good to know whether you should buy this or that book, but a book represents an investment of time, and an opportunity to open a window into an experience of knowledge in depth.  It’s this intention that the device has to support.  As the book slowly dissolves into the sea of fragmentary but infinitely threaded nodes of hypertext which are the human database, the device becomes the focal point, the lens through which the whole book appears, and appears to assemble itself.

This means that the book will vary, person to person.  My fragments will be sewn together with my threads, yours with your threads.  The idea of unitary authorship – persistent over the last five hundred years – won’t be overwhelmed by the collective efforts of crowdsourcing, but rather by the corrosive effects of hyperconnection.  The more connected everything becomes, the less prone we are to linearity.  We already see this in the ‘tl;dr’ phenomenon, where any text over 300 words becomes too onerous to read.

Somehow, whatever the book is becoming must balance the need for clarity and linearity against the centrifugal and connective forces of hypertext.  The book is about to be subsumed within the network; the device is the place where it will reassemble into meaning.  The implicit meaning of the book – that it has a linear story to tell, from first page to last – must be made explicit if the idea and function of the book is to survive.

The book stands on the threshold, between the worlds of the physical and the immaterial.  As such it is pulled in both directions at once.  It wants to be liberated, but will be utterly destroyed in that liberation.  The next example is something far more physical, and, consequently, far more important.

Example Two: Beef Mince

I go into the supermarket to buy myself the makings for a nice Spaghetti Bolognese.  Among the ingredients I’ll need some beef mince (ground beef for those of you in the United States) to put into the sauce.  Today I’d walk up to the meat case and throw a random package into my shopping trolley.  If I were being thoughtful, I’d probably read the label carefully, to make sure the expiration date wasn’t too close.  I might also check to see how much fat is in the mince.  Or perhaps it’s grass-fed beef.  Or organically grown.  All of this information is offered up on the label placed on the package.  And all of it is so carefully filtered that it means nearly nothing at all.

What I want to do is hold my device up to the package, and have it do the hard work.  Go through the supermarket to the distributor, through the distributor to the abattoir, through the abattoir to the farmer, through the farmer to the animal itself.  Was it healthy?  Where was it slaughtered?  Is that abattoir healthy?  (This isn’t much of an issue in Australia or New Zealand, but in America things are quite a bit different.)  Was it fed lots of antibiotics in a feedlot?  Which ones?

And – perhaps most importantly – what about the carbon footprint of this little package of mince?  How much CO2 was created?  How much methane?  How much water was consumed?  These questions, at the very core of 21st century life, need to be answered on demand if we can be expected to adjust our lifestyles so as to minimize our footprint on the planet.  Without a system like this, it is essentially impossible.  With such a system it can potentially become easy.  As I walk through the market, popping items into my trolley, my device can record and keep me informed of a careful balance between my carbon budget and my financial budget, helping me to optimize both – all while referencing my purchases against sales on offer in other supermarkets.
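As a sketch, the trolley-side ledger is almost embarrassingly simple – assuming, and it is a large assumption, that every scanned package can report the price and emissions figures from its own supply chain.  All of the numbers below are invented for illustration:

    # Two budgets, one trolley.  Figures invented for illustration.
    BUDGETS = {"dollars": 120.00, "kg_co2e": 25.0}
    trolley = []

    def scan(name, dollars, kg_co2e):
        # Add an item, then report both running totals against their budgets.
        trolley.append((name, dollars, kg_co2e))
        totals = {
            "dollars": sum(item[1] for item in trolley),
            "kg_co2e": sum(item[2] for item in trolley),
        }
        for key, limit in BUDGETS.items():
            state = "OK" if totals[key] <= limit else "OVER"
            print(f"{name}: {key} {totals[key]:.2f} of {limit:.2f} [{state}]")

    scan("beef mince 500g", 7.50, 13.5)
    scan("tinned tomatoes", 1.80, 0.4)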

Finally, what about the caloric count of that packet of mince?  And its nutritional value?  I should be tracking those as well – or rather, my device should – so that I can maintain optimal health.  I should know whether I’m getting too much fat, or insufficient fiber, or – as I’ll discuss in a moment – too much sodium.  Something should be keeping track of this.  Something that can watch and record and use that recording to build a model.  Something that can connect the real world of objects with the intangible set of goals that I have for myself.  Something that could do that would be exceptionally desirable.  It would be as seductive as the Web.

The more information we have at hand, the better the decisions we can make for ourselves.  It’s an idea so simple it is completely self-evident.  We won’t need to convince anyone of this, to sell them on the truth of it.  They will simply ask, ‘When can I have it?’  But there’s more.  My final example touches on something so personal and so vital that it may become the center of the drive to make the implicit explicit.

Example Three:  Medicine

Four months ago, I contracted adult-onset chickenpox.  Which was just about as much fun as that sounds.  (And yes, since you’ve asked, I did have it as a child.  Go figure.)  Every few days I had doctors come by to make sure that I was surviving the viral infection.  While the first doctor didn’t touch me at all – understandably – the second doctor took my blood pressure, and showed me the reading – 160/120, uncomfortably high.  He suggested that I go on Micardis, a common medication for hypertension.  I was too sick to argue, so I dutifully filled the prescription and began taking it that evening.

Whenever I begin taking a new medication – and I’m getting to an age where that happens with annoying regularity – I am always somewhat worried.  Medicines are never perfect; they work for a certain large cohort of people.  For others they do nothing at all.  For a far smaller number, they might be toxic.  So, when I popped that pill in my mouth I did wonder whether that medicine might turn out to be poison.

The doctor who came to see me was not my regular GP.  He did not know my medical history.  He did not know the history of the other medications I had been taking.  All he knew was what he saw when he walked into my flat.  That could be a recipe for disaster.  Not in this situation – I was fine, and have continued to take Micardis – but there are numerous other situations where medications can interact within the patient to cause all sorts of problems.  This is well known.  It is one of the drawbacks of modern pharmaceutical medicine.

This situation is only going to grow more intense as the population ages and pharmaceutical management of the chronic diseases of aging becomes ever-more-pervasive.  Right now we rely on doctors and pharmacists to keep their own models of our pharmaceutical consumption.  But that’s a model which is precisely backward.  While it is very important for them to know what drugs we’re on, it is even more important for us to be able to manage that knowledge for ourselves.  I need to be able to point my device at any medicine, and know, more or less immediately, whether that medicine will cure me or kill me.
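Something like the following sketch, though the interaction table here is invented and two entries long, where the real thing would be a pharmacological database keyed off my cloud-held health record:

    # My current medications (telmisartan is Micardis' active ingredient),
    # plus an invented, toy interaction table.
    MY_MEDICATIONS = {"telmisartan"}
    INTERACTIONS = {
        frozenset({"telmisartan", "ibuprofen"}):
            "NSAIDs can blunt blood-pressure medication and strain the kidneys.",
        frozenset({"warfarin", "aspirin"}):
            "Increased risk of bleeding.",
    }

    def check(new_drug):
        # Point the device at a medicine; get back any known conflicts.
        warnings = [note for pair, note in INTERACTIONS.items()
                    if new_drug in pair and (pair & (MY_MEDICATIONS - {new_drug}))]
        return warnings or ["No known interactions with your current medications."]

    print(check("ibuprofen"))      # flags the conflict
    print(check("paracetamol"))    # nothing known; all clear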

Over the next decade the cost of sequencing an entire human genome will fall from the roughly $5000 it costs today to less than $500.  Well within the range of your typical medical test.  Once that happens, it will be possible to compile epidemiological data which compares various genomes to the effectiveness of drugs.  Initial research in this area has already shown that some drugs are more effective among certain ethnic groups than others.  Our genome holds the clue to why drugs work, why they occasionally don’t, and why they sometimes kill.

The device is the connection point between our genome – which lives, most likely, somewhere out on a medical cloud – and the medicines we take, and the diagnoses we receive.  It is our interface to ourselves, and in that becomes an object of almost unimaginable importance.  In twenty years’ time, when I am ‘officially’ a senior, I will have a handheld device – an augmented reality – whose sole intent is to keep me as healthy as possible for as long as possible.  It will encompass everything known about me medically, and will integrate with everything I capture about my own life – my activities, my diet, my relationships.  It will work with me to optimize everything we know about health (which is bound to be quite a bit by 2030) so that I can live a long, rich, healthy life.

These three examples represent the promise bound up in the collision between the handheld device and the ubiquitous, knowledge-filled network.  There are already bits and pieces of much of this in place.  It is a revolution waiting to happen.  That revolution will change everything about the Web, and why we use it, how, and who profits from it.

III:  The Bronze Age

By now, some of you sitting here listening to me this afternoon are probably thinking, “That’s the Semantic Web.  He’s talking about the Semantic Web.”  And you’re right, I am talking about the Semantic Web.  But the Semantic Web as proposed and endlessly promoted by Sir Tim Berners-Lee was always about pushing, pushing, pushing to get the machines talking to one another.  What I have demonstrated in these three thought experiments is a world that is intrinsically so alluring and so seductive that it will pull us all into it.  That’s the vital difference which made the Web such a success in 1994 and 1995.  And it’s about to happen once again.

But we are starting from near zero.  Right now, I should be able to hold up my device, wave it around my flat, and have an interaction with the device about what’s in my flat.  I can not.  I can not Google for the contents of my home.  There is no place to put that information, even if I had it, nor systems to put that information to work.  It is exactly like the Web in 1993: the lights are on, but nobody’s home.  We have the capability to conceive of the world-as-a-database.  We have the capability to create that database.  We have systems which can put that database to work.  And we have the need to overlay the real world with that rich set of data.

We have the capability, we have the systems, we have the need.  But we have precious little connecting these three.  These are not businesses that exist yet.  We have not brought the real world into our conception of the Web.  That will have to change.  As it changes, the door opens to a crescendo of innovations that will make the Web revolution look puny in comparison.  There is an opportunity here to create industries bigger than Google, bigger than Microsoft, bigger than Apple.  As individuals and organizations figure out how to inject data into the real world, entirely new industry segments will be born.

I can not tell you exactly what will fire off this next revolution.  I doubt it will be the integration of Wikipedia with a mobile camera.  It will be something much more immediate.  Much more concrete.  Much more useful.  Perhaps something concerned with health.  Or with managing your carbon footprint.  Those two seem the most obvious to me.  But the real revolution will probably come from a direction no one expects.  It’s nearly always that way.

There’s no reason to think that Wellington couldn’t be the epicenter of that revolution.  There was nothing special about San Francisco back in 1993 and 1994.  But, once things got started, they created a ‘virtuous cycle’ of feedbacks that brought the best-and-brightest to San Francisco to build out the Web.  Wellington has done exactly that with the film industry; why shouldn’t it stretch out a bit, and invent this next-generation ‘web of things’?

This is where the future is entirely in your hands.  You can leave here today promising yourself to invent the future, to write meaning explicitly onto the real world, to transform our relationship to the universe of objects.  Or, you can wait for someone else to come along and do it.  Because someone inevitably will.  Every day, the pressure grows.  The real world is clamoring to crawl into cyberspace.  You can open the door.

Sharing Power (Aussie Rules)

I: Family Affairs

In the US state of North Carolina, the New York Times reports, an interesting experiment has been in progress since the first of February. The “Birds and Bees Text Line” invites teenagers with any questions relating to sex or the mysteries of dating to SMS their question to a phone number. That number connects these teenagers to an on-duty adult at the Adolescent Pregnancy Prevention Campaign. Within 24 hours, the teenager gets a reply to their text. The questions range from the run-of-the-mill – “When is a person not a virgin anymore?” – through the unusual – “If you have sex underwater do u need a condom?” – to the utterly heart-rending – “Hey, I’m preg and don’t know how 2 tell my parents. Can you help?”

The Birds and Bees Text Line is a response to the slow rise in teenage pregnancies in North Carolina since the rate reached its lowest ebb in 2003. Teenagers – who are given state-mandated abstinence-only sex education in school – now have access to another resource, unmediated by teachers or parents, to prevent another generation of teenage pregnancies. Although it’s early days yet, the response to the program has been positive. Teenagers are using the Birds and Bees Text Line.

It is precisely because the Birds and Bees Text Line is unmediated by parental control that it has earned the ire of the more conservative elements in North Carolina. Bill Brooks, president of the North Carolina Family Policy Council, a conservative group, complained to the Times about the lack of oversight. “If I couldn’t control access to this service, I’d turn off the texting service. When it comes to the Internet, parents are advised to put blockers on their computer and keep it in a central place in the home. But kids can have access to this on their cell phones when they’re away from parental influence – and it can’t be controlled.”

If I’d stuffed words into a straw man’s mouth, I couldn’t have come up with a better summation of the situation we’re all in right now: young and old, rich and poor, liberal and conservative. There are certain points where it becomes particularly obvious, such as with the Birds and Bees Text Line, but this example simply amplifies our sense of the present as a very strange place, an undiscovered country that we’ve all suddenly been thrust into. Conservatives naturally react conservatively, seeking to preserve what has worked in the past; Bill Brooks speaks for a large cohort of people who feel increasingly lost in this bewildering present.

Let us assume, for a moment, that conservatism was in the ascendant (though this is clearly not the case in the United States, one could make a good argument that the Rudd Government is, in many ways, more conservative than its predecessor). Let us presume that Bill Brooks and the people for whom he speaks could have the Birds and Bees Text Line shut down. Would that, then, be the end of it? Would we have stuffed the genie back into the bottle? The answer, unquestionably, is no.

Everyone who has used or even heard of the Birds and Bees Text Line would be familiar with what it does and how it works. Once demonstrated, it becomes much easier to reproduce. It would be relatively straightforward to take the same functions performed by the Birds and Bees Text Line and “crowdsource” them, sharing the load across any number of dedicated volunteers who might, through some clever software, automate most of the tasks needed to distribute messages throughout the “cloud” of volunteers. Even if it took a small amount of money to set up and get going, that kind of money would be available from donors who feel that teenage sexual education is a worthwhile thing.
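The “clever software” need not be terribly clever. A first sketch – the names are invented, and the very real work of vetting and safeguarding the volunteers is set aside – is little more than round-robin dispatch:

    import itertools

    # Fan incoming texts out, round-robin, to whoever is on duty.
    volunteers = ["aisha", "ben", "carol"]
    on_duty = itertools.cycle(volunteers)

    def dispatch(question):
        # Route one incoming text to the next volunteer in the rotation.
        handler = next(on_duty)
        print(f"routed to {handler}: {question!r}")
        return handler

    dispatch("When is a person not a virgin anymore?")
    dispatch("If you have sex underwater do u need a condom?")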

In other words, the same sort of engine which powers Wikipedia can be put to work across a number of different “platforms”. The power of sharing allows individuals to come together in great “clouds” of activity, and allows them to focus their activity around a single task. It could be an encyclopedia, or it could be providing reliable and judgment-free information about sexuality to teenagers. The form matters not at all: what matters is that it’s happening, all around us, everywhere throughout the world.

The cloud, this new thing, this is really what has Bill Brooks scared, because it is, quite literally, ‘out of control’. It arises naturally out of the human condition of ‘hyperconnection’. We are so much better connected than we were even a decade ago, and this connectivity breeds new capabilities. The first of these capabilities is the pooling and sharing of knowledge – or ‘hyperintelligence’. Consider: everyone who reads Wikipedia is potentially as smart as the smartest person who’s written an article in Wikipedia. Wikipedia has effectively banished ignorance born of want of knowledge. The Birds and Bees Text Line is another form of hyperintelligence, connecting adults with knowledge to teenagers in desperate need of that knowledge.

Hyperconnectivity also means that we can carefully watch one another, and learn from one another’s behaviors at the speed of light. This new capability – ‘hypermimesis’ – means that new behaviors, such as the Birds and Bees Text Line, can be seen and copied very quickly. Finally, hypermimesis means that communities of interest can form around particular behaviors, ‘clouds’ of potential. These communities range from the mundane to the arcane, and they are everywhere online. But only recently have they discovered that they can translate their community into doing, putting hyperintelligence to work for the benefit of the community. This is the methodology of the Adolescent Pregnancy Prevention Campaign. This is the methodology of Wikipedia. This is the methodology of Wikileaks, which seeks to provide a safe place for whistle-blowers who want to share the goods on those who attempt to defraud or censor or suppress. This is the methodology of ANONYMOUS, which seeks to expose Scientology as a ridiculous cult. How many more examples need to be listed before we admit that the rules have changed, that the smooth functioning of power has been terrifically interrupted by these other forces, now powers in their own right?

II: Affairs of State

Don’t expect a revolution. We will not see masses of hyperconnected individuals, storming the Winter Palaces of power. This is not a proletarian revolt. It is, instead, rather more subtle and complex. The entire nature of power has changed, as have the burdens of power. Power has always carried with it the ‘burden of omniscience’ – that is, those at the top of the hierarchy have to possess a complete knowledge of everything of importance happening everywhere under their control. Where they lose grasp of that knowledge, that’s the space where coups, palace revolutions and popular revolts take place.

This new power that flows from the cloud of hyperconnectivity carries a different burden, the ‘burden of connection’. In order to maintain the cloud, and our presence within it, we are beholden to it. We must maintain each of the social relationships, each of the informational relationships, each of the knowledge relationships and each of the mimetic relationships within the cloud. Without that constant activity, the cloud dissipates, evaporating into nothing at all.

This is not a particularly new phenomenon; Dunbar’s Number demonstrates that we are beholden to the ‘tribe’ of our peers, the roughly 150 individuals who can find a place in our heads. In pre-civilization, the cloud was the tribe. Should the members of the tribe interrupt the constant reinforcement of their social, informational, knowledge-based and mimetic relationships, the tribe would dissolve and disperse – as happens to a tribe when it grows beyond the confines of Dunbar’s Number.

In this hyperconnected era, we can pick and choose which of our human connections deserves reinforcement; the lines of that reinforcement shape the scope of our power. Studies of Japanese teenagers using mobiles and twenty-somethings on Facebook have shown that, most of the time, activity is directed toward a small circle of peers, perhaps six or seven others. This ‘co-presence’ is probably a modern echo of an ancient behavior, presumably related to the familial unit.

While we might desire to extend our power and capabilities through our networks of hyperconnections, the cost associated with such investments is very high. Time invested in a far-flung cloud is time lost on the networks closer to home. Yet individuals will nonetheless often dedicate themselves to some cause greater than themselves, despite the high price paid, drawn to some higher ideal.

The Obama campaign proved an interesting example of the price of connectivity. During the Democratic primary for the state of New York (which Hillary Clinton was expected to win easily), so many individuals contacted the campaign through its website that the campaign itself quickly became overloaded with the number of connections it was expected to maintain. By election day, the campaign staff in New York had retreated from the web, back to using mobiles. They had detached from the ‘cloud’ connectivity they used the web to foster, instead focusing their connectivity on the older model of the six or seven individuals in co-present connection. The enormous cloud of power which could have been put to work in New York lay dormant, unorganized, talking to itself through the Obama website, but effectively disconnected from the Obama campaign.

For each of us, connectivity carries a high price. For every organization which attempts to harness hyperconnectivity, the price is even higher. With very few exceptions, organizations are structured along hierarchical lines. Power flows from the bottom to the top. Not only does this create the ‘burden of omniscience’ at the highest levels of the organization, it also fundamentally mismatches the flows of power in the cloud. When the hierarchy comes into contact with an energized cloud, the ‘discharge’ from the cloud to the hierarchy can completely overload the hierarchy. That’s the power of hyperconnectivity.

Another example from the Obama campaign demonstrates this power. Project Houdini was touted by the Obama campaign as a system which would let the campaign’s grassroots funnel their get-out-the-vote (GOTV) results into a centralized database, which could then be used to track down individuals who hadn’t voted, in order to offer them assistance in getting to their local polling station. The campaign grassroots received training in Project Houdini, went through a field test of the software and procedures, then waited for election day. On election day, Project Houdini lasted no more than 15 minutes before it crashed under the incredible number of empowered individuals who attempted to plug data into it. Although months in the making, Project Houdini proved that a centralized and hierarchical system for campaign management couldn’t actually cope with the ‘cloud’ of grassroots organizers.

In the 21st century we now have two oppositional methods of organization: the hierarchy and the cloud. Each carries its own costs and its own strengths. Neither has yet proven to be wholly better than the other. One could make an argument that both have their own roles in the future, and that we’ll be spending a lot of time learning which works best in a given situation. What we have already learned is that these organizational types are mostly incompatible: unless very specific steps are taken, the cloud overpowers the hierarchy, or the hierarchy dissipates the cloud. We need to think about the interfaces that can connect one to the other. That’s the area that all organizations – and very specifically, non-profit organizations – will be working through in the coming years. Learning how to harness the power of the cloud will mark the difference between a modest success and an overwhelming one. Yet working with the cloud will present organizational challenges of an unprecedented order. There is no way that any hierarchy can work with a cloud without becoming fundamentally changed by the experience.

III: Affaire de Coeur

All organizations are now confronted with two utterly divergent methodologies for organizing their activities: the tower and the cloud. The tower seeks to organize everything in hierarchies, control information flows, and keep the power heading from bottom to top. The cloud isn’t formally organized, pools its information resources, and has no center of power. Despite all of its obvious weaknesses, the cloud can still transform itself into a formidable power, capable of overwhelming the tower. To push the metaphor a little further, the cloud can become a storm.

How does this happen? What is it that turns a cloud into a storm? Jimmy Wales has said that the success of any language-variant version of Wikipedia comes down to the dedicated efforts of five individuals. Once he spies those five individuals hard at work in Pashto or Kazakh or Xhosa, he knows that edition of Wikipedia will become a success. In other words, five people have to take the lead, leading everyone else in the cloud with their dedication, their selflessness, and their openness. This number probably holds true in a cloud of any sort – find five like-minded individuals, and the transformation from cloud to storm will begin.

At the end of that transformation there is still no hierarchy. There are, instead, concentric circles of involvement. At the innermost, those five or more incredibly dedicated individuals; then a larger circle of a greater number, who work with that inner five as time and opportunity allow; and so on, outward, at decreasing levels of involvement, until we reach those who simply contribute a word or a grammatical change, and have no real connection with the inner circle, except in commonality of purpose. This is the model for Wikipedia, for Wikileaks, and for ANONYMOUS. This is the cloud model, fully actualized as a storm. At this point the storm can challenge any tower.

But the storm doesn’t have things all its own way; to present a challenge to a tower is to invite the full presentation of its own power, which is very rude, very physical, and potentially very deadly. Wikipedians at work on the Farsi version of the encyclopedia face arrest and persecution by Iran’s Revolutionary Guards and religious police. Just a few weeks ago, after the contents of the Australian government’s internet blacklist were posted to Wikileaks, the German government raided the home of the man who owns the domain name for Wikileaks in Germany. The tower still controls most of the power apparatus in the world, and that power can be used to squeeze any potential competitors.

But what happens when you try to squeeze a cloud? Effectively, nothing at all. Wikipedia has no head to decapitate. Jimmy Wales is an effective cheerleader and face for the press, but his presence isn’t strictly necessary. There are over 2000 Wikipedians who handle the day-to-day work. Locking all of them away, while possible, would only encourage further development in the cloud, as other individuals moved to fill their places. Moreover, any attempt to disrupt the cloud only makes the cloud more resilient. This has been demonstrated conclusively by the evolution of ‘darknets’, private file-sharing networks, which grew up as the open and widely available file-sharing networks, such as Napster, were shut down by the copyright owners. Attacks on the cloud only improve the networks within the cloud, only make the leaders more dedicated, only increase the information and knowledge sharing within the cloud. Trying to disperse a storm only intensifies it.

These are not idle speculations; the tower will seek to contain the storm by any means necessary. The 21st century will increasingly look like a series of collisions between towers and storms. Each time the storm emerges triumphant, the tower will become more radical and determined in its efforts to disperse the storm, which will only result in a more energized and intensified storm. This is not a game that the tower can win by fighting. Only by opening up and adjusting itself to the structure of the cloud can the tower find any way forward.

What, then, is leadership in the cloud? It is not like leadership in the tower. It is not a position wrought from power, but authority in its other, and more primary meaning, ‘to be the master of’. Authority in the cloud is drawn from dedication, or, to use rather more precise language, love. Love is what holds the cloud together. People are attracted to the cloud because they are in love with the aim of the cloud. The cloud truly is an affair of the heart, and these affairs of the heart will be the engines that drive 21st century business, politics and community.

Author and pundit Clay Shirky has stated, “The internet is better at stopping things than starting them.” I reckon he’s only half right: the internet is very good at starting things that stop things. But it is also very good at starting things, full stop. Making the jump from an amorphous cloud of potentiality to a forceful storm requires the love of just five people. That’s not much to ask. If you can’t get that many people in love with your cause, it may not be worth pursuing.

Conclusion: Managing Your Affairs

All 21st century organizations need to recognize and adapt to the power of the cloud. It’s either that or face a death of a thousand cuts, the slow ebbing of power away from hierarchically-structured organizations as newer forms of organization supplant them. But it need not be this way. It need not be an either/or choice. It could be a future of and-and-and, where both forms continue to co-exist peacefully. But that will only come to pass if hierarchies recognize the power of the cloud.

This means you.

All of you have your own hierarchical organizations – because that’s how organizations have always been run. Yet each of you is surrounded by your own clouds: community organizations (both in the real world and online), bulletin boards, blogs, and all of the other Web2.0 supports for the sharing of connectivity, information, knowledge and power. You are already halfway invested in the cloud, whether or not you realize it. And that’s also true for the people you serve, your customers and clients and interest groups. You can’t simply ignore the cloud.

How then should organizations proceed?

First recommendation: do not be scared of the cloud. It might be some time before you can come to love the cloud, or even trust it, but you must at least move to a place where you are not frightened by a constituency which uses the cloud to assert its own empowerment. Reacting out of fright will only lead to an arms race, a series of escalations where your hierarchy attempts to contain the cloud, and the cloud – which is faster, smarter and more agile than you can ever hope to be – outwits you, again and again.

Second: like likes like. If you can permute your organization so that it looks more like the cloud, you’ll have an easier time working with the cloud. Case in point: because of ‘message discipline’, only a very few people are allowed to speak for an organization. Yet, because of the exponential growth in connectivity and Web2.0 technologies, everyone in your organization has more opportunities to speak for your organization than ever before. Can you release control over message discipline, and empower your organization to speak for itself, from any point of contact? Yes, this sounds dangerous, and yes, there are some dangers involved, but the cloud wants to be spoken to authentically, and authenticity has many competing voices, not a single monolithic tone.

Third, and finally, remember that we are all involved in a growth process. The cloud of last year is not the cloud of next year. The answers that satisfied a year ago are not the same answers that will satisfy a year from now. We are all booting up very quickly into an alternative form of social organization which is only just now spreading its wings and testing its worth. Beginnings are delicate times. The future will be shaped by actions in the present. This means there are enormous opportunities to extend the capabilities of existing organizations, simply by harnessing them to the changes underway. It also means that tragedies await those who fight the tide of times too single-mindedly. Our culture has already rounded the corner, and made the transition to the cloud. It remains to be seen which of our institutions and organizations can adapt themselves, and find their way forward into sharing power.

Digital Citizenship

Keynote for the Digital Fair of the Australian College of Educators, Geelong Grammar School, 16 April 2009.

Introduction: Out of Control

A spectre is haunting the classroom, the spectre of change. Nearly a century of institutional forms, initiated at the height of the Industrial Era, will change irrevocably over the next decade. The change is already well underway, but this change is not being led by teachers, administrators, parents or politicians. Coming from the ground up, the true agents of change are the students within the educational system. Within just the last five years, both power and control have swung so quickly and so completely in their favor that it’s all any of us can do to keep up. We live in an interregnum, between the shift in power and its full actualization: these wacky kids don’t yet realize how powerful they are.

This power shift does not have a single cause, nor could it be thwarted through any single change, to set the clock back to a more stable time. Instead, we are all participating in a broadly-based cultural transformation. The forces unleashed can not simply be dammed up; thus far they have swept aside every attempt to contain them. While some may be content to sit on the sidelines and wait until this cultural reorganization plays itself out, as educators you have no such luxury. Everything hits you first, and with full force. You are embedded within this change, as much so as this generation of students.

This paper outlines the basic features of this new world we are hurtling towards, pointing out the obvious rocks and shoals that we must avoid being thrown up against, collisions which could dash us to bits. It is a world where even the illusion of control has been torn away from us. A world wherein the first thing we need to recognize is that what is called for in the classroom is a strategic détente, a détente based on mutual interest and respect. Without those two core qualities we have nothing, and chaos will drown all our hopes for worthwhile outcomes. These outcomes are not hard to achieve; one might say that any classroom which lacks mutual respect and interest is inevitably doomed to failure, no matter what the tenor of the times. But just now, in this time, it happens altogether more quickly.

Hence I come to the title of this talk, “Digital Citizenship”. We have given our children the Bomb, and they can – if they so choose – use it to wipe out life as we know it. Right now we sit uneasily in an era of mutually-assured destruction, all the more dangerous because these kids don’t know how fully empowered they are. They could pull the pin by accident. For this reason we must understand them, study them intently, like anthropologists doing field research with an undiscovered tribe. They are not the same as us. Unwittingly, we have changed the rules of the world for them. When the Superpowers stared each other down during the Cold War, each was comforted by the fact that each knew the other had essentially the same hopes and concerns underneath the patina of Capitalism or Communism. This time around, in this Cold War, we stare into eyes so alien they could be another form of life entirely. And this, I must repeat, is entirely our own doing. We have created the cultural preconditions for this Balance of Terror. It is up to us to create an environment that fosters respect, trust, and a new balance of powers. To do that, we must first examine the nature of the tremendous changes which have fundamentally altered the way children think.

I: Primary Influences

I am a constructivist. Constructivism states (in terms that now seem fairly obvious) that children learn the rules of the world from their repeated interactions within it. Children build schema, which are then put to the test through experiment; if these experiments succeed, those schema are incorporated into ever-larger schema, but if they fail, it’s back to the drawing board to create new schema. This all seems straightforward enough – even though Einstein pronounced it, “An idea so simple only a genius could have thought of it.” That genius, Jean Piaget, remains an overarching influence across the entire field of childhood development.

At the end of the last decade I became intensely aware that the rapid technological transformations of the past generation must necessarily impact upon the world views of children. At just the time my ideas were gestating, I was privileged to attend a presentation given by Sherry Turkle, a professor at the Massachusetts Institute of Technology, and perhaps the most subtle thinker in the area of children and technology. Turkle talked about her current research, which involved a recently-released and fantastically popular children’s toy, the Furby.

For those of you who may have missed the craze, the Furby is an animatronic creature which has expressive eyes, touch sensors, and a modest capability with language. When first powered up, the Furby speaks ‘Furbish’, an artificial language which the child can decode by looking up words in a dictionary booklet included in the package. As the child interacts with the toy, the Furby’s language slowly adopts more and more English phrases. All of this is interesting enough, but more interesting, by far, is that the Furby has needs. Furby must be fed and played with. Furby must rest and sleep after a session of play. All of this gives the Furby some attributes normally associated with living things, and this gave Turkle an idea.

Constructivists had already determined that between ages four and six children learn to differentiate between animate objects, such as a pet dog, and inanimate objects, such as a doll. Since Furby showed qualities which placed it into both ontological categories, Turkle wondered whether children would class it as animate or inanimate. What she discovered during her interviews with these children astounded her. When the question was put to them of whether the Furby was animate or inanimate, the children said, “Neither.” The children intuited that the Furby resided in a new ontological class of objects, between the animate and inanimate. It’s exactly this ontological in-between-ness of Furby which causes some adults to find them “creepy”. We have no convenient slot for them in our own world views, and therefore reject them as alien. But Furby was completely natural to these children. Even the invention of a new ontological class of being-ness didn’t strain their understanding. It was, to them, simply the way the world works.

Writ large, the Furby tells the story of our entire civilization. We make much of the difference between “digital immigrants”, such as ourselves, and “digital natives”, such as these children. These kids are entirely comfortable within the digital world, having never known anything else. We casually assume that this difference is merely a quantitative facility. In fact, the difference is almost entirely qualitative. The schema upon which their world-views are based, the literal ‘rules of their world’, are completely different. Furby has an interiority hitherto only ascribed to living things, and while it may not attain the full measure of a living thing, it is nevertheless somewhere on a spectrum that simply did not exist a generation ago. It is a magical object, sprinkled with the pixie dust of interactivity, come partially to life, and closer to a real-world Pinocchio than we adults would care to acknowledge.

If Furby were the only example of this transformation of the material world, we would be able to easily cope with the changes in the way children think. It was, instead, the leading edge of a broad transformation. For example, when I was growing up, LEGO bricks were simple, inanimate objects which could be assembled in an infinite arrangement of forms. Today, LEGO Mindstorms allows children to create programmable forms, using wheels and gears and belts and motors and sensors. LEGO is no longer passive, but active and capable of interacting with the child. It, too, has acquired an interiority which teaches children that at some essential level the entire material world is poised at the threshold of a transformation into the active. A child playing with LEGO Mindstorms will never see the material world as wholly inanimate; they will see it as a playground requiring little more than a few simple mechanical additions, plus a sprinkling of code, to bring it to life. Furby adds interiority to the inanimate world, but LEGO Mindstorms empowers the child with the ability to add this interiority themselves.

The most significant of these transformational innovations is one of the most recent. In 2004, Google purchased Keyhole, Inc., a company that specialized in geospatial data visualization tools. A year later Google released the first version of Google Earth, a tool which provides a desktop environment wherein the entire Earth’s surface can be browsed, at varying levels of resolution, from high Earth orbit down to the level of storefronts, anywhere throughout the world. This tool, both free and flexible, has fomented a revolution in the teaching of geography, history and political science. No longer are we constrained to the archaic Mercator Projection atlas on the wall, or the static globe-as-a-ball perched on one corner of the teacher’s desk: Google Earth presents Earth-as-a-snapshot.

We must step back and ask ourselves the qualitative lesson, the constructivist message of Google Earth. Certainly it removes the problem of scale; the child can see the world from any point of view, even multiple points of view simultaneously. But it also teaches them that ‘to observe is to understand’. A child can view the ever-expanding drying of southern Australia along with data showing the rise in temperature over the past decade, all laid out across the continent. The Earth becomes a chalkboard, a spreadsheet, a presentation medium, where the thorny problems of global civilization and its discontents can be explored in exquisite detail. In this sense, no problem, no matter how vast, no matter how global, will be seen as being beyond the reach of these children. They’ll learn this – not because of what the teacher says, or what homework assignments they complete – but through interaction with the technology itself.
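The chalkboard is open to anyone who can write a small file. Google Earth ingests KML, a plain XML format, so a few lines of code suffice to lay a set of readings across the continent – the figures below are invented for illustration:

    # Write a KML overlay that Google Earth can open directly.
    # Place names are real; the readings are invented for illustration.
    readings = [
        ("Adelaide", -34.93, 138.60, "+0.8 C against the decadal mean"),
        ("Mildura", -34.19, 142.16, "+1.1 C against the decadal mean"),
        ("Horsham", -36.71, 142.20, "+0.9 C against the decadal mean"),
    ]

    placemarks = "".join(
        f"<Placemark><name>{name}</name>"
        f"<description>{note}</description>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
        for name, lat, lon, note in readings
    )

    with open("warming.kml", "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>'
                '<kml xmlns="http://www.opengis.net/kml/2.2">'
                f"<Document>{placemarks}</Document></kml>")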

The generation of children raised on Google Earth will graduate from secondary schools in 2017, just at the time the Government plans to complete its rollout of the National Broadband Network. I reckon these two tools will go hand-in-hand: broadband connects the home to the world, while Google Earth brings the world into the home. Australians, particularly beset by the problems of global warming, climate, and environmental management, need the best tools and the best minds to solve the problems already upon us. Fortunately it looks as though we are training a generation for leadership, using the tools already at hand.

The existence of Google Earth as an interactive object changes the child’s relationship to the planet. A simulation of Earth is a profoundly new thing, and naturally is generating new ontological categories. Yet again, and completely by accident, we have profoundly altered the world view of this generation of children and young adults. We are doing this to ourselves: our industries turn out products and toys and games which apply the latest technological developments in a dazzling variety of ways. We give these objects to our children, largely unaware of how this will affect their development. Then we wonder how these aliens arrived in our midst, these ‘digital natives’ with their curious ways. Ladies and gentlemen, we need to admit that we have done this to ourselves. We and our technological-materialist culture have fostered an environment of such tremendous novelty and variety that we have changed the equations of childhood.

Yet these technologies are only the tip of the iceberg. Each is a technology of childhood, of a world of objects, where the relationship runs between child and object. This is not the world of adults, where our relationships with objects are thoroughly entangled with our relationships with other adults. In fact, it can be said that as much as adults are obsessed with material possessions, we are obsessed with them only because of our relationships to other adults. The corner we turn between childhood and young adulthood marks a change in the way we think, in the objects of our attention, and in the technologies which facilitate and amplify that attention. These technologies have also suddenly and profoundly changed, and, again, we are almost completely unaware of what that has done to those wacky kids.

II: Share This Thought!

Australia now has more mobile phone subscriptions than people. We have reached 104% subscription levels, simply because some of us own and use more than one handset. This phenomenon has been repeated globally; there are something like four billion mobile subscriptions throughout the world, representing approximately three point six billion individual customers. That’s well over half the population of planet Earth. Given that there are only about a billion people in the ‘advanced’ economies of the developed world – almost all of whom now use mobiles – some two and a half billion of the relatively ‘poor’ also have mobiles. How could this be? Shouldn’t these people be spending their money on food, housing, and education for their children?

As it turns out (and there are numerous examples to support this) a mobile handset is probably the most important tool someone can employ to improve their economic well-being. A farmer can call ahead to markets to find out which is paying the best price for his crop; the same goes for fishermen. Tradesmen can close deals without the hassle and lost time involved in travel; craftswomen can coordinate their creative resources with a few text messages. Each of these examples can be found in any Bangladeshi city or African village. In the developed world, the mobile was nice but non-essential: no one is late anymore, just delayed, because we can always phone ahead. In the parts of the world which never had wired communications, the leap into the network has been explosively potent.

The mobile is a social accelerant; it does for our innate social capabilities what the steam shovel did for our mechanical capabilities two hundred years ago. The mobile extends our social reach, and deepens our social connectivity. Nowhere is this more noticeable than in the lives of those wacky kids. At the beginning of this decade, researcher Mizuko Ito took a look at the mobile phone in the lives of Japanese teenagers. Ito published her research in Personal, Portable, Pedestrian: Mobile Phones in Japanese Life, presenting a surprising result: these teenagers were sending and receiving a hundred text messages a day among a close-knit group of friends (generally four or five others), starting when they first arose in the morning and going on until they fell asleep at night. This constant, gentle connectivity – which Ito named ‘co-presence’ – often consisted of little of substance, just reminders of connection.

At the time many of Ito’s readers dismissed this phenomenon as something to be found among those ‘wacky Japanese’, with their technophilic bent. A decade later this co-presence is the standard behavior for all teenagers everywhere in the developed world. An Australian teenager thinks nothing of sending and receiving a hundred text messages a day, within their own close group of friends. A parent who might dare to look at the message log on a teenager’s phone would see very little of significance and wonder why these messages needed to be sent at all. But the content doesn’t matter: connection is the significant factor.

We now know that the teenage years are when the brain ‘boots’ into its full social awareness, when children leave childhood behind to become fully participating members within the richness of human society. This process has always been painful and awkward, but just now, with the addition of the social accelerant and amplifier of the mobile, it has become almost impossibly significant. The co-present social network can help cushion the blow of rejection, or it can impel the teenager to greater acts of folly. Both sides of the technology-as-amplifier are ever-present. We have seen bullying by mobile and over YouTube or Facebook; we know how quickly the technology can overrun any of the natural instincts which might prevent us from causing damage far beyond our intention – keep this in mind, because we’ll come back to it when we discuss digital citizenship in detail.

There is another side to sociability, both far removed from this bullying behavior and intimately related to it – the desire to share. The sharing of information is an innate human behavior: since we learned to speak we’ve been talking to each other, warning each other of dangers, informing each other of opportunities, positing possibilities, and just generally reassuring each other with the sound of our voices. We’ve now extended that reach across four billion handsets, so that more than half of humanity is directly connected, one to another.

We know we say little of substance to those we know well, though we may say it continuously. What do we say to those we know not at all? In that case we share not words but the artifacts of culture. We share a song, or a video clip, or a link, or a photograph. Each of these is just as important as words spoken, but each places us at a comfortable distance within the intimate act of sharing. 21st-century culture looks like a gigantic act of sharing. We share music, movies and television programmes, driving the creative industries to distraction – particularly with the younger generation, who see no need to pay for any cultural product. We share information and knowledge, creating a wealth of blogs, and resources such as Wikipedia, the universal repository of factual information about the world as it is. We share the minutiae of our lives in micro-blogging services such as Twitter, and find that, being so well connected, we can also harvest the knowledge of our networks to become ever-better informed and ever more effective individuals. We can translate that effectiveness into action, and become potent forces for change.

Everything we do, both within and outside the classroom, must be seen through this prism of sharing. Teenagers log onto video chat services such as Skype and do their homework together, at a distance, sharing and comparing their results. Parents offer up their kindergartener’s presentations to other parents through Twitter – and those parents respond to the offer. All of this both amplifies and undermines the classroom. The classroom has not dealt with the phenomenal transformation in the connectivity of the broader culture, and is in danger of being rendered obsolete by it.

Yet if the classroom were wholeheartedly to embrace connectivity, what would become of it? Would it simply dissolve into a chaotic sea, or is it strong enough to chart its own course in this new world? This same question confronts every institution, of every size. It affects the classroom first simply because the networked and co-present polity of hyperconnected teenagers has reached it first. It is the first institution that must transform, because the young adults who are its reason for being are the agents of that transformation. There’s no way around it, no way to set the clock back to a simpler time – unless, Amish-like, we were simply to dispose of all the gadgets we have adopted as essential elements of our lifestyle.

This, then, is why these children hold the future of the classroom-as-institution in their hands, and why the power-shift has been so sudden and so complete. This is why digital citizenship isn’t simply an academic interest, but a clear and present problem which must be addressed, broadly and immediately, throughout our entire educational system. We already live in a time of disconnect, where the classroom has stopped reflecting the world outside its walls. The classroom is born of an industrial mode of thinking, where hierarchy and reproducibility were the order of the day. The world outside those walls is networked and highly heterogeneous. And where the classroom touches the world outside, sparks fly; the classroom can’t handle the currents generated by the culture of connectivity and sharing. This cannot go on.

When discussing digital citizenship, we must first look to ourselves. This is more than a question of learning the language and tools of the digital era: we must take the life-skills we have already gained outside the classroom and bring them within. But beyond this, we must relentlessly apply network logic to the work of our own lives. If that work is as educators, so be it. We must accept the reality of the 21st century: that, more than anything else, this is the networked era, and that the network has gifted us with new capabilities even as it presents us with new dangers. Both gifts and dangers are issues of potency; the network has made us incredibly powerful. The network is smarter, faster and more agile than the hierarchy; when the two collide – as they’re bound to, with increasing frequency – the network always wins. A text message can unleash revolution, or land a teenager in jail on charges of peddling child pornography, or spark a riot on a Sydney beach; Wikipedia can drive Britannica, a quarter-millennium-old reference work, out of business; an outsider candidate can get himself elected president of the United States because his team masters the logic of the network. In truth, we already live in the age of digital citizenship, but so many of us don’t know the rules, and hence are poor citizens.

Now that we’ve explored the dimensions of the transition in the understanding of the younger generation, and the desynchronization of our own practice from the world as it exists, we can finally tackle the issue of digital citizenship. Children and young adults who have grown up in this brave new world, who have already created new ontological categories to frame it in their understanding, won’t have time or attention for preaching and screeching from the pulpit in the classroom, or from the ‘bully pulpits’ of the media. In some ways, their understanding already surpasses ours, but their grasp of consequences does not. It is entirely up to us to bridge this gap in their understanding, but I do not mean to imply that educators can handle this task alone. All of the adult forces of the culture must be involved: parents, caretakers, educators, administrators, mentors, authority and institutional figures of all kinds. We must all be pulling in the same direction, lest the threads we are trying to weave together unravel.

III: 20/60 Foresight

While I was on a lecture tour last year, a Queensland teacher said something quite profound to me: “Giving a year 7 student a laptop is the equivalent of giving them a loaded gun.” Just as we wouldn’t think of giving a child a gun without extensive safety instruction, we shouldn’t even consider giving a child a computer – and access to the network – without extensive training in digital citizenship. But the laptop is only one device; any networked device has the potential for the same pitfalls.

Long before Sherry Turkle explored Furby’s effect on the world-view of children, she examined how children interact with computers. In her first study of the subject, The Second Self: Computers and the Human Spirit, she applied Lacanian psychoanalysis and constructivism to build a model of how children interacted with computers. In the earliest days of the personal computer revolution, these machines were not connected to any networks; they were instead laboratories where the child could explore themselves, creating a ‘mirror’ of their own understanding.

Now that almost every computer is fully connected to the billion-plus regular users of the Internet, the mirror no longer reflects the self, but rather the collective, yet highly heterogeneous, tastes and behaviors of humankind. The opportunity for quiet self-exploration drowns amidst the clamor of a vital, very human world. In the space between the singular and the collective, we must provide an opportunity for children to grow into a sense of themselves, their capabilities, and their responsibilities. This liminal moment is the space for an education in digital citizenship. It may be the only space available for such an education, before the lure of the network sets behavioral patterns in place.

Children must be raised to have a healthy respect for the network from their earliest awareness of it. The network access of young children is generally closely supervised, but, as they turn the corner into their tweenage years and secondary education, we need to provide another level of support, one which fully briefs these rapidly maturing children on the dangers, pitfalls, opportunities and strengths of network culture. They already know how to do things, but they do not yet have the wisdom to decide when it is appropriate to do them, and when it is appropriate to refrain. That wisdom is the core of what must be passed along. But wisdom is hard to transmit in words; it must flow from actions and lessons learned. Is it possible to develop a lesson plan which imparts the lessons of digital citizenship? Can we teach these children to tame their new powers?

Before a child is given their own mobile – something that happens around age 12 here in Australia, though that age is slowly dropping – they must learn the right way to use it. Not the perfunctory ‘this is not a toy’ talk they might receive from a parent, but a more subtle and profound exploration of what it means to be directly connected to half of humanity, and of how, should that connectivity go awry, it could seriously affect someone’s life – possibly even their own. Yes, the younger generation has different values where the privacy of personal information is concerned, but even they have limits they want to respect, and circles of intimacy they want to defend. Showing them how to reinforce their privacy with technology is a good place to start in any discussion of digital citizenship.

Similarly, before a child is given a computer – either at home or in school – it must be accompanied by instruction in the power of the network. A child may have a natural facility with the network without having any sense of the power of the network as an amplifier of capability. It’s that disconnect which digital citizenship must bridge.

It’s not my role to be prescriptive. I’m not going to tell you to do this or that particular thing, or outline a five-step plan to ensure that the next generation avoid ruining their lives as they come online. This is a collective problem which calls for a collective solution. Fortunately, we live in an era of collective technology. It is possible for all of us to come together and collaborate on solutions to this problem. Digital citizenship is an issue with global reach; the UK and the US are both confronting similar issues, and both, like Australia, have failed to deal with them comprehensively. Perhaps the Australian College of Educators can act as a spearhead on this issue, working in concert with other national bodies to develop a program and curriculum in digital citizenship. It would be a project worthy of your next fifty years.

In closing, let’s cast our eyes forward fifty years, to 2060, when your organization will be celebrating its hundredth anniversary. We can only imagine the technological advances of the next fifty years in the fuzziest of terms. You need only cast yourselves back fifty years to understand why. Back then, a computer as powerful as my laptop wouldn’t have fit within a single building – or even a single city block. It very likely would have filled a small city, and required its own power plant. If we have come so far in fifty years, judging where we’ll be in another fifty is beyond the capabilities of even the most able futurist. We can only say that computers will become pervasive, woven nearly invisibly through the fabric of human culture.

Let us instead focus on how we will use technology in fifty years’ time. We can already see the shape of the future in one outstanding example – a website known as RateMyProfessors.com. Here, in a database of nine million reviews of one million teachers, lecturers and professors, students can learn which instructors bore, which grade easily, which excite the mind, and so forth. This simple site – which grew out of the power of sharing – has radically changed the balance of power on university campuses throughout the US and the UK. Students can learn from the mistakes and triumphs of others – avoiding the one, repeating the other. Universities, which might try to corral students into lectures with instructors who are hardly exemplars of the profession, find themselves unable to fill those courses. Worse yet, bidding wars have broken out between universities seeking to fill their ranks with the instructors who receive the highest ratings.

Alongside the rise of RateMyProfessors.com, there has been an exponential increase in the amount of lecture material you can find online, whether on YouTube, or iTunes University, or any number of dedicated websites. Those lectures also have ratings, so it is already possible for a student to get to the best and most popular lectures on any subject, be it calculus or Mandarin or the medieval history of Europe.

Both of these trends are accelerating because both are backed by the power of sharing, the engine driving all of this. As we move further into the future, we’ll see students gradually take control of the scheduling functions of the university (and probably of a large number of secondary school classes). These students will pair lecturers with courses, using software to coordinate both. More and more, the educational institution will be reduced to a layer of software sitting between the student, the mentor-instructor and the courseware. As the university dissolves in the universal solvent of the network, the capacity to use the network for education increases geometrically; education will be available everywhere the network reaches. It already reaches half of humanity; in a few years it will cover three-quarters of the population of the planet. Certainly by 2060, network access will be thought of as a human right, much like food and clean water.

In 2060, the Australian College of Educators may be more of an ‘Invisible College’ than anything based in rude physicality. Educators will continue to collaborate, but without much of the physical infrastructure we currently associate with educational institutions. Classrooms will self-organize and disperse organically, driven by need, proximity, or interest, and the best instructors will find themselves constantly in demand. Life-long learning will no longer be a catch-phrase, but a reality for the billions of individuals all focused on improving their effectiveness within an ever-more-competitive global market for talent. (The same techniques employed by RateMyProfessors.com will eventually impact all the other professions.)

There you have it. The human future is both more chaotic and more potent than we can easily imagine, even if we have examples in our present which point the way to where we are going. And if this future sounds far away, keep this in mind: today’s year 10 student will be retiring in 2060. This is their world.

Inflection Points

I: The Universal Solvent

I have to admit that I am in awe of iTunes University. It’s just amazing that so many well-respected universities – Stanford, MIT, Yale, and Uni Melbourne – are willing to put their crown jewels – their lectures – online for everyone to download. It’s outstanding when even one school provides a wealth of material, but as more schools add their own, we begin to see some of the virtues of crowdsourcing. First, you have a virtuous cycle: as more material is shared, more material will be made available to share. And once the virtuous cycle gets going, it’s all about a flight to quality.

When you have half a dozen – or a hundred – lectures on calculus, which one do you choose? The one featuring the best lecturer with the best presentation skills, the best examples, and the best math jokes – of course. This is my only complaint with iTunes University: you can’t rate the various lectures on offer. You can know which ones have been downloaded most often, but that’s not precisely the same thing as knowing which calculus seminar or which sociology lecture is the best. So as much as I love iTunes University, I see it as only halfway there. Perhaps Apple didn’t want to turn iTunes U into a popularity contest, but, without that vital bit of feedback, it’s nearly impossible for us to winnow the educational wheat from the chaff.

This is something that has to happen inside the system; it could happen across a thousand educational blogs spread out across the Web, but then it’s too diffuse to be really helpful. The reviews have to be coordinated and collated – just as with RateMyProfessors.com.

Say, that’s an interesting point. Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University? If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings. And as iTunes University gets bigger and bigger, RateMyLectures.com becomes ever more important, the ultimate guide to the ultimate source of educational multimedia on the Internet. One needs the other to be wholly useful; without ratings iTunes U is just an undifferentiated pile of possibilities. But with ratings, iTunes U becomes a highly focused and effective tool for digital education.
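To make that difference concrete, here is a minimal sketch in Python – using entirely hypothetical lecture data – of why ratings matter: ranking by average rating surfaces a different ‘best’ lecture than ranking by raw download counts, which is all iTunes U exposes today.

```python
from statistics import mean

# Hypothetical lecture data: (title, download count, list of 1-5 star ratings).
lectures = [
    ("Calculus I, Lecturer A", 90_000, [3, 2, 4, 3, 3]),
    ("Calculus I, Lecturer B", 12_000, [5, 5, 4, 5, 5]),
    ("Calculus I, Lecturer C", 40_000, [4, 3, 4, 4, 3]),
]

# iTunes U today: popularity is the only visible signal.
by_downloads = sorted(lectures, key=lambda lec: lec[1], reverse=True)

# With a RateMyLectures-style service: quality becomes visible.
by_rating = sorted(lectures, key=lambda lec: mean(lec[2]), reverse=True)

print("Most downloaded:", by_downloads[0][0])   # Lecturer A
print("Best rated:     ", by_rating[0][0])      # Lecturer B
```

The most-downloaded lecture and the best-rated lecture need not be the same one; downloads measure popularity, while ratings measure satisfaction – and it’s the second signal that iTunes U is missing.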

Now let’s cast our minds ahead a few semesters: iTunes U is bigger and better than ever, and RateMyLectures.com has benefited from hundreds of thousands of contributed reviews. Those reviews extend beyond the content in iTunes U, out into YouTube and Google Video and Vimeo and Blip.tv and wherever else people are creating lectures and putting them online. Now anyone can come by the site and discover the absolute best lecture on almost any subject they care to research. The net is now cast globally; I can search for the best lecture on Earth, so long as it’s been captured, uploaded somewhere, and rated on RateMyLectures.com.

All of a sudden we’ve imploded the boundaries of the classroom. The lecture can come from the US, or the UK, or Canada, or New Zealand, or any other country. Location doesn’t matter – only its rating as ‘best’ matters. This means that every student, every time they sit down at a computer, already has – or soon will have – the absolute best lectures available, globally. That’s just a mind-blowing fact. It grows very naturally out of our desire to share, and our desire to share ratings about what we have shared. Nothing extraordinary needed to happen to produce this entirely extraordinary state of affairs.

The network is acting like a universal solvent, dissolving all of the boundaries that have kept things separate. It’s not just dissolving the boundaries of distance – though it is doing that – it’s also dissolving the boundaries of preference. Although there will always be differences in taste and delivery, some instructors are simply better lecturers – in better command of their material – than others. Those instructors will rise to the top. Just as RateMyProfessors.com has created a global market for the lecturers with the highest ratings, RateMyLectures.com will create a global market for the best performances, the best material, the best lessons.

That RateMyLectures.com is only a hypothetical shouldn’t put you off. Part of what’s happening at this inflection point is that we’re all collectively learning how to harness the network for intelligence augmentation – Engelbart’s final triumph. All we need do is identify an area which could benefit from knowledge sharing and, sooner rather than later, someone will come along with a solution. I’d actually be very surprised if a service a lot like RateMyLectures.com doesn’t already exist. It may be small and unimpressive now. But Wikipedia was once small and unimpressive. If it’s useful, it will likely grow large enough to be successful.

Of course, lectures alone do not an education make. Lectures are necessary, but they are only one part of the educational process. Mentoring, problem solving, answering questions: all of these take place in the very real, very physical classroom. The best lectures in the world are only part of the story. The network is also transforming the classroom from the inside out, melting it down and forging it into something that looks quite different from the classroom we’ve grown familiar with over the last fifty years.

II: Fluid Dynamics

If we take the examples of RateMyProfessors.com and RateMyLectures.com and push them out a little, we can see the shape of things to come. Both Stanford University and the Massachusetts Institute of Technology, which have placed entire sets of lectures online through iTunes University, assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, but it discounts the possibility that some individuals, or groups of individuals, might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship; the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating these components. Students can share their ratings online – why wouldn’t they also share their educational goals? And once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom – indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the individual student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we can already see this changing; RateMyProfessors.com points the way. Why not create a new kind of “Open” school – a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already, someone is working on it now; it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.

In this near-future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, comes to represent the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.
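As a thought experiment, here is a toy sketch in Python of that marketplace logic – the data structures, names and numbers are entirely hypothetical – showing how a thin layer of software could pool students by shared goal and pair each pool with the highest-rated instructor bidding on that subject.

```python
from collections import defaultdict

# Entirely hypothetical data: students pool their educational goals...
student_goals = {
    "amy": ["calculus", "mandarin"],
    "ben": ["calculus"],
    "chloe": ["mandarin"],
    "dev": ["calculus"],
}

# ...and instructors 'bid' to teach subjects, carrying their ratings with them.
instructor_bids = [
    ("Dr. Okafor", "calculus", 4.9),
    ("Dr. Singh", "calculus", 4.1),
    ("Ms. Li", "mandarin", 4.7),
]

# Pool the students by shared goal: this is the 'class', self-organized.
pools = defaultdict(list)
for student, goals in student_goals.items():
    for goal in goals:
        pools[goal].append(student)

# The institution reduced to a layer of software: pair each pool
# with the highest-rated instructor bidding on that subject.
for subject, students in pools.items():
    bids = [(rating, name) for name, subj, rating in instructor_bids if subj == subject]
    if bids:
        rating, winner = max(bids)
        print(f"{subject}: {winner} (rated {rating}) teaches {', '.join(students)}")
```

Everything the 20th-century administration did by hand collapses into a few lines of matching logic; the hard parts that remain – standards, funding, oversight – are exactly where government stays involved.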

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and the source of all knowledge – perhaps with a companion textbook. In the age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them to make sense of it, contextualizing and informing their understanding, even as the students continue to work their way through the ever-growing sea of information. The instructor cannot know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. Instructors facilitate and mentor, as they have always done, but they are no longer the gatekeepers – because there are no gatekeepers, anywhere.

The administration has gone, and the instructor’s role has evolved – so what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if they are learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If learning can happen entirely online, that will be the classroom. If it requires substantial presence with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode – vanishing online – and explode: the world will become the classroom.

All of this can already be predicted from current trends; as the network begins to destabilize the institutional hierarchies in education, everything else becomes inevitable. Because this transformation lies mostly in the future, it is still possible to shape these trends with actions taken in the present. In the worst-case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by those students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly empowered students, educational institutions must become more fluid, more open, more atomic – and less interested in the hallowed traditions of education than in its outcomes.

III: Digital Citizenship

Obviously, much of what I’ve described here as the “melting down” of the educational process applies first and foremost to university students. That’s where most of the activity is taking place. But I would argue that it only begins with university students. From there – just as Facebook did – it spreads across the gap between tertiary and secondary education, and into the high schools and colleges.

This is significant and interesting because it’s at this point that we, within Australia, run headlong into the Government’s plan to provide laptops for all year 9 through year 12 students. Some schools will start earlier; there’s a general consensus among educators that year 7 is the earliest a student should be trusted to behave responsibly with their “own” computer. Either way, the students will be fully equipped and capable of using all of the tools at hand to manage their own education.

But will they? Some of this is a simple question of discipline: will the students be disciplined enough to take an ever-more-active role in the co-production of their education? As ever, the question is neither black nor white; some students will demonstrate the qualities of discipline needed to allow them to assume responsibility for their education, while others will not.

But, somewhere along here, there’s the presumption of some magical moment during the secondary school years, when the student suddenly learns how to behave online. And we already know this isn’t happening. We see too many incidents where students make mistakes, behaving badly without fully understanding that the whole world really is watching.

In the early part of this year I did a speaking tour with the Australian Council for Educational Research; during the tour I did a lot of listening. One thing I heard loud and clear from the educators is that giving a year 7 student a laptop is the functional equivalent of handing them a loaded gun. And we shouldn’t be surprised, when we do this, that there are a few accidental – or volitional – shootings.

I mentioned this in a talk to TAFE educators last week, and one of the attendees suggested that we needed to teach “Digital Citizenship”. I’d never heard the phrase before, but I’ve taken quite a liking to it. Of course, by the time a student gets to TAFE, the damage is done. We shouldn’t start talking about digital citizenship in TAFE. We should be talking about it from the first days of secondary education. And it’s not something that should be confined to the school: parents are on the hook for this, too. Even when the parents are not digitally literate, they can impart the moral and ethical lessons of good behavior to their children, lessons which will transfer to online behavior.

Make no mistake: without a firm grounding in digital citizenship, a secondary student can’t hope to make sense of the incredibly rich and impossibly distracting world afforded by the network. Unless we throttle the internet connection – which always seems to be the first option administrators reach for – students will find themselves overwhelmed. That’s not surprising: we’ve taught them few skills to help them harness the incredible wealth available. In part that’s because we’re only just learning those skills ourselves; in part it’s because doing so would mean relinquishing control, and we’re reluctant to do that. A course in digital citizenship would help both students and teachers feel more at ease with one another when confronted by the noise of the online world.

This inflection point in education is inevitably going to cross the gap between tertiary and secondary schools and students. Students will be able to do things for themselves in ways that were never possible before. None of this means that the teacher, or even the administrator, has become obsolete. But the secondary school of the mid-21st century may look a lot more like a website than a campus. The classroom will have a fluid shape, driven by the teacher, the students and the subject material.

Have we prepared students for this world? Have we given them the ability to make wise decisions about their own education? Or are we like those university administrators who mutter about how RateMyProfessors.com has ruined all their carefully-laid plans? The world where students were simply the passive consumers of an educational product is coming to an end. There are other products out there, clamoring for attention – you can thank Apple for that. And YouTube.

Once we get through this inflection point in the digital revolution in education, we arrive in a landscape that’s frankly mind-blowing. We will each have access to educational resources far beyond anything on offer at any other time in human history. The dream of life-long learning will be just a few clicks away for the billion-plus people on the Internet, and for many of the four billion who use mobiles. It will not be an easy transition, nor will things be perfect on the other side. But it will be incredible: a validation of everything Douglas Engelbart demonstrated forty years ago, and an opportunity to create a truly global educational culture, focused on excellence, and dedicated to serving all students, everywhere.