Mothers of Innovation

Introduction:  Olden Days

In February 1984, seeking a reprieve from the cold and windy streets of Boston, Massachusetts, I ducked inside a computer store.  I saw the usual array of IBM PCs and peripherals, the Apple ][, probably even an Atari system.  Prominently displayed at the front of the store was my first Macintosh.  It wasn’t known as a Mac 128K or anything like that.  It was simply Macintosh.  I walked up to it, intrigued – already, the Reality Distortion Field was capable of luring geeks like me to their doom – and spied the unfamiliar graphical desktop and the cute little mouse.  Sitting down in the chair before the machine, I grasped the mouse and moved the cursor across the screen.  But how do I get it to do anything? I wondered.  Click.  Nothing.  Click, drag – oh look, some of these things changed color!  But now what?  Gah.  This is too hard.

That’s when I gave up, pushed myself away from that first Macintosh, and pronounced this experiment in ‘intuitive’ computing a failure.  Graphical computing isn’t intuitive; that’s a bit of a marketing fib.  It’s a metaphor, and you need to grasp the metaphor – you need to be taught what it means – to work fluidly within the environment.  The metaphor is easy to apprehend once it has become the dominant technique for working with computers – as it has in 2010.  Twenty-six years ago, it was a different story.  You can’t assume that people will intuit what to do with your abstract representations of data or your arcane interface methods.  Intuition isn’t always intuitively obvious.

A few months later I had a job at a firm that designed bar code readers.  (That, by the way, was the most boring job I’ve ever had – the only one I got fired from for insubordination.)  We were designing a bar code reader for Macintosh, so we had one in-house, a unit with a nice carrying case so that I could ‘borrow’ it on weekends.  Which I did.  Every weekend.  The first weekend I got it home, unpacked it, plugged it in, popped in the system disk, booted it, ejected the system disk, popped in the applications disk, and worked my way through MacPaint and MacWrite and on to my favorite application of all – Hendrix.

Hendrix took advantage of the advanced sound synthesis capabilities of Macintosh.  Presented with a perfectly white screen, you dragged the mouse along the display.  The position, velocity, and acceleration of the pointer determined what kind of heavily altered but unmistakably guitar-like sounds came out of the speaker.  For someone who had lived with the bleeps and blurps of the 8-bit world, it was a revelation.  It was, in the vernacular of Boston, ‘wicked’.  I couldn’t stop playing with Hendrix.  I invited friends over, showed them, and they couldn’t stop playing with Hendrix.  Hendrix was the first interactive computer program that I gave a damn about, the first one that really showed me what a computer could be used for.  Not just pushing paper or pixels around, but an instrument, and an essential tool for human creativity.

Everything that’s followed in all the years since has been interesting to me only when it pushes the boundaries of our creativity.  I grew entranced by virtual reality in the early 1990s, because of the possibilities it offered up for an entirely new playing field for creativity.  When I first saw the Web, in the middle of 1993, I quickly realized that it, too, would become a cornerstone of creativity.  That roughly brings us forward from the ‘olden days’ to today.

This morning I want to explore creativity along the axis of three classes of devices, as represented by the three Apple devices that I own: the desktop (my 17” MacBook Pro Core i7), the mobile (my iPhone 3GS 32GB), and the tablet (my iPad 16GB 3G).  I will draw from my own experience as both a user and a developer for these devices, using that experience to illuminate a path before us.  So much is in play right now, so much is possible; all we need do is shine a light to see the incredible opportunities all around.

I:  The Power of Babel

I love OSX, and have used it more or less exclusively since 2003, when it truly became a usable operating system.  I’m running Snow Leopard on my MacBook Pro, and so far have suffered only one Grey Screen Of Death.  (And, if I know how to read a stack trace, that crash was probably caused by Flash.  Go figure.)  OSX is solid, it’s modestly secure, and it has plenty of eye candy.  My favorite bit of that is Spaces, which allows me to segregate my workspace into separate virtual screens.

The upper left-hand space has Mail.app, the upper right-hand has Safari, the lower right-hand has TweetDeck and Skype, while the lower left-hand is reserved for the task at hand – in this case, writing these words.  Each of the apps, except Microsoft Word, is inherently Internet-oriented, an application designed to facilitate human communication.  This is the logical and inexorable outcome of a process that began back in 1969, when the first nodes began exchanging packets on the ARPANET.  Phase one: build the network.  Phase two: connect everything to the network.  Phase three: PROFIT!

That seems to have worked out pretty much according to plan.  Our computers have morphed from document processors – that’s what most computers of any stripe were used for until about 1995 – into communication machines, handling the hard work of managing a world that grows increasingly connected.  All of this communication is amazing and wonderful and has provided the fertile ground for innovations like Wikipedia and Twitter and Skype, but it also feels like too much of a good thing.  Connection has its own gravitational quality – the more connected we become, the more we feel the demand to remain connected continuously.

We salivate like Pavlov’s dogs every time our email application rewards us with the ‘bing’ of an incoming message, and we keep one eye on Twitter all day long, just in case something interesting – or at least diverting – crosses the transom.  Blame our brains.  They’re primed to release the pleasure neurotransmitter dopamine at the slightest hint of a reward; connecting with another person is (under most circumstances) a guaranteed hit of pleasure.

That’s turned us into connection junkies.  We pile connection upon connection upon connection until we numb ourselves into a zombie-like overconnectivity, then collapse and withdraw, feeling the spiral of depression as we realize we can’t handle the weight of all the connections that we want so desperately to maintain.

Not a pretty picture, is it?  Yet the computer is doing an incredible job, acting as a shield between what our brains are prepared to handle and the immensity of information and connectivity out there.  Just as consciousness is primarily the filtering of signal from the noise of the universe, our computers are the filters between the roaring insanity of the Internet and the tidy little gardens of our thoughts.  They take chaos and organize it.  Email clients are excellent illustrations of this; the best of them allow us to sort and order our correspondence based on need, desire, and goals.  They prevent us from seeing the deluge of spam which makes up more than 90% of all SMTP traffic, and help us to stay focused on the task at hand.

Electronic mail was just the beginning of the revolution in social messaging; today we have Tweets and instant messages and Foursquare checkins and Flickr photos and YouTube videos and Delicious links and Tumblr blogs and endless, almost countless feeds.  All of it recommended by someone, somewhere, and all of it worthy of at least some of our attention.  We’re burdened by too many web sites and apps needed to manage all of this opportunity for connectivity.  The problem has become most acute on our mobiles, where we need a separate app for every social messaging service.

This is fine in 2010, but what happens in 2012, when there are ten times as many services on offer, all of them delivering interesting and useful things?  All these services, all these websites, and all these little apps threaten to drown us with their own popularity.

Does this mean that our computers are destined to become like our television tuners, which may have hundreds of channels on offer, but never see us watch more than a handful of them?  Do we have some sort of upper boundary on the amount of connectivity we can handle before we overload?  Clay Shirky has rightly pointed out that there is no such thing as information overload, only filter failure.  If we find ourselves overwhelmed by our social messaging, we’ve got to build some better filters.

This is the great growth opportunity for the desktop, the place where the action will be happening – when it isn’t happening in the browser.  Since the desktop is the nexus of the full power of the Internet and the full set of your own data (even the data stored in the cloud is accessed primarily from your desktop), it is the logical place to create some insanely great next-generation filtering software.

That’s precisely what I’ve been working on.  This past May I got hit by a massive brainwave – one so big I couldn’t ignore it, couldn’t put it down, couldn’t do anything but think about it obsessively.

I wanted to create a tool that could aggregate all of my social messaging – email, Twitter, RSS and Atom feeds, Delicious, Flickr, Foursquare, and on and on and on.  I also wanted the tool to be able to distribute my own social messages, in whatever format I wanted to transmit, through whatever social message channel I cared to use.

Then I wouldn’t need to go hither and yon, using Foursquare for this, and Flickr for that and Twitter for something else.  I also wouldn’t have to worry about which friends used which services; I’d be able to maintain that list digitally, and this tool would adjust my transmissions appropriately, sending messages to each as they want to receive them, allowing me to receive messages from each as they care to send them.

That’s not a complicated idea.  Individuals and companies have been nibbling around the edges of it for a while.

I am going the rest of the way, creating a tool that functions as the last ‘social message manager’ that anyone will need.  It’s called Plexus, and it functions as middleware – sitting between the Internet and whatever interface you might want to cook up to view and compose all of your social messaging.

Now were I devious, I’d coyly suggest that a lot of opportunity lies in building front-end tools for Plexus, ways to bring some order to the increasing flow of social messaging.  But I’m not coy.  I’ll come right out and say it: Plexus is an open-source project, and I need some help here.  That’s a reflection of the fact that we all need some help here.  We’re being clubbed into submission by our connectivity.  I’m trying to develop a tool which will allow us to create better filters, flexible filters, social filters, all sorts of ways of slicing and dicing our digital social selves.  That’s got to happen as we invent ever more ways to connect, and as we do all of this inventing, the need for such a tool becomes more and more clear.
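To make the shape of the thing concrete, here is a minimal sketch, in Python, of how such middleware might be organized – one adapter per service, one merged inbound stream, and outbound fan-out keyed to each contact’s preferred channel.  To be clear, the class and method names below are my own invention for illustration; they are not the actual Plexus interfaces.

    # A sketch of a Plexus-style social message manager.  Hypothetical:
    # these class and method names are mine, not the real Plexus API.
    from abc import ABC, abstractmethod
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Message:
        author: str
        body: str
        channel: str            # 'email', 'twitter', 'rss', ...
        sent_at: datetime = field(default_factory=datetime.utcnow)

    class ChannelAdapter(ABC):
        """One adapter per service: email, Twitter, RSS, Flickr, and so on."""
        name: str

        @abstractmethod
        def fetch(self) -> list:
            """Pull new messages from the service."""

        @abstractmethod
        def post(self, body: str, recipient: str) -> None:
            """Push a message out through the service."""

    class Plexus:
        """Middleware: merge every inbound stream, fan out every outbound one."""

        def __init__(self):
            self.adapters = {}      # service name -> ChannelAdapter
            self.preferences = {}   # contact -> their preferred channel

        def register(self, adapter: ChannelAdapter) -> None:
            self.adapters[adapter.name] = adapter

        def inbox(self) -> list:
            # One merged, time-ordered stream, whatever the source.
            messages = [m for a in self.adapters.values() for m in a.fetch()]
            return sorted(messages, key=lambda m: m.sent_at)

        def send(self, body: str, contacts: list) -> None:
            # Deliver to each contact over the channel they prefer to use.
            for contact in contacts:
                channel = self.preferences.get(contact, 'email')
                self.adapters[channel].post(body, contact)

The front-end opportunity is everything that sits on top of inbox() – the filters, the views, all the slicing and dicing.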

We see people throwing their hands up, declaring ‘email bankruptcy’, quitting Twitter, or committing ‘Facebookicide’, because they can’t handle the consequences of connectivity.

We secretly yearn for that moment after the door to the aircraft closes, and we’re forced to turn our devices off for an hour or two or twelve.  Finally, some time to think.  Some time to be.  Science backs this up; the measurable consequence of over-connectivity is that we don’t have the mental room to roam with our thoughts, to ruminate, to explore and play within our own minds.  We’re too busy attending to the next message.  We need to disconnect periodically, and focus on the real.  We desperately need tools which allow us to manage our social connectivity better than we can today.

Once we can do that, we can filter the noise and listen to the music of others.  We will be able to move so much more quickly – together – and it will be another electronic renaissance: just like 1994, with Web 1.0, and 2004, with Web 2.0.

That’s my hope, that’s my vision, and it’s what I’m directing my energies toward.  It’s not the only direction for the desktop, but it does represent the natural evolution of what the desktop has become.  The desktop has been shaped not just by technology, but by the social forces stirred up by our technology.

It is not an accident that our desktops act as social filters; they are the right tool at the right time for the most important job before us – how we communicate with one another.  We need to bring all of our creativity to bear on this task, or we’ll find ourselves speechless, shouted down, lost at another Tower of Babel.

II: The Axis of Me-ville

Three and a half weeks ago, I received a call from my rental agent.  My unit was going on the auction block – would I mind moving out?  Immediately?  I’ve lived in the same flat since I first moved to Sydney, seven years ago, so this news came as quite a shock.

I spent a week going through the five stages of mourning: denial, anger, bargaining, depression, and acceptance.  The day I reached acceptance, I took matters in hand, the old-fashioned way: I went online, to domain.com.au, and looked for rental units in my neighborhood.

Within two minutes I learned that there were two units for rent within my own building!

When you stop to think about it, that’s a bit weird.  There were no signs posted in my building, no indication that either of the units was for rent.  I’d heard nothing from the few neighbors I know well enough to chat with.  They didn’t know either.  Something was happening right underneath our noses – something of immediate relevance to me – and none of us knew about it.  Why?  Because we don’t know our neighbors.

For city dwellers this is not an unusual state of affairs.  One of the pleasures of the city is its anonymity.  That’s also one of its great dangers.  The two go hand-in-hand.  Yet the world of 2010 does not offer up this kind of anonymity easily.  Consider: we can re-establish a connection with someone we went to high school with thirty years ago – and really never thought about in all the years that followed – but still not know the names of the people in the unit next door, names we might utter with bitter anger after they’ve turned up the music again.  How can we claim that there’s any social revolution if we can’t be connected to the people we’re physically close to?  Emotional closeness is important, and financial closeness (your coworkers) is also salient, but both should be trumped by the people who breathe the same air as you.

It is almost impossible to bridge the barriers that separate us from one another, even when we’re living on top of each other.

This is where the mobile becomes important, because the mobile is the singular social device.  It is the place where all of our human relationships reside.  (Plexus is eventually bound for the mobile, but in a few years’ time, when the devices are nimble enough to support it.)  Yet the mobile is more than just the social crossroads.  It is the landing point for all of the real-time information you need to manage your life.

On the home page of my iPhone, two apps stand out as the aids to the real-time management of my life: RainRadar AU and TripView.  I am a pedestrian in Sydney, so it’s always good to know when it’s about to rain, how hard, and how long.  As a pedestrian, I make frequent use of public transport, so I need to know when the next train, bus or ferry is due, wherever I happen to be.  The mobile is my networked, location-aware sensor.  It gathers up all of the information I need to ease my path through life.  This demonstrates one of the unstated truisms of the 21st century: the better my access to data, the more effective I will be, moment to moment.  The mobile has become that instantaneous access point, simply because it’s always at hand, or in the pocket or pocketbook or backpack.  It’s always with us.

In February I gave a keynote at a small Melbourne science fiction convention.  After I finished speaking a young woman approached me and told me she couldn’t wait until she could have some implants, so her mobile would be with her all the time.  I asked her, “When is your mobile ever more than a few meters away from you?  How much difference would it make?  What do you gain by sticking it underneath your skin?”  I didn’t even bother to mention the danger from all that subcutaneous microwave radiation.  It’s silly, and although our children or grandchildren might have some interesting implants, we need to accept the fact that the mobile is already a part of us.

We’re as Borg-ed up as we need to be.  Probably we’re more Borg-ed up than we can handle.

It’s not just that our mobiles have become essential.  It’s getting so that we can’t put them down, even in situations where we need to focus on the task at hand – driving, or having dinner with a partner, or trying to push a stroller across an intersection.  We’re addicted, and the first step to treating that addiction is to admit we have a problem.  But here’s the dilemma: we’re working hard to invent new ways to make our mobiles even more useful, indispensable and alluring.

We are the crack dealers.  And I’m encouraging you to make better crack.  Truth be told, I don’t see this ‘addiction’ as a bad thing, though goodness knows the tabloid newspapers and cultural moralists will make whatever they can of it.  It’s an accommodation we will need to make, a give-and-take.  We gain an instantaneous connection to one another, a kind of cultural ‘telepathy’ that would have made Alexander Graham Bell weep for joy.

But there’s more: we also gain a window into the hitherto hidden world of data that is all around us, a shadow and double of the real world.

For example, I can now build an app that allows me to wander the aisles of my local supermarket, bringing all of the intelligence of the network with me as I shop.  I hold the mobile out in front of me, its camera capturing everything it sees, which it passes along to the cloud, so that Google Goggles can do some image processing on it, and pick out the identifiable products on the shelves.

This information can then be fed back into a shopping list – created by me, or by my doctor, or by my bank – because I might be trying to optimize for my own palate, my blood pressure, or my budget – and as I come across the items I should purchase, my mobile might give a small vibration.  When I look at the screen, I see the shelves, but the items I should purchase are glowing and blinking.
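There’s no deep magic in that loop, by the way.  Here is a toy sketch of it in Python; the recognizer is simulated, since Google Goggles exposed no public API, and every name here is hypothetical:

    # Hypothetical sketch of the aisle-wandering loop described above.
    # recognize_products() stands in for the cloud image-recognition call.
    def recognize_products(frame):
        """Pretend the cloud has labeled the products in this camera frame."""
        return frame    # in this toy, a 'frame' is already a list of labels

    def highlight_matches(shopping_list, frames):
        """Yield, frame by frame, the list items visible on the shelves."""
        wanted = {item.lower() for item in shopping_list}
        for frame in frames:
            seen = {label.lower() for label in recognize_products(frame)}
            matches = wanted & seen
            if matches:
                yield matches       # cue the vibration, the glow, the blink
                wanted -= matches   # tick those items off the list

    # Two simulated frames from the camera, as lists of product labels.
    frames = [['Milk', 'Bread', 'Soap'], ['Eggs', 'Jam']]
    for found in highlight_matches(['milk', 'eggs'], frames):
        print('On your list, on this shelf:', found)

Everything hard lives inside that recognition call; the rest is set intersection.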

The technology to realize this – augmented reality with a few extra bells and whistles – is already in place.  This is the sort of thing that could be done today, by someone enterprising enough to knit all these separate threads into a seamless whole.  There’s clearly a need for it, but that’s just the beginning.  This is automated, computational decision making.  It gets more interesting when you throw people into the mix.

Consider: in December I was on a road trip to Canberra.  When I arrived there, at 6 pm, I wondered where to have dinner.  Canberra is not known for its scintillating nightlife – I had no idea where to dine.  I threw the question out to my 7000 Twitter followers, and in the space of time that it took to shower, I had enough responses that I could pick and choose among them, and ended up having the best bowl of seafood laksa that I’d had since I moved to Australia!

That’s the kind of power that we have in our hands, but don’t yet know how to use.

We are all well connected, instantaneously and pervasively, but how do we connect without confusing ourselves and one another with constant requests?  Can we manage that kind of connectivity as a background task, with our mobiles acting as the arbiters?  The mobile is the crossroads, between our social lives, our real-time lives, and our data-driven selves.  All of it comes together in our hands.  The device is nearly full to exploding with the potentials unleashed as we bring these separate streams together.  It becomes hypnotizing and formidable, though it rings less and less.  Voice traffic is falling nearly everywhere in the developed world, but mobile usage continues to skyrocket.  Our mobiles are too important to use for talking.

Let’s tie all of this together: I get evicted, and immediately tell my mobile, which alerts my neighbors and friends, and everyone sets to work finding me a new place to live.  When I check out their recommendations, I get an in-depth view of my new potential neighborhoods, delivered through a marriage of augmented reality and the cloud computing power located throughout the network.  Finally, when I’m about to make a decision, I throw it open for the people who care enough about me to ring in with their own opinions, experiences, and observations.  I make an informed decision, quickly, and am happier as a result, for all the years I live in my new home.

That’s what’s coming.  That’s the potential that we hold in the palms of our hands.  That’s the world you can bring to life.

III:  Through the Looking Glass

Finally, we turn to the newest and most exciting of Apple’s inventions.  There seemed to be nothing new to say about the tablet – after all, Bill Gates declared ‘The Year of the Tablet’ way back in 2001.  But it never happened.  Tablets were too weird, too constrained by battery life and weight and, most significantly, the user experience.  It’s not as though you can take a laptop computer, rip away the keyboard and slap on a touchscreen to create a tablet computer, though this is what many people tried for many years.  It never really worked out for them.

Instead, Apple leveraged what they learned from the iPhone’s touch interface.  Yet that alone was not enough.  I was told by sources well-placed in Apple that the hardware for a tablet was ready a few years ago; designing a user experience appropriate to the form factor took a lot longer than anyone had anticipated.  But the proof of the pudding is in the eating: iPad is the most successful new product in Apple’s history, with Apple set to manufacture around thirty million of them over the next twelve months.  That success is due to the hard work and extensive testing performed upon the iPad’s particular version of iOS.

It feels wonderfully fluid, well adapted to the device, although quite different from the iOS running on iPhone.  iPad is not simply a gargantuan iPod Touch.  The devices are used very differently, because the form factor frames our expectations and experience of each device.

Let me illustrate with an example from my own experience.  I had a consulting job drop on me at the start of June, one which required that I go through and assess eighty-eight separate project proposals, all of which ran to 15 pages apiece.  I had about 48 hours to do the work.  I was a thousand kilometers from these proposals, so they had to be sent to me electronically; I then had to print them before reading through them.  Doing all of that took 24 of the 48 hours I had for review, and left me with a ten-kilo box of papers that I’d have to carry, a thousand kilometers, to the assessment meeting.  Ugh.

Immediately before I left for the airport with this paper ball-and-chain, I realized I could simply drag the electronic versions of these files into my Dropbox account.  Once uploaded, I could access those files from my iPad – all thousand or so pages.  Working on iPad made the process much faster than having to fiddle through all of those papers; I finished my work on the flight to my meeting, and was the envy of all attending – they wrestled with multiple fat paper binders, while I simply swiped my way to the next proposal.

This was when I realized that iPad is becoming the indispensable appliance for the information worker.

You can now hold something in your hand that has every document you’ve written; via the cloud, it can hold every document anyone has ever written.  This has been true for desktops since the advent of the Internet, but it hasn’t been as immediate.  iPad is the page, reinvented, not just because it has roughly the same dimensions as a page, but because you interact with it as if it were a piece of paper.  That’s something no desktop has ever been able to provide.

We don’t really have a sense yet for all the things we can do with this ‘magical’ (to steal a word from Steve Jobs) device.

Paper transformed the world two thousand years ago. Moveable type transformed the world five hundred years ago.  The tablet, whatever it is becoming – whatever you make of it – will similarly reshape the world.  It’s not just printed materials; the tablet is the lightbox for every photograph ever taken anywhere by anyone.  The tablet is the screen for every video created, a theatre for every film produced, a tuner to every radio station that offers up a digital stream, and a player for every sound recording that can be downloaded.

All of this is here, all of this is simultaneously present in a device with so much capability that it very nearly pulses with power.

iPad is like a Formula One Ferrari, and we haven’t even gotten it out of first gear.  So stretch your mind further than the idea of the app.  Apps are good and important, but to unlock the potential of iPad it needs lots of interesting data pouring into it and through it.  That data might be provided via an application, but it probably doesn’t live within the application – there’s not enough room in there.  Any way you look at it, iPad is a creature of the network; it is a surface, a looking glass, which presents you with a view from within the network.

What happens when the network looks back at you?

At the moment iPad has no camera, though everyone expects a forward-facing camera to be in next year’s model.  That will come so that Apple can enable FaceTime.  (With luck, we’ll also see a Retina Display, so that documents can be seen in their natural resolution.)  Once the iPad can see you, it can respond to you.  It can acknowledge your presence in an authentic manner.  We’re starting to see just what this looks like with the recently announced Xbox Kinect.

This is the sort of technology which points all the way back to the infamous ‘Knowledge Navigator’ video that John Sculley used to create his own Reality Distortion Field around the disaster that was the Newton. Decades ahead of its time, the Knowledge Navigator pointed toward Google and Wikipedia and Milo, with just a touch of Facebook thrown in.  We’re only just getting there, to the place where this becomes possible.

These are no longer dreams, these are now quantifiable engineering problems.

This sort of thing won’t happen on Xbox, though Microsoft or a partner developer could easily write an app for it.  But that’s not where they’re looking; this is not about keeping you entertained.  The iPad can entertain you, but that’s not its main design focus.  It is designed to engage you, today with your fingers, and soon with your voice and your face and your gestures.  At that point it is no longer a mirror; it is an entity on its own.  It might not pass the Turing Test, but we’ll anthropomorphize it nonetheless, just as we did with Tamagotchi and Furby.  It will become our constant companion, helping us through every situation.  And it will move seamlessly between our devices, from iPad to iPhone to desktop.  But it will begin on iPad.

Because we are just starting out with tablets, anything is possible.  We haven’t established expectations which guide us into a particular way of thinking about the device.  We’ve had mobiles for nearly twenty years, and desktops for thirty.  We understand both well, and with that understanding comes a narrowing of possibilities.  The tablet is the undiscovered country, virgin, green, waiting to be explored.  This is the desktop revolution, all over again.  This is the mobile revolution, all over again.  We’re in the right place at the right time to give birth to the applications that will seem commonplace in ten or fifteen years.

I remember VisiCalc, the first spreadsheet.  I remember how revolutionary it seemed, how it changed everyone’s expectations for the personal computer.  I also remember that it was written for an Apple ][.

You have the chance to do it all again, to become the ‘mothers of innovation’, and reinvent computing.  So think big.  This is the time for it.  In another few years it will be difficult to aim for the stars.  The platform will be carrying too much baggage.  Right now we all get to be rocket scientists.  Right now we get to play, and dream, and make it all real.

Make War, then Love

At the close of the first decade of the 21st century, we find ourselves continuously connecting to one another.  This isn’t a new thing, although it may feel new.  The kit has changed – that much is obvious – but who we are has not.  Only from an understanding of who we are can we understand the future we are hurtling toward.  Connect, connect, connect.  But why?  Why are we so driven?

To explain this – and to reveal that who we are now is precisely who we have always been – I will tell you two stories.  They’re interrelated – one leads seamlessly into the other.  I’m not going to say that these stories are the God’s honest truth.  They are, as Rudyard Kipling put it, ‘just-so stories’.  If they aren’t true, they describe an arrangement of facts so believable that they could very well be true.  There is scientific evidence to support both of these stories, but neither is considered scientific canon.  So take everything with a grain of salt; these are more fables than theories, but we have always used fables to help us illuminate the essence of our nature.

For our first story, we need to go back a long, long time.  Before the settlement of Australia – by anyone.  Before Homo sapiens, before Australopithecus, before we broke away from the chimpanzees five million years ago, just after we broke away from the gorillas ten million years ago.  How much do we know about this common ancestor, which scientists call Pierolapithecus?  Not very much.  A few bits of skeletons discovered in Spain eight years ago.  If you squint and imagine some sort of mash-up of the characteristics of humans, chimpanzees and gorillas, you might be able to get a glimmer of what they looked like.  Smaller than us, certainly, and not upright – that comes along much later.  But one thing we do know, without any evidence from skeletons: Pierolapithecus was a social animal.  How do we know this?  Its three descendant species – humans, chimps, and bonobos – are all highly social animals.  We don’t do well on our own.  In fact, on our own we tend to make a tasty meal for some sort of tiger or lion or other cat.  Together, well, that’s another matter.

Which brings us to the first ‘just-so’ story.  Imagine a warm late afternoon, hanging out in the trees in Africa’s Rift Valley.  Just you and your mates – probably ten or twenty of them.  You’re all males; the females are elsewhere, doing female-type things, which we’ll discuss presently.  At a signal from the ‘alpha male’, all of you fall into line, drop out of the trees, and begin a trek that takes you through the little bit of land you call your own – with your own trees and plants and bugs that keep you well fed – all the way to the edge of your territory, to the border of the territory of a neighboring troop of Pierolapithecus.  That troop – about the same size as your own – is dozing in the heat of the afternoon, scattered all over the place, but basically within eyeshot of one another.

Suddenly – and silently – you all cross the border.  You fan out, still silent, looking for the adolescent males in this troop.  When you find them, you kill them.  As for the rest, you scare them off with your screams and your charges, and, at the end, they’ve lost some of their own territory – and trees and plants and delicious grubs – while you’ve got just a little bit more.  And you return, triumphant, with the bodies you’ve acquired, which you eat, with your troop, in a victory dinner.

This all sounds horrid and nasty and mean and just not cricket.  That it is.  It’s war.  How do we know that ‘war’ stretches this far back into our past?  Just last month a paper published in Current Biology, and reported in The Economist, described how primatologists had seen just this behavior among chimpanzees in their natural habitats in the African rain forests.  The scene I just described isn’t ten million years old, or even ten thousand, but current.  Chimpanzees wage war.  And this kind of warfare is exactly what was commonplace in New Guinea and the upper reaches of Amazonia until relatively recently – certainly within the span of my own lifetime.  War is a behavior common to both chimpanzees and humans – so why wouldn’t it be something we inherited from our common ancestor?

War.  What’s it good for?  If you win your tiny Pierolapithecine war for a tiny bit more territory, you’ll gain all of the resources in that territory.  Which means your troop will be that much better fed.  You’ll have stronger immune systems when you get sick, and you’ll have healthier children.  And you’ll have more children.  As you acquire more resources, more of your genes will get passed along, down the generations.  Which makes you even stronger, and better able to wage your little wars.  If you’re good at war, natural selection will shine upon you.

What makes you good at war?  That’s the real question here.  You’re good at war if you and your troop – your mates – can function effectively as a unit.  You have to be able to coordinate your activities to attack – or defend – territory.  We know that language skills don’t go back ten million years, so you’ve got to do this the old-fashioned way, with gestures and grunts and the ability to get into the heads of your mates.  That’s the key skill; if you can get into your mates’ heads, you can think as a group.  The better you can do that, the better you will do in war.  The better you do in war, the more offspring you’ll have, so that skill, that ability to get into each other’s heads, gets reinforced by natural selection and becomes, over time, evolution.  The generations pass, and you get better and better at knowing what your mates are thinking.

This is the beginning of the social revolution.  All the way back here, before we looked anything like human, we grasped the heart of the matter: we must know one another to survive.  If we want to succeed, we must know each other well.  There are limits to this knowing, particularly with the small brain of Pierolapithecus.  Knowing someone well takes a lot of brain capacity, and soon that fills up.  When it does – when you can’t know everyone around you intimately – your troop will grow increasingly argumentative and confrontational, and eventually will break into two independent troops.  All because of a communication breakdown.

There’s strength in numbers; if I can manage a troop of thirty while all you can manage is twenty, I’ll defeat you in war.  So there’s pressure, year after year, to grow the troop, and, quite literally, to stuff more mates into the space between your ears.  For a long time that doesn’t lead anywhere; then a baby is born with just a small genetic difference, one which allows just a bit more brain capacity, so that it can fit two or three or four more mates into its head – and that makes a big difference.  Such a big difference that these genes get passed along very rapidly, and soon everyone can hold a few more mates inside their heads.  But that capability comes with a price.  Those Pierolapithecines have slightly bigger brains, and slightly bigger heads.  They need to eat more to keep those bigger brains well-fed.  And those big heads would soon prove very problematic.

This is where we cross over, from our first story, into our second.  This is where we leave the world of men behind, and enter the world of women, who have been here, all along, giving birth and gathering food and raising children and mourning the dead lost to wars, as they still do today.  As they have done for ten million years.  But somewhere in the past few million years, something changed for women, something perfectly natural became utterly dangerous.  All because of our drive to socialize.

Human birth is a very singular thing in the animal world.  Among the primates, human babies are the only ones born facing downward and away from the mother.  They’re also the only ones who seriously threaten the lives of their mothers as they come down the birth canal.  That’s because our heads are big.  Very big.  Freakishly big.  So big that one of the very recent evolutionary adaptations in Homo sapiens is a pelvic gap in women that creates a larger birth canal, at the expense of their ability to walk.  Women walk differently from men – much less efficiently – because they give birth to such large-brained children.

There are two notable side-effects of this big-brained-ness.  The first is well-known: women used to die regularly in childbirth.  Until the first years of the 20th century, about one in one hundred pregnancies ended with the death of the mother.  That’s an extraordinarily high rate, particularly given that a woman might give birth to seven or eight children over her lifetime.  Now that we have survivable caesarean sections and all sorts of other medical interventions, death in childbirth is much rarer – perhaps 1 in 10,000 births.  Nowhere else among the mammals can you find this kind of danger surrounding the delivery of offspring.  This is the real, high price we pay for being big-brained: we very nearly kill our mothers.

The second side-effect is less well-known, but so pervasive we simply accept it as a part of reality: humans need other humans to assist in childbirth.  This isn’t true for any other mammal species – or any other species, period.  But there are very few (one or two) examples of cultures where women give birth by themselves.  Until the 20th-century medicalization of pregnancy and childbirth, this was ‘women’s work’, and a thriving culture of midwives managed the hard work of delivery.  (The image of the chain-smoking father, waiting outside the maternity ward for news of his newborn child, is far older than the 20th century.)

For at least a few hundred thousand years – and probably a great deal longer than that – the act of childbirth has been intensely social.  Women come together to help their sisters, cousins, and daughters pass through the dangers and into motherhood.  If you can’t rally your sisters together when you need them, childbirth will be a lonely and possibly lethal experience.  So this is what it means to be human: we entered the world because of the social capabilities of our mothers.  Women who had strong social capabilities, who could bring their sisters to their aid, would have an easier time in childbirth and would be more likely to live through it, as would their children.

After the child has been born, mothers need even more help from their female peers; in the first few hours, when the mother is weak, other women must provide food and shelter.  As that child grows, the mother will periodically need help with childcare, particularly if she’s just been delivered of another child.  Mothers who can use their social capabilities to deliver these resources will thrive.  Their children will thrive.  This means that these capabilities tended to be passed down, through the generations.  Just as men had their social skills honed by generations upon generations of warfare, women had their social skills sharpened by generations upon generations of childbirth and child raising.

All of this sounds very much as though it’s Not Politically Correct.  But our liberation from our biologically determined sex roles is a very recent thing.  Today, men raise children while women go to war.  Yet behind this lie hundreds of thousands of generations of ancestors who did use these skills along gender-specific lines.  That’s left a mark; men tend to favor coordination in groups – whether that’s a war or a footy match – while women tend to concentrate on building and maintaining a closely-linked web of social connections.  Women seem to have a far greater sensitivity to these social connections than men do, but men can work together in a team – to slaughter the opponent (on the battlefield or the pitch).

The prefrontal cortex – freakishly large in human beings when compared to chimpanzees – seems to be where the magic happens, where we keep these models of one another.  Socialization has limits, because our brains can’t effectively grow much bigger.  They already nearly kill our mothers, they consume about 25% of the food we eat, and they’re not even done growing until five years after we’re born – leaving us defenseless and helpless far longer than any other mammal.  That’s another price we pay for being so social.

But we’re maxed out.  We’ve reached the point of diminishing returns.  If our heads get any bigger, there won’t be any mothers left living to raise us.  So here we are.  An estimate conducted nearly 20 years ago pegs the number of people who can fit into your head at roughly 148, plus or minus a few.  That’s not very many.  But for countless thousands of years, that was as big as a tribe or a village ever grew.  That was the number of people you could know well, and that set the upper boundary on human sociability.
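For the curious: that estimate is, as best I can tell, Robin Dunbar’s 1992 regression of primate group size N against neocortex ratio CR, which (with his published coefficients, as I recall them) runs:

    \log_{10} N = 0.093 + 3.389 \,\log_{10}(CR)

Plug in the human neocortex ratio of about 4.1, and N comes out at roughly 148 – the famous ‘Dunbar number’.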

And then, ten thousand years ago, the comfortable steady-state of human development blew apart.  Two things happened nearly simultaneously: we learned to plant crops, which created larger food supplies, which meant families could raise more children.  We also began to live together in communities much larger than the tribe or village.  The first cities – like Jericho – date from around that time, cities with thousands of people in them.

This is where we cross a gap in human culture, a real line that separates that-which-has-come-before from that-which-comes-after.  Everyone who has moved from a small town or village to the big city knows what it’s like to cross that line.  People have been crossing that line for a hundred centuries.  On one side of the line, people are connected by bonds that are biological, ancient and customary – you do things because they’ve always been done that way.  On the other side, people are bound by bonds that are cultural, modern, and legal.  When we can’t know everyone around us, we need laws to protect us and a culture to guide us, and all of this is very new.  Still: ten thousand years of laws and culture, next to almost two hundred thousand years of custom – and that’s just Homo sapiens.  Custom extends back, probably all the way to Pierolapithecus.

We wage a constant war within ourselves.  Our oldest parts want to be clannish, insular, and intensely xenophobic.  That’s what we’re adapted to.  That’s what natural selection fitted us for.  The newest parts of us realize real benefits from accumulations of humanity too big to get our heads around.  The division of labor associated with cities allows for intensive human productivity, hence larger and more successful human populations.  The city is the real hub of human progress; more than any technology, it is our ability to congregate together in vast numbers that has propelled us into modernity.

There’s an intense contradiction here: we got to the point where we were able to build cities because we were so socially successful, but cities thwarted that essential sociability.  It’s as though we went as far as we could, in our own heads, then leapt outside of them, into cities, and left our heads behind.  Our cities are anonymous places, and consequently fraught with dangers.

It’s a danger we seem prepared to accept.  In 2008 the UN reported that, for the first time in human history, over half of humanity lived in cities.  Half of us had crossed the gap between the social world in our heads and the anonymous, atomized worlds of Mumbai and Chongqing and Mexico City and Cairo and São Paulo.  But in this same moment, at very nearly the same time that half of us resided in cities, half of us also had mobiles.  Well more than half of us do now.  In the anonymity of the world’s cities, we stare down into our screens, and find within them a connection we had almost forgotten.  It touches something so ancient – and so long ignored – that the mobile now contends with the real world as the defining axis of social orientation.

People are often too busy responding to messages to focus on those in their immediate presence.  It seems ridiculous, thoughtless and pointless, but the device has opened a passage which allows us to retrieve this oldest part of ourselves, and we’re reluctant to let that go.

Which brings us to the present moment.

Paperworks / Padworks

I: Paper, works

At the end of May I received an email from a senior official at the Victorian Department of Education and Early Childhood Development.  DEECD was in the midst of issuing an RFP, looking for new content to populate FUSE (Find, Use, Share, Education), an important component of ULTRANET, the mega-über-supremo educational intranet meant to solve everyone’s educational problems for all time.  Or, well, perhaps I overstate the matter.  But it could be a big deal.

The respondents to the RFP were organizations that already had working relationships with DEECD, and therefore were both familiar with DEECD processes and had been vetted in their earlier relationships.  This meant that the entire RFP-to-submission process could be telescoped down to just a bit less than three weeks.  The official asked me if I’d be interested in being one of the external reviewers for these proposals as they passed through an official evaluation process.  I said I’d be happy to do so, and asked how many proposals I’d have to review.  “I doubt it will be more than thirty or forty,” he replied.  Which seemed quite reasonable.

As is inevitably the case, most of the proposals landed in the DEECD mailbox just a few hours before the deadline for submissions.  But the RFP didn’t result in thirty or forty proposals.  The total came to almost ninety.  All of which I had to review and evaluate in the thirty-six hours between the time they landed in my inbox and the start of the formal evaluation meeting.  Oh, and first I needed to print them out, because there was no way I’d be able to do that much reading in front of my computer.

Let’s face it – although we do sit and read our laptop screens all day long, we rarely read anything longer than a few paragraphs.  If it passes 300 words, it tips the balance into ‘tl;dr’ (too long; didn’t read) territory, and unless it’s vital for our employment or well-being, we tend to skip it and move along to the next little tidbit.  Having to sit and read through well over nine hundred pages of proposals on my laptop was a bridge too far. I set off to the print shop around the corner from my flat, to have the whole mess printed out.  That took nearly 24 hours by itself – and cost an ungodly sum.  I was left with a huge, heavy box of paper which I could barely lug back to my flat.  For the next 36 hours, this box would be my ball and chain.  I’d have to take it with me to the meeting in Melbourne, which meant packing it for the flight, checking it as baggage, lugging it to my hotel room, and so forth, all while trying to digest its contents.

How the heck was that going to work?

This is when I looked at my iPad.  Then I looked back at the box.  Then back at the iPad.  Then back at the box.  I’d gotten my iPad barely a week before – when they first arrived in Australia – and I was planning on taking it on this trip, but without an accompanying laptop.  This, for me, would be a bit of a test.  For the last decade I’d never traveled anywhere without my laptop.  Could I manage a business trip with just my iPad?  I looked back at the iPad.  Then at the box.  You could practically hear the penny drop.

I immediately began copying all these nine hundred-plus pages of proposals and accompanying documentation from my laptop to the storage utility Dropbox.  Dropbox gives you 2 GB of free Internet storage, with an option to rent more space, if you need it.  Dropbox also has an iPad app (free) – so as soon as the files were uploaded to Dropbox, I could access them from my iPad.

I should take a moment and talk about the model of the iPad I own.  I ordered the 16 GB version – the smallest storage size offered by Apple – but I got the 3G upgrade, paired with Telstra’s most excellent pre-paid NextG service.  My rationale was that I imagined this iPad would be a ‘cloud-centric’ device.  The ‘cloud’ is a term that’s come into use quite recently.  It means software is hosted somewhere out there on the Internet – the ‘cloud’ – rather than residing locally on your computer.  Gmail is a good example of software that’s ‘in the cloud’.  Facebook is another.  Twitter, another.  Much of what we do with our computers – iPad included – involves software accessed over the Internet.  Many of the apps for sale in Apple’s iTunes App Store are useless or pointless without an Internet connection – these are the sorts of applications which break down the neat boundary between the computer and the cloud.  Cloud computing has been growing in importance over the last decade; by the end of this one it will simply be the way things work.  Your iPad will be your window onto the cloud, onto everything you have within that cloud: your email, your documents, your calendar, your contacts, etc.

I like to live in the future, so I made sure that my iPad didn’t have too much storage – which forces me to use the cloud as much as possible.  In this case, that was precisely the right decision, because I ditched the ten-kilo box of paperwork and boarded my flight to Melbourne with my iPad at my side.  I pored through the proposals, one after another, bringing them up in Dropbox, evaluating them, making some notes in my (paper) notebook, then moving along to the next one.  My iPad gave me a fluidity and speed that I could never have had with that box of paper.

When I arrived at my hotel, two more large boxes were waiting for me.  Here again were the proposals, carefully ordered and placed into several large ring binders.  I’d be expected to tote these to the evaluation meeting.  Fortunately, that was only a few floors above my hotel room.  That said, it was a bit of a struggle to get those boxes and my luggage into the elevator and up to the meeting room.  I put those boxes down – and never looked at them again.  As the rest of the evaluation panel dug through their boxes to pull out the relevant proposals, I did a few motions with my fingertips, and found myself on the same page.

Yes, they got a bit jealous.

We finished the evaluation on time and quite successfully, and at the end of the day I left my boxes with the DEECD coordinator, thanking her for her hard work printing all these materials, but begging off.  She understood completely.  I flew home, lighter than I might otherwise have, had I stuck to paper.

For at least the past thirty years – which is about the duration of the personal computer revolution – people have been talking about the advent of the paperless office.  Truth be told, we use more paper in our offices than ever before, our printers constantly at work with letters, notices, emails, and so forth.  We haven’t been able to make the leap to a paperless office – despite our comprehensive ability to manipulate documents digitally – because we lacked something that could actually replace paper.  Computers as we’ve known them simply can’t replace a piece of paper.  For a whole host of reasons, it just never worked.  To move to a paperless office – and a paperless classroom – we had to invent something that could supplant paper.  We have it now.  After a lot of false starts, tablet computing has finally arrived – and it’s here to stay.

I can sit here, iPad in hand, and have access to every single document that I have ever written.  You will soon have access to every single document you might ever need, right here, right now.  We’re not 100% there yet – but that’s not the fault of the device.  We’re going to need to make some adjustments to our IT strategies, so that we can have a pervasively available document environment.  At that point, your iPad becomes the page which contains all other pages within it.  You’ll never be without the document you need at the time you need it.

Nor will we confine ourselves to text.  The world is richer than that.  iPad is the lightbox that contains all photographs within it, it is the television which receives every bit of video produced by anyone – professional or amateur – ever.  It is already the radio (Pocket Tunes app) which receives almost every major radio station broadcasting anywhere in the world.  And it is every one of a hundred-million-plus websites and maybe a trillion web pages.  All of this is here, right here in the palm of your hand.

What matters now is how we put all of this to work.

II: Pad, works

Let’s project ourselves into the future just a little bit – say around ten years.  It’s 2020, and we’ve had iPads for a whole decade.  The iPads of 2020 will be vastly more powerful than the ones in use today, because of something known as Moore’s Law, which observes that computers double in power roughly every twenty-four months.  Ten years is five doublings, or 32 times.  That rule extends to the display as well as the computer.  The ‘Retina Display’ recently released on Apple’s iPhone 4 shows us where that technology is going – displays so fine that you can’t make out the individual pixels with your eye.  The screen of your iPad version 11 will be visually indistinguishable from a sheet of paper.  The device itself will be thinner and lighter than the current model.  Battery technology improves at about 10% a year, so half the weight of the battery – which is the heaviest component of the iPad – will disappear.  You’ll still get at least ten hours of use, because that much is considered essential to the user experience.  And you’ll still be connected to the mobile network.

The mobile network of 2020 will look quite different from the mobile network of 2010.  Right now we’re just on the cusp of moving into 4th generation mobile broadband technology, known colloquially as LTE, or Long-Term Evolution.   Where you might get speeds of 7 megabits per second with NextG mobile broadband – under the best conditions – LTE promises speeds of 100 megabits.  That’s as good as a wired connection – as fast as anything promised by the National Broadband Network!  In a decade’s time we’ll be moving through 5th generation and possibly into 6th generation mobile technologies, with speeds approaching a gigabit, a billion bits per second.  That may sound like a lot, but again, it represents roughly 32 times the capacity of the mobile broadband networks of today.  Moore’s Law has a broad reach, and will transform every component of the iPad.

iPad will have thirty-two times the storage – not that we’ll need it, given that we’ll be connected to the cloud at gigabit speeds, but if it’s there, someone will find a use for the two terabytes or more included in our iPad.  (Perhaps a full copy of Wikipedia?  Or all of the books published before 1915?)  All of this will still cost just $700.  If you want to spend less – and have a correspondingly less powerful device – you’ll have that option.  I suspect you’ll be able to pick up an entry-level device – the equivalent of iPad 7, perhaps – for $49 at JB HiFi.
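If you want to check my arithmetic, it’s simple enough to sketch in a few lines of Python.  The 2010 baselines below – a 64 GB top-end iPad, roughly 30 megabits of peak mobile capacity – are my own assumptions, and the battery sum actually comes out a little better than the ‘half’ I quoted above:

    # Back-of-the-envelope projection under the stated assumptions:
    # capability doubles every 24 months, battery density gains ~10% a year.
    years = 10
    doublings = years / 2                  # one doubling every 24 months
    factor = 2 ** doublings                # 2^5 = 32

    storage_2010_gb = 64                   # assumed: top iPad model of 2010
    network_2010_mbps = 30                 # assumed: rough peak mobile capacity

    print('Capability multiplier: %.0fx' % factor)
    print('Storage: ~%.0f TB' % (storage_2010_gb * factor / 1024))
    print('Network: ~%.1f Gbit/s' % (network_2010_mbps * factor / 1000))

    battery_weight = 1.0
    for _ in range(years):
        battery_weight /= 1.10             # 10% better density each year
    print('Battery weight for the same ten hours: %.0f%%' % (battery_weight * 100))

Run it and you get 32x, about 2 terabytes, just under a gigabit, and a battery at roughly 39% of today’s weight.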

What sorts of things will the iPad 10 be capable of?  How do we put all of that power to work?  First off, iPad will be able to see and hear in meaningful ways.  Voice recognition and computer vision are two technologies on the threshold of becoming ‘twenty-year overnight successes’.  We can already speak to our computers, and, most of the time, they can understand us.  With devices like the Xbox Kinect, cameras allow the computer to see the world around it, and recognize bits of it.  Your iPad will hear you, understand your voice, and follow your commands.  It will also be able to recognize your face, your motions, and your emotions.

It’s not clear that computers as we know them today – that is, desktops and laptops – will be common in a decade’s time.  They may still be employed in very specialized tasks.  For almost everything else, we will be using our iPads.  They’ll rarely leave our sides.  They will become so pervasive that in many environments – around the home, in the office, or at school – we will simply have a supply of them sufficient to the task.  When everything is so well connected, you don’t need to have personal information stored in a specific iPad.  You will be able to pick up any iPad and – almost instantaneously – the custom features which mark that device as uniquely yours will be downloaded into it.

All of this is possible.  Whether any of it eventuates depends on a whole host of factors we can’t yet see clearly.  People may find voice recognition more of an annoyance than an affordance.  The idea of your iPad watching you might seem creepy to some people.  But consider this: I have a good friend who has two elderly parents: his dad is in his early 80s, his mom is in her mid-70s.  He lives in Boston while they live in Northern California.  But he needs to keep in touch, he needs to have a look in.  Next year, when iPad acquires a forward-facing camera – so it can be used for video conferencing – he’ll buy them an iPad, and install it on the wall of their kitchen, stuck on there with Velcro, so that he can ring in anytime, and check on them, and they can ring him, anytime.  It’s a bit ‘Jetsons’, when you think about it.  And that’s just what will happen next year.  By 2020 the iPad will be able to track your progress around the house, monitor what prescriptions you’ve taken (or missed), whether you’ve left the house, and for how long.  It’ll be a basic accessory, necessary for everyone caring for someone in their final years – or in their first ones.

Now that we’ve established the basic capabilities and expectations for this device, let’s imagine them in the hands of students everywhere throughout Australia.  No student, however poor, will be without their own iPad – the Government of the day will see to that.  These students of 2020 are at least as well connected as you are, as their parents are, as anyone is.  To them, iPads are not new things; they’ve always been around.  They grew up in a world where touch is the default interface.  A computer mouse, for them, seems as archaic as a manual typewriter does to us.  They’re also quite accustomed to being immersed within a field of very-high-speed mobile broadband.  They just expect it to be ‘on’, everywhere they go, and expect that they will have access to it as needed.

How do we make education in 2020 meet their expectations?  This is not the universe of ‘chalk and talk’.  This is a world where the classroom walls have been effectively leveled by the pervasive presence of the network, and a device which can display anything on that network.  This is a world where education can be provided anywhere, on demand, as called for.  This is a world where the constructivist premise of learning-by-doing can be implemented beyond year two.  Where a student working on an engine can stare at a three-dimensional breakout model of the components while engaging in a conversation with an instructor half a continent away.  Where a student learning French can actually engage with a French student learning English, and do so without much more than a press of a few buttons.  Where a student learning about the Eureka Stockade can survey the ground, iPad in hand, and find within the device hidden depths to the history.  iPad is the handheld schoolhouse, and it is, in many ways, the thing that replaces the chalkboard, the classroom, and the library.

But iPad does not replace the educator.  We need to be very clear on that, because even as educational resources multiply beyond our wildest hopes – more on that presently – students still need someone to guide them into understanding.  The more we virtualize the educational process, the more important and singular our embodied interactions become.  Some of this will come from far away – the iPad offers opportunities for distance education undreamt of just a few years ago – but much more of it will be close up.  Even if the classroom does not survive (and I doubt it will fade away completely in the next ten years, but it will begin to erode), we will still need a place for an educator/mentor to come into contact with students.  That’s been true since the days of Socrates (probably long before that), and it’s unlikely to change anytime soon.  We learn best when we learn from others.  We humans are experts in mimesis, in learning by imitation.  That kind of learning requires us to breathe the same air together.

No matter how much power we gain from the iPad, no matter how much freedom it offers, no device offers us freedom from our essential nature as social beings.  We are born to work together, we are designed to learn from one another.  iPad is an unbelievably potent addition to the educator’s toolbox, but we must remember not to let it cloud our common sense.  It should be an amplifier, not a replacement, something that lets students go further, faster than before.  But they should not go alone.

The constant danger of technology is that it can interrupt the human moment.  We can be too busy checking our messages to see the real people right before our eyes.  This is the dilemma that will face us in the age of the iPad.  Governments will see them as cost-saving devices, something that could substitute for the human touch.  If we lose touch, if we lose the human moment, we also lose the biggest part of our ability to learn.

III:  The Work of Nations

We can reasonably predict that this is the decade of the tablet, and the decade of mobile broadband.  The two of them fuse in the iPad, to produce a platform which will transform education, allowing it to happen anywhere a teacher and a student share an agreement to work together.  But what will they be working on?  Next year we’ll see the rollout of the National Curriculum, which specifies the material to be covered in core subject areas in classrooms throughout the nation.

Many educators view the National Curriculum as a mandate for a bland uniformity, a lowest-common denominator approach to instruction, which will simply leave the teacher working point-by-point through the curriculum’s arc.  This is certainly not the intent of the project’s creators.  Dr. Evan Arthur, who heads up the Digital Educational Revolution taskforce in the Department of Education, Employment and Workplace Relations, publicly refers to the National Curriculum as a ‘greenfields’, as though all expectations were essentially phantoms of the mind, a box we draw around ourselves, rather than one that objectively exists.

The National Curriculum outlines the subject areas to be covered, but says very little if anything about pedagogy.  Instructors and school systems are free to exercise their own best judgment in selecting an approach appropriate to their students, their educators, and their facilities.  That’s good news, and means that any blandness that creeps into pedagogy because of the National Curriculum is more a reflection of the educator than the educational mandate.

Precisely because it places educators and students throughout the nation onto the same page, the National Curriculum also offers up an enormous opportunity.  We know that all year nine students in Australia will be covering a particular suite of topics.  This means that every educator and every student throughout the nation can be drawing from and contributing to a ‘common wealth’ of shared materials – podcasts of lectures, educational chatrooms, lesson plans, and on and on and on.  As the years go by, this wealth of material will grow as more teachers and more students add their own contributions to it.  The National Curriculum isn’t a mandate, per se; it’s better to think of it as an empty Wikipedia.  All the article headings are there, all the taxonomy, all the cross references, but none of the content.  The next decade will see us all build up that base of content, so that by 2020, a decade’s worth of work will have resulted in something truly outstanding to offer both educators and students in their pursuit of curriculum goals.
Well, maybe.

I say all of this as if it were a sure thing.  But it isn’t.  Everyone secretly suspects the National Curriculum will ruin education.  I ask that we see things differently.  The National Curriculum could be the savior of education in the 21st century, but in order to travel the short distance in our minds between where we are (and where we will go if we don’t change our minds) and where we need to be, we need to think of every educator in Australia as a contributor of value.  More than that, we need to think of every student in Australia as a contributor of value.  That’s the vital gap that must be crossed.  Educators spend endless hours working on lesson plans and instructional designs – they should be encouraged to share this work.  Many of them are too modest or too scared to trumpet their own hard yards – but it is something that educators and students across the nation can benefit from.  Students, as they pass through the curriculum, create their own learning materials, which must be preserved, where appropriate, for future years.

We should do this.  We need to do this.  Right now we’re dropping the best of what we have on the floor as teachers retire or move on in their careers.  This is gold that we’re letting slip through our fingers.  We live in an age where we only lose something when we neglect to capture it.  We can let ourselves off easy here, because we haven’t had a framework to capture and share this pedagogy.  But now we have the means to capture, a platform for sharing – the Ultranet – and a tool which brings access to everyone – the iPad.  We’ve never had these stars aligned in such a way before.  Only just now – in 2010 – is it possible to dream such big dreams.  It won’t even cost much money.  Yes, the state and federal governments will be investing in iPads and superfast broadband connections for the schools, but everything else comes from a change in our behavior, from a new sense of the full value of our activities.  We need to look at ourselves not merely as the dispensers of education to receptive students, but as engaged participant-creators working to build a lasting body of knowledge.

In so doing we tie everything together, from library science to digital citizenship, within an approach that builds shared value.  It allows a student in Bairnsdale to collaborate with another in Lorne, both working through a lesson plan developed by an educator in Katherine.  Or a teacher in Lakes Entrance to offer her expertise to a classroom in Maffra.  These kinds of things have been possible before, but the National Curriculum gives us the reason to do them.  iPad gives us the infrastructure to dream wild, and imagine how to practice some ‘creative destruction’ in the classroom – tearing down its walls in order to make the classroom a persistent, ubiquitous feature of the environment, to bring education everywhere it’s needed, to everyone who needs it, whenever they need it.

This means that all of the preceding is really part of a larger transformation, from education as this singular event that happens between ages six and twenty-two, to something that is persistent and ubiquitous; where ‘lifelong learning’ isn’t a catchphrase, but rather, a set of skills students begin to acquire as soon as they land in pre-kindy.  The wealth of materials which we will create as we learn how to share the burden of the National Curriculum across the nation has value far beyond the schoolhouse.  In a nation of immigrants, it makes sense to have these materials available, because someone is always arriving in the middle of their lives and struggling to catch up to and integrate themselves within the fabric of the nation.  Education is one way that this happens.  People also need to have increasing flexibility in their career choices, to suit a much more fluid labor market.  This means that we continuously need to learn something new, or something, perhaps, that we didn’t pay much attention to when we should have.  If we can share our learning, we can close this gap.  We can bring the best of what we teach to everyone who has the need to know.

And there we are.  But before I conclude, I should bring up the most obvious point – one so obvious that we might forget it.  The iPad is an excellent toy.  Please play with it.  I don’t mean use it.  I mean explore it.  Punch all the buttons.  Do things you shouldn’t do.  Press the big red button that says, “Don’t press me!”  Just make sure you have a backup first.

We know that children learn by exploration – that’s the foundation of Constructivism – but we forget that we ourselves also learn by exploration.  The joy we feel when we play with our new toy is the feeling a child has when he confronts a box of LEGOs, or a new video game – it’s the joy of exploration, the joy of learning.  That joy is foundational to us.  If we didn’t love learning, we wouldn’t be running things around here.  We’d still be in the trees.

My favorite toys on my iPad are Pocket Universe – which creates a 360-degree real-time observatory on your iPad; Pulse News – which brings some beauty to my RSS feeds; Observatory – which turns my iPad into a bit of an orrery; Air Video – which allows me to watch videos streamed from my laptop to my iPad; and GoodReader – the one app you simply must spend $1.19 on, because it is the most useful app you’ll ever own.  These are my favorites, but I own many others, and enjoy all of them.  There are literally tens of thousands to choose from, some of them educational, some, just for fun.  That’s the point: all work and no play makes iPad a dull toy.

So please, go and play.  As you do, you’ll come to recognize the hidden depths within your new toy, and you’ll probably feel that penny drop, as you come to realize that this changes everything.  Or can, if we can change ourselves.

Blue Skies

I: Cloud People

I want to open this afternoon’s talk with a story about my friend Kate Carruthers.  Kate is a business strategist, currently working at Hyro, over in Surry Hills.  In November, while on a business trip to Far North Queensland, Kate pulled out her American Express credit card to pay for a taxi fare.  Her card was declined.  Kate paid with another card and thought little of it until the next time she tried to use the card – this time to pay for something rather pricier, and more important – and found her card declined once again.

As it turned out, American Express had cut Kate’s credit line in half, but hadn’t bothered to inform her of this until perhaps a day or two before, via post.  So here’s Kate, far away from home, with a crook credit card.  Thank goodness she had another card with her, or it could have been quite a problem.  When she contacted American Express to discuss that credit line change – on a Friday evening – she discovered that this ‘consumer’ company kept banker’s hours in its credit division.  That, for Kate, was the last straw.  She began to post a series of messages to Twitter:

“I can’t believe how rude Amex have been to me; cut credit limit by 50% without notice; declined my card while in QLD even though acct paid”

“since Amex just treated me like total sh*t I just posted a chq for the balance of my account & will close acct on Monday”

“Amex is hardly accepted anywhere anyhow so I hardly use it now & after their recent treatment I’m outta there”

“luckily for me I have more than enough to just pay the sucker out & never use Amex again”

“have both a gold credit card & gold charge card with amex until monday when I plan to close both after their crap behaviour”

One after another, Kate sent this stream of messages out to her Twitter followers.  All of her Twitter followers.  Kate’s been on Twitter for a long time – well over three years – and she’s accumulated a lot of followers.  Currently, she has over 8300 followers, although at the time of her American Express meltdown, the number was closer to 7500.

Let’s step back and examine this for a moment.  Kate is, in most respects, a perfectly ordinary (though whip-smart) human being.  Yet she now has this ‘cloud’ of connections, all around her, all the time, through Twitter.  These 8300 people are at least vaguely aware of whatever she chooses to share in her tweets.  They care enough to listen, even if they are not always listening very closely.  A smaller number of individuals (perhaps a few hundred, people like me) listen more closely.  Nearly all the time we’re near a computer or a mobile, we keep an eye on Kate.  (Not that she needs it.  She’s thoroughly grown up.  But if she ever got into a spot of trouble or needed a bit of help, we’d be on it immediately.)

This kind of connectivity is unprecedented in human history.  We came from villages where perhaps a hundred of us lived close enough together that there were no secrets.  We moved to cities where the power of numbers gave us all a degree of anonymity, but atomized us into disconnected individuals, lacking the social support of a community.  Now we come full circle.  This is the realization of the ‘Global Village’ that Marshall McLuhan talked about fifty years ago.  At the time McLuhan thought of television as a retribalizing force.  It wasn’t.  But Facebook and Twitter and the mobiles each of us carry with us during all our waking hours?  These are the new retribalizing forces, because they keep us continuously connected with one another, allowing us to manage connections in ever-greater numbers.

Anything Kate says, no matter how mundane, is now widely known.  But it’s more than that.  Twitter is text, but it is also links that can point to images, or videos, or songs, or whatever you can digitize and upload to the Web.  Kate need simply drop a URL into a tweet and suddenly nearly ten thousand people are aware of it.  If they like it, they will send it along (‘re-tweet’ is the technical term), and it will spread out quickly, like waves on a pond.

But Twitter isn’t a one-way street.  Kate is ‘following’ 7250 individuals; that is, she’s receiving tweets from them.  That sounds like a nearly impossible task: how can you pay attention to what that many people have to say?  It’d be like trying to listen to every conversation at Central Station (or Flinders Street Station) at peak hour.  Madness.  And yet, it is possible.  Tools have been created that allow you to keep a pulse on the madness, to stick a toe into the raging torrent of commentary.

Why would you want to do this?  It’s not something that you need to do (or even want to do) all the time, but there are particular moments – crisis times – when Twitter becomes something else altogether.  After an earthquake or other great natural disaster, after some pivotal (or trivial) political event, after some stunning discovery.  The 5650 people I follow are my connection to all of that.  My connection is broad enough that someone, somewhere in my network is nearly always among the first to know something, and among the first to share what they know.  Which means that I too, if I am paying attention, am among the first to know.

Businesses have been built on this kind of access.  An entire sector of the financial services industry, from Dow Jones to Bloomberg, has thrived because it provides subscribers with information before others have it – information that can be used on a trading floor.  This kind of information comes freely to the very well-connected.  This kind of information can be put to work to make you more successful as an individual, in your business, or in whatever hobbies you might pursue.  And it’s always there.  All you need do is plug into it.

When you do plug into it, once you’ve gotten over the initial confusion, and you’ve dedicated the proper time and tending to your network, so that it grows organically and enthusiastically, you will find yourself with something amazingly flexible and powerful.  Case in point: in December I found myself in Canberra for a few days.  Where to eat dinner in a town that shuts down at 5 pm?  I asked Twitter, and forty-five minutes later I was enjoying some of the best seafood laksa I’ve had in Australia.  A few days later, in the Barossa, I asked Twitter which wineries I should visit – and the top five recommendations were very good indeed.  These may seem like trivial instances – though they’re the difference between a good holiday and a lackluster one – but what they demonstrate is that Twitter has allowed me to plug into all of the expertise of all of the thousands of people I am connected to.  Human brainpower, multiplied by 5650, makes me smarter, faster, and much, much more effective.  Why would I want to live any other way?  Twitter can be inane, it can be annoying, it can be profane and confusing and chaotic, but I can’t imagine life without it, just as I can’t imagine life without the Web or without my mobile.  The idea that I am continuously connected and listening to a vast number of other people – even as they listen to me – has gone from shocking to comfortable in just over three years.

Kate and I are just the leading edge.  Where we have gone, all of the rest of you will soon follow.  We are all building up our networks, one person at a time.  A child born in 2010 will spend their lifetime building up a social network.  They’ll never lose track of any individual they meet and establish a connection with.  That connection will persist unless purposely destroyed.  Think of the number of people you meet throughout your lives, who you establish some connection with, even if only for a few hours.  That number would easily reach into the thousands for every one of us.  Kate and I are not freaks, we’re simply using the bleeding edge of a technology that will be almost invisible and not really worth mentioning by 2020.

All of this means that the network is even more alluring than it was a few years ago, and will become ever more alluring with the explosive growth in social networks.  We are just at the beginning of learning how to use these new social networks.  First we kept track of friends and family.  Then we moved on to business associates.  Now we’re using them to learn, to train ourselves and train others, to explore, to explain, to help and to ask for help.  They are becoming a new social fabric which will knit us together into an unfamiliar closeness.  This is already creating some interesting frictions for us.  We like being connected, but we also treasure the moments when we disconnect, when we can’t be reached, when our time and our thoughts are our own.  We preach focus to our children, but find our time and attention increasingly divided by devices that demand service: email, Web, phone calls, texts, Twitter, Facebook, all of it brand new, and all of it seemingly so important that if we ignore any of them we immediately feel the cost.  I love getting away from it all.  I hate the backlog of email that greets me when I return.  Connecting comes with a cost.  But it’s becoming increasingly impossible to imagine life without it.

II: Eyjafjallajökull

I recently read a most interesting blog post.  Chase Saunders, a software architect and entrepreneur in Maine (not too far from where I was born), had a bit of a brainwave and decided to share it with the rest of the world.  But you may not like it.  Saunders begins with: “For me to get really mad at a company, it takes more than a lousy product or service: it’s the powerlessness I feel when customer service won’t even try to make things right.  This happens to me about once a year.”  Given the number of businesses we all interact with in any given year – both as consumers and as client businesses – this figure is far from unusual.  There will be times when we get poor value for money, or poor service, or a poor response time, or what have you.  The world is a cruel place.  It’s what happens after that cruelty which is important: how does the business deal with an upset customer?  If they fail the upset customer, that’s when problems can really get out of control.

In times past, an upset customer could cancel their account, taking their business elsewhere.  Bad, but recoverable.  These days, however, customers have more capability, precisely because of their connectivity.  And this is where things start to go decidedly pear-shaped.  Saunders gets to the core of his idea:

Let’s say you buy a defective part from ACME Widgets, Inc. and they refuse to refund or replace it.  You’re mad, and you want the world to know about this awful widget.  So you pop over to AdRevenge and you pay them a small amount. Say $3.  If the company is handing out bad widgets, maybe some other people have already done this… we’ll suppose that before you got there, one guy donated $1 and another lady also donated $1.  So now we have 3 people who have paid a total of $5 to warn other potential customers about this sketchy company…the 3 vengeful donations will go to the purchase of negative search engine advertising.  The ads are automatically booked and purchased by the website…

And there it is.  Your customers – your angry customers – have found an effective way to band together and warn every other potential customer just how badly you suck, and will do it every time your name gets typed into a search engine box.  And they’ll do it whether or not their complaints are justified.  In fact, your competitors could even game the system, stuffing it up with lots of false complaints.  It will quickly become complete, ugly chaos.
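Saunders describes the mechanism only in broad strokes, but at its core it is just a pooled ledger with a trigger.  Here is a toy sketch in Python of how such pooling might work – the function names and the five-dollar threshold are my own invention, not his:

```python
# A toy sketch of the AdRevenge mechanism Saunders outlines: vengeful
# micro-donations pool up per company, and once a pool covers the price
# of an ad, a negative search ad is booked automatically.  The names and
# the $5 threshold are hypothetical.

from collections import defaultdict

AD_PRICE = 5.00  # assumed cost of one negative ad buy

pools = defaultdict(float)  # company name -> pooled donations

def buy_negative_ad(company: str) -> None:
    # Stand-in for the automatic ad booking Saunders imagines.
    print(f"Booking negative search ad against {company}")

def donate(company: str, amount: float) -> None:
    """Add a donation to a company's pool; buy an ad when fully funded."""
    pools[company] += amount
    while pools[company] >= AD_PRICE:
        pools[company] -= AD_PRICE
        buy_negative_ad(company)

# The scenario from the quote: $1 + $1 + $3 crosses the threshold.
donate("ACME Widgets, Inc.", 1.00)
donate("ACME Widgets, Inc.", 1.00)
donate("ACME Widgets, Inc.", 3.00)  # -> one ad booked
```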

You’re probably all donning your legal hats, and thinking about words like ‘libel’ and ‘defamation’.  Put all of that out of your mind.  The Internet is extraterritorial, and effectively ungovernable, despite all of the neat attempts of governments from China to Iran to Australia to stuff it back into some sort of box.  Ban AdRevenge somewhere, it pops up somewhere else – just as long as there’s a demand for it.  Other countries – perhaps Iceland or Sweden, and certainly the United States – don’t have the same libel laws as Australia, yet their bits freely enter the nation over the Internet.  There is no way to stop AdRevenge or something very much like AdRevenge from happening.  No way at all.  Resign yourself to this, and embrace it, because until you do you won’t be able to move on, into a new type of relationship with your customers.

Which brings us back to our beginning, and a very angry Kate Carruthers.  Here she is, on a Friday night in Far North Queensland, spilling quite a bit of bile out onto Twitter.  Every one of the 7500 people who read her tweets will bear her experience in mind the next time they decide whether they will do any business with American Express.  This is damage, probably great damage to the reputation of American Express, damage that could have been avoided, or at least remediated before Kate ‘went nuclear’.

But where was American Express when all of this was going on?  While Kate expressed her extreme dissatisfaction with American Express, its own marketing arm was busily cooking up a scheme to harness Twitter.  Its Open Forum Pulse website shows you tweets from small businesses around the world.  Ironic, isn’t it?  American Express builds a website to show us what others are saying on Twitter, all the while ignoring what’s being said about it.  So the fire rages, uncontrolled, while American Express fiddles.

There are other examples.  On Twitter, one of my friends lauded the new VAustralia Premium Economy service to the skies, while VAustralia ran some silly marketing campaign that had four blokes sending three thousand tweets over two days in Los Angeles.  Sure, I want to tune into that stream of dreck and drivel.  That’s exactly what I’m looking for in the age of information overload: more crap.

This is it, the fundamental disconnect, the very heart of the matter.  We all need to do a whole lot less talking, and a whole lot more listening.  That’s true for each of us as individuals: we’re so well-connected now that by the time we do grow into a few thousand connections we’d be wiser listening than speaking, most of the time.  But this is particularly true for businesses, which make their living dealing with customers.  The relationship between businesses and their customers has historically been characterized by a ‘throw it over the wall’ attitude.  There is no wall, anywhere.  The customer is sitting right beside you, with a megaphone pointed squarely into your ear.

If we were military planners, we’d call this ‘asymmetric warfare’.  Instead, we should just give it the name it rightfully deserves: 21st-century business.  It’s a battlefield out there, but if you come prepared for a 20th-century conflict – massive armies and big guns – you’ll be overrun by the fleet-footed and omnipresent guerilla warfare your customers will wage against you – if you don’t listen to them.  Like volcanic ash, it may not present a solid wall to prevent your progress.  But it will jam up your engines, and stop you from getting off the ground.

Listening is not a job.  There will be no ‘Chief Listening Officer’, charged with keeping an ear to the ground, wondering if the natives are becoming restless, ready to sound the alarm when a situation threatens to go nuclear.  There is simply too much to listen to, happening everywhere, all at once.  Any single point which presumed to do the listening for an entire organization – whether an individual or a department – will simply be overwhelmed, drowning in the flow of data.  Listening is not a job: it is an attitude.  Every employee from the most recently hired through to the Chief Executive must learn to listen.  Listen to what is being said internally (therein lies the path to true business success) and learn to listen to what others, outside the boundaries of the organization, are saying about you.

Employees already regularly check into their various social networks.  Right now we think of that as ‘slacking off’, not something that we classify as work.  But if we stretch the definition just a bit, and begin to recognize that the organization we work for is, itself, part of our social network, things become clearer.  Someone can legitimately spend time on Facebook, looking for and responding to issues as they arise.  Someone can be plugged into Twitter, giving it continuous partial attention all day long, monitoring and soothing customer relationships.  And not just someone.  Everyone.  This is a shared responsibility.  Working for the organization means being involved with and connected to the organization’s customers, past, present and future.  Without that connection, problems will inevitably arise, will inevitably amplify, will inevitably result in ‘nuclear events’.  Any organization (or government, or religion) can only withstand so many nuclear events before it begins to disintegrate.  So this isn’t a matter of choice.  This is a basic defensive posture.  An insurance policy, of sorts, protecting you against those you have no choice but to do business with.

Yet this is not all about defense.  Listening creates opportunity.  I get some of my best ideas – such as that AdRevenge article – because I am constantly listening to others’ good ideas.  Your customers might grumble, but they also praise you for a job well done.  That positive relationship should be honored – and reinforced.  As you reinforce the positive, you create a virtuous cycle of interactions which becomes terrifically difficult to disrupt.  When that’s gone on long enough, and broadly enough, you have effectively raised up your own army – in the post-modern, guerilla sense of the word – who will go out there and fight for you and your brand when the haters and trolls and chaos-makers bear down upon you.  These people are connected to you, and will connect to one another because of the passion they share around your products and your business.  This is another network, an important network, an offensive network, and you need both defensive and offensive strategies to succeed on this playing field.

Just as we as individuals are growing into hyperconnectivity, so our businesses must inevitably follow.  Hyperconnected individuals working with disconnected businesses is a perfect recipe for confusion and disaster.  Like must meet with like before the real business of the 21st century can begin.

III: Services With a Smile

Moving from the abstract to the concrete, let’s consider the types of products and services required in our densely hyperconnected world.  First and foremost, we are growing into a pressing, almost fanatical need for continuous connectivity.  Wherever we are – even in airplanes – we must be connected.  The qualities of that connection – speed, reliability, and cost – are important co-factors to consider, and it is not always the cheapest connection which serves the customer best.  I pay a premium for my broadband connection because I can send the CEO of my ISP a text any time my link goes down – and my trouble tickets are sorted very rapidly!  Conversely, I went with a lower-cost carrier for my mobile service, and I am paying the price, with missed calls, failed data connections, and crashes on my iPhone.

As connectivity becomes more important, reliability crowds out other factors.  You can offer a premium quality service at a premium price and people will adopt it, for the same reason they will pay more for a reliable car, or for electricity from a reliable supplier, or for food that they’re sure will be wholesome.  Connectivity has become too vital to threaten.  This means there’s room for healthy competition, as providers offer different levels of service at different price points, competing on quality, so that everyone gets the level of service they can afford.  But uptime always will be paramount.

What service, exactly, is on offer?  Connectivity comes in at least two flavors: mobile and broadband.  These are not mutually exclusive.  When we’re stationary we use broadband; when we’re in motion we use mobile services.  The transition between these two networks should be as invisible and seamless as possible – as pioneered by Apple’s iPhone.

At home, in the office, at the café or library, in fact, in almost any structure, customers should have access to wireless broadband.  This is one area where Australia noticeably trails the rest of the world.  The tariff structure for Internet traffic has led Australians to be unusually conservative with their bits, because there is a specific cost incurred for each bit sent or received.  While this means that ISPs should always have the funding to build out their networks to handle increases in capacity, it has also meant that users protect their networks from use in order to keep costs down.  This fundamental dilemma has subjected wireless broadband in Australia to a subtle strangulation.  We do not have the ubiquitous free wireless access that many other countries – in particular, the United States – have on offer, and this consequently alters our imagination of the possibilities for ubiquitous networking.

Tariffs are now low enough that customers ought to be encouraged to offer wireless networking to the broader public.  There are some security concerns that need to be addressed to make this safe for all parties, but these are easily dealt with.  There is no fundamental barrier to pervasive wireless broadband.  It does not compete with mobile data services.  Rather, as wireless broadband becomes more ubiquitous, people come to rely on continuous connectivity ever more.  Mobile data demand will grow in lockstep as more wireless broadband is offered.  Investment in wireless broadband is the best way to ensure that mobile data services continue to grow.

Mobile data services are characterized principally by speed and availability.  Beyond a certain point – perhaps a megabit per second – speed is not an overwhelming lure on a mobile handset.  It’s nice but not necessary.  At that point, it’s much more about provisioning: how will my carrier handle peak hour in Flinders Street Station (or Central Station)?  Will my calls drop?  Will I be able to access my cloud-based calendar so that I can grab a map and a phone number to make dinner reservations?  If a customer finds themselves continually frustrated in these activities, one of two things will happen: either the mobile will go back into the pocket, more or less permanently, or the customer will change carriers.  Since the customer’s family, friends and business associates will not be putting their own mobiles back into their pockets, it is unlikely that any customer will do so for any length of time, irrespective of the quality of their mobile service.  If the carrier will not provision, the customers must go elsewhere.

Provisioning is expensive.  But it is also the only sure way to retain your customers.  A customer will put up with poor customer service if they know they have reliable service.  A customer will put up with a higher monthly spend if they have a service they know they can depend upon in all circumstances.  And a customer will quickly leave a carrier who cannot be relied upon.  I’ve learned that lesson myself.  Expect it to be repeated, millions of times over, in the years to come, as carriers, regrettably and avoidably, find that their provisioning is inadequate to support their customers.

Wireless is wonderful, and we think of it as a maintenance-free technology, at least from the customer’s point of view.  Yet this is rarely so.  Last month I listened to a talk by Genevieve Bell, Intel Fellow and Lead Anthropologist at the chipmaker.  Her job is to spend time in the field – across Europe and the developing world – observing  how people really use technology when it escapes into the wild.  Several years ago she spent some time in Singapore, studying how pervasive wireless broadband works in the dense urban landscape of the city-state.  In any of Singapore’s apartment towers – which are everywhere – nearly everyone has access to very high speed wired broadband (perhaps 50 megabits per second) – which is then connected to a wireless router to distribute the broadband throughout the apartment.  But wireless is no great respecter of walls.  Even in my own flat in Surry Hills I can see nine wireless networks from my laptop, including my own.  In a Singapore tower block, the number is probably nearer to twenty or thirty.

Genevieve visited a family who had recently purchased a wireless printer.  They were dissatisfied with it, pronouncing it ‘possessed’.  What do you mean? she inquired.  Well, they explained, it doesn’t print what they tell it to print.  But it does print other things.  Things they never asked for.  The family called for a grandfather to come over and practice his arts of feng shui, hoping to rid the printer of its evil spirits.  The printer, now repositioned to a more auspicious spot, still misbehaved.  A few days later, a knock came on the door.  Outside stood a neighbor, a sheaf of paper in his hands, saying, “I believe these are yours…?”

The neighbor had also recently purchased a wireless printer, and it seems that these two printers had automatically registered themselves on each other’s networks.  Automatic configuration makes wireless networks a pleasure to use, but it also makes for botched configurations and flaky communication.  Most of this is so far outside the skill set of the average consumer that these problems will never be properly remedied.  The customer might make a support call, and maybe – just maybe – the problem will be solved.  Or, the problem will persist, and the customer will simply give up.  Even with a support call, wireless networks are often so complex that the problem can’t be wholly solved.

As wireless networks grow more pervasive, Genevieve Bell recommends that providers offer a high-quality hand-holding and diagnostic service to their customers.  They need to offer a ‘tune-up’ service that will travel to the customer once a year to make sure everything is running well.  Consumers need to be educated that wireless networks do not come for free.  Like anything else, they require maintenance, and the consumer should come to expect that it will cost them something, every year, to keep it all up and running.  In this, a wireless network is no different than a swimming pool or a lawn.  There is a future for this kind of service: if you don’t offer it, your competitors soon will.

Finally, let me close with what the world looks like when all of these services are working perfectly.  Lately, I’ve become a big fan of Foursquare, a ‘location-based social network’.  Using the GPS on my iPhone, Foursquare allows me to ‘check in’ when I go to a restaurant, a store, or almost anywhere else.  Once I’ve checked in, I can make a recommendation – a ‘tip’ in Foursquare lingo – or simply look through the tips provided by those who have been there before me.  This list of tips is quickly growing longer, more substantial, and more useful.  I can walk into a bar that I’ve never been to before and know exactly which cocktail I want to order.  I know which table at the restaurant offers the quietest corner for a romantic date.  I know which salesperson to talk to for a good deal on that mobile handset.  And so on.  I have immediate and continuous information in depth, and I put that information to work, right now, to make my life better.

The world of hyperconnectivity isn’t some hypothetical place we’ll never see.  We are living in it now.  The seeds of the future are planted in the present.  But the shape of the future is determined by our actions today.  It is possible to blunt and slow Australia’s progress into this world with bad decisions and bad services.  But it is also possible to thrust the nation into global leadership if we can embrace the inevitable trend toward hyperconnectivity, and harness it.  It has already transformed our lives.  It will transform our businesses, our schools, and our government.  You are the carriers of that change.  Your actions will bring this new world into being.

What Ever Happened to the Book?

For Ted Nelson

I: Centrifugal Force

We live in the age of networks.  Wherever we are, five billion of us are continuously and ubiquitously connected.  That’s everyone over the age of twelve who earns more than about two dollars a day.  The network has us all plugged into it.  Yet this is only the more recent, and more explicit network.  Networks are far older than this most modern incarnation; they are the foundation of how we think.  That’s true at the most concrete level: our nervous system is a vast neural network.  It’s also true at a more abstract level: our thinking is a network of connections and associations.  This is necessarily reflected in the way we write.

I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982.  Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts.  Like it or not, we implicitly reference other texts with every word we write.  It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts.  It’s the secret to our success.  Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth.  He never got it built.  But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.

As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed qualities as a medium which had hitherto been unsuspected.  While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure.  Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone.  Do we answer?  Do we click and follow?  A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost.  The linear text is constantly weighed down with a secondary, ‘centrifugal’ force, trying to tear the reader away from the inertia of the text, and on into another space.  The more heavily linked a particular hypertext document is, the greater this pressure.

Consider two different documents that might be served up in a Web browser.  One of them is an article from the New York Times Magazine.  It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links.  Many of these links point back to other New York Times articles.  This article stands alone.  It is a hyperdocument, but it has not embraced the capabilities of the medium.  It has not been seduced.  It is a spinster, of sorts, confident in its purity and haughty in its isolation.  This article is hardly alone.  Nearly all articles I could point to from any professional news source display the same characteristics of separateness and resistance to connect with the medium they employ.  We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized.  Every link presents an escape route, and a potential loss of income.  Hence, links are kept to a minimum, the losses staunched.  Disappointingly, this has become a model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium.  The tone has been set.

On the other hand, consider an average article in Wikipedia.  It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links.  Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they’re unfamiliar with, or explore some tangential bit of knowledge, but there also will be plenty of links that face out, into the rest of the Web.  This is a hyperdocument which has embraced the nature of the medium, which is not afraid of luring readers away under the pressure of linkage.  Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention.  Without this competition of intentions, Wikipedia is itself an example of another variety of purity, the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.

Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug.  Landing somewhere with a paucity of links doesn’t constrain your ability to move non-linearly.  If nothing else, the browser’s ‘Back’ button continually offers that opportunity, as do all of your bookmarks, the links that lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site.  In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads.  In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it.  This is the fundamental tension of all hypertext, but here heightened and amplified because it is resisted and forbidden.  It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.

This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents.  It is most obvious in the way we now absorb news.  Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper.  Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on.  We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way.  The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole.  Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper ‘vehicle’.

The newspaper as we have known it has been shredded.  This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext.  We are the ones who feel the lure of the link; no machine can do that.  Newspapers made the brave decision to situate themselves as islands within a sea of hypertext.  Though they might believe themselves singular, they are not the only islands in the sea.  And we all have boats.  That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.

The lure of the link has a two-fold effect on our behavior.  With its centrifugal force, it is constantly pulling us away from wherever we are.  It also presents us with an opportunity cost.  When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment.  If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it.  Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this?  Does my need and interest outweigh all of the other demands upon my attention?  Can I focus?

In most circumstances, we will decline the challenge.  Whatever it is, it is not salient enough, not alluring enough.  It is not so much that we fear commitment as we feel the pressing weight of our other commitments.  We have other places to spend our limited attention.  This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”.  It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.

The emergence of the ‘tl;dr’ phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span.  Attention spans are not shortening: ten-year-olds will still drop everything to read a nine-hundred-page fantasy novel for eight days.  Instead, attention has entered an era of hypercompetitive development.  Twenty years ago only a few media clamored for our attention.  Now, everything from video games to chatroulette to real-time Twitter feeds to text messages demand our attention.  Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, all figuring into the calculation we make when we decide to go all in or hold back.

The most obvious effect of this hypercompetitive development of attention is the shortening of the text.  Under the tyranny of ‘tl;dr’ three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment.  More and more, our diet of text comes in these ‘bite-sized’ chunks.  Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything.  The truth is more complex.  Our diet will continue to consist of a mixture of short and long-form texts.  In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form.  It is digestible.  But it need not be vacuous.  Countless specialty blogs deliver highly-concentrated texts to audiences who need no introduction to the subject material.  They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit.  Here, the phenomenon of ‘tl;dr’ reveals its Achilles’ heel: the shorter the text, the less invested you are.  You give way more easily to centrifugal force.  You are more likely to navigate away.

There is a cost incurred both for substance and the lack thereof.  Such are the dilemmas of hypertext.

II:  Schwarzschild Radius

It appears inarguable that 2010 is the Year of the Electronic Book.  The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market.  Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package.  Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world.  The electronic book is an inevitability.

At this point a question needs to be asked: what’s so electronic about an electronic book?  If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar.  Too familiar.  This is not an electronic book.  This is ‘publishing in light’.  I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning.  An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters.  It doesn’t even necessarily begin with that translation.  Instead, first consider the text qua text.  What is it?  Who is it speaking to?  What is it speaking about?

These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light.  That act of murder would give us less than we had before, because texts published in light essentially disavow the medium within which they are situated.  They are less useful than typeset texts, purposely stripped of their utility to be shoehorned into a new medium.  This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader.  Nor does it make the electronic book an intrinsically alluring object.  That’s an interesting point to consider, because hypertext is intrinsically alluring.  The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who has any relationship to the text.  If an electronic book does not offer a new relationship to the text, then what precisely is the point?  Portability?  Ubiquity?  These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring.  This is the visible difference between a book that has been printed in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is impossibly alluring.  At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.

Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium.  But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe.  Today’s publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium.  For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.

What does the electronic book look like?  Does it differ at all from the hyperdocuments we are familiar with today?  In fifteen years of design experimentation, we’ve learned a lot of ways to present, abstract and play with text.  All of these are immediately applicable to the electronic book.  The electronic book should represent the best of what 2010 has to offer and move forward from that point into regions unexplored.  The printed volume took nearly fifty years to evolve into its familiar hand-sized editions.  Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book.  We shouldn’t try to constrain our idea of what an electronic book can be based upon what the book has been.  Over the next few years, our innovations will surprise us.  We won’t really know what electronic books look like until we’ve had plenty of time to play with them.

The electronic book will not be immune from the centrifugal force which is inherent to the medium.  Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument.  Yet we come to books with a sense of commitment.  We want to finish them.  But what, exactly, do we want to finish?  The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does.  So does an electronic book have a beginning and an end?  Or is it simply a densely clustered set of texts with a well-defined path traversing them?  From the vantage point of 2010 this may seem like a faintly ridiculous question.  I doubt that will be the case in 2020, when perhaps half of our new books are electronic books.  The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book.  There is no way that the electronic book can remain apart, indifferent and pure.  It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader.  More of a gradient than a boundary.

It remains unclear how any such construction can constitute an economically successful entity.  Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe.  The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen.  This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear.  Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.

We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books.  (For those not familiar with astrophysics, the Schwarzschild Radius is the boundary to a black hole.  Once you’re on the wrong side you’re doomed to fall all the way in.)  On one side – our side – things look much as they do today.  Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical.  On the other side, electronic books rapidly become almost completely unrecognizable.  It’s not just the financial model which disintegrates.  As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform.  The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words.  Each of these fragments points directly at its antecedent and descendant, or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each.  The primacy of the single narrative can not withstand the centrifugal force of the medium, any more than the newspaper or the magazine could.  Texts will present themselves as intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both.  This will completely confound our expectations of linearity in the text.

We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward, a fault more of our nervous systems than our browsers.  We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on.  This is the greatest threat to the book, that it simply expands beyond our ability to focus upon it.  Our consciousness can entertain a universe of thought, but it can not entertain the entire universe at once.  Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it.  With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason, significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.

What ever happened to the book?  It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever.  This is not an ending, any more than birth is an ending.  But it is a transition, at least as profound and comprehensive as the invention of moveable type.  It’s our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book.  Transitions are chaotic, but they are also fecund.  The seeds of the new grow in the humus of the old.  (And if it all seems sudden and sinister, I’ll simply note that Nietzsche said that a new era nearly always looks demonic to the age it obsolesces.)

III:  Finnegans Wiki

So what of Aristotle?  What does this mean for the narrative?  It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts.  But what about stories?  From time out of mind we have listened to stories told by the campfire.  The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale.  For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.

Will we lose all of this?  Can narratives stand up against the centrifugal forces of hypertext?  Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form.  The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind.  There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot.  A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity.  Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force.  We occasionally stay up all night reading a book that we ‘couldn’t put down’, carried along by that momentum.  It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.

This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books.  Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet.  How can any text hope to stand against that?

And yet, some do.  Children unplugged to read each of the increasingly lengthy Harry Potter novels, as teenagers did for the Twilight series.  Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination.  None of this is high literature, but it is literature capable of resisting all our alluring distractions.  This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc.  We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad.  That is one direction, a direction literary publishers will pursue, because that’s where the money lies.

There are two other paths open for literature, nearly diametrically opposed.  The first was taken by JRR Tolkien in The Lord of the Rings.  Although hugely popular, the three-book series has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating.  Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it.  And although readers do finish the book, in a very real sense they do not leave that universe.  The fantasy genre, which Tolkien single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations.  Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books.  Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy.  Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers.  This is another direction for the book.  While not every author will be a Tolkien, a few authors will work hard to create a universe so potent and broad that readers will be tempted to inhabit it.  (Some argue that this is the secret of JK Rowling’s success.)

Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext.  There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures.  But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas.  The book was written and published before the digital computer had been invented, yet it even features an innovation reminiscent of hypertext.  That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power.  It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite.  The text is overloaded with meaning, so much so that the mind can’t take it all in.  Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant.  But there is another possibility.  In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed.  In this, Finnegans Wake could be seen as a type of science fiction, not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works), but rather a text that pointed the way to what all texts would become, a performance by example.  As texts become electronic, as they melt and dissolve and link together densely, meaning multiplies exponentially.  Every sentence, and every word in every sentence, can send you flying in almost any direction.  The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.

It has been said that all of human culture could be reconstituted from Finnegans Wake.  As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture.  Everything will be there, all strung together.  And that’s what happened to the book.

Dense and Thick

I: The Golden Age

In October of 1993 I bought myself a used SPARCstation.  I’d just come off of a consulting gig at Apple, and, flush with cash, wanted to learn UNIX systems administration.  I also had some ideas about coding networking protocols for shared virtual worlds.  Soon after I got the SPARCstation installed in my lounge room – complete with its thirty-kilo monster of a monitor – I grabbed a modem, connected it to the RS-232 port, configured SLIP, and dialed out onto the Internet.  Once online I used FTP, logged into SUNSITE and downloaded the newly released NCSA Mosaic, a graphical browser for the World Wide Web.

I’d first seen Mosaic running on an SGI workstation at the 1993 SIGGRAPH conference.  I knew what hypertext was – I’d built a Macintosh-based hypertext system back in 1986 – so I could see what Mosaic was doing, but there wasn’t much there.  Not enough content to make it really interesting.  The same problem had bedeviled all hypertext systems since Douglas Engelbart’s first demo, back in 1968.  Without sufficient content, hypertext systems are fundamentally uninteresting.  Even HyperCard, Apple’s early experiment in hypertext, never really moved beyond the toy stage.  To make hypertext interesting, it must be broadly connected – beyond a document, beyond a hard drive.  Either everything is connected, or everything is useless.

In the three months between my first click on NCSA Mosaic and when I fired it up in my lounge room, a lot of people had come to the Web party.  The master list of Websites – maintained by CERN, the birthplace of the Web – kept growing.  Over the course of the last week of October 1993, I visited every single one of those Websites.  Then I was done.  I had surfed the entire World Wide Web.  I was even able to keep up, as new sites were added.

This gives you a sense of the size of the Web universe in those very early days.  Before the explosive ‘inflation’ of 1994 and 1995, the Web was a tiny, tidy place filled mostly with academic websites.  Yet even so, the Web had the capacity to suck you in.  I’d find something that interested me – astronomy, perhaps, or philosophy – and with a click-click-click find myself deep within something that spoke to me directly.  This, I believe, is the core of the Web experience, an experience now so many years behind us that we tend to overlook it.  At its essence, the Web is personally seductive.

I realized the universal truth of this statement on a cold night in early 1994, when I dragged my SPARCstation and boat-anchor monitor across town to a house party.  This party, a monthly event known as Anon Salon, was notorious for attracting the more intellectual and artistic crowd in San Francisco.  People would come to perform, create, demonstrate, and spectate.  I decided I would show these people this new-fangled thing I’d become obsessed with.  So, that evening, as the front door opened and another person entered, I’d sidle up alongside them and ask, “So, what are you interested in?”  They’d mention their current hobby – gardening or vaudeville or whatever it might be – and I’d use the brand-new Yahoo! category index to look up a web page on the subject.  They’d be delighted, and begin to explore.  At no point did I say, “This is the World Wide Web.”  Nor did I use the word ‘hypertext’.  I let the intrinsic seductiveness of the Web snare them, one by one.

Of course, a few years later, San Francisco became the epicenter of the Web revolution.  Was I responsible for that?  I’d like to think so, but I reckon San Francisco was a bit of a nexus.  I wasn’t the only one exploring the Web.  That night at Anon Salon I met Jonathan Steuer, who walked on up and said, “Mosaic, hmm?  How about you type in ‘www.hotwired.com’?”  Steuer was part of the crew at work, just a few blocks away, bringing WIRED magazine online.  Everyone working on the Web shared the same fervor – an almost evangelical belief that the Web changes everything.  I didn’t have to tell Steuer, and he didn’t have to tell me.  We knew.  And we knew if we simply shared the Web – not the technology, not its potential, but its real, seductive human face – we’d be done.

That’s pretty much how it worked out: the Web exploded from the second half of 1994, because it appeared to every single person who encountered it as the object of their desire.  It was, and is, all things to all people.  This makes it the perfect love machine – nothing can confirm your prejudices better than the Web.  It also makes the Web a very pretty hate machine.  It is the reflector and amplifier of all things human.  We were completely unprepared, and for that reason the Web has utterly overwhelmed us.  There is no going back.  If every website suddenly crashed, we would find another way to recreate the universal infinite hypertextual connection.

In the process of overwhelming us – in fact, part of the process itself – the Web has hoovered up the entire space of human culture; anything that can be digitized has been sucked into the Web.  Of course, this presents all sorts of thorny problems for individuals who claim copyright over cultural products, but they are, in essence, swimming against the tide.  The rest, everything that marks us as definably human, everything that is artifice, has, over the last fifteen years, been neatly and completely sucked into the space of infinite connection.  The project is not complete – it will never be complete – but it is substantially underway, and more will simply be more: it will not represent a qualitative difference.  We have already arrived at a new space, where human culture is now instantaneously and pervasively accessible to any of the four and a half billion network-connected individuals on the planet.

This, then, is the Golden Age, a time of rosy dawns and bright beginnings, when everything seems possible.  But this age is drawing to a close.  Two recent developments will, in retrospect, be seen as the beginning of the end.  The first of these is the transformation of the oldest medium into the newest.  The book is coextensive with history, with the largest part of what we regard as human culture.  Until five hundred and fifty years ago, books were handwritten, rare and precious.  Moveable type made books a mass medium, and lit the spark of modernity.  But the book, unlike nearly every other medium, has resisted its own digitization.  This year the defenses of the book have been breached, and ones and zeroes are rushing in.  Over the next decade perhaps half or more of all books will ephemeralize, disappearing into the ether, never to return to physical form.  That will seal the transformation of the human cultural project.

On the other hand, the arrival of the Web-as-appliance means it is now leaving the rarefied space of computers and mobiles-as-computers, and will now be seen as something as mundane as a book or a dinner plate.  Apple’s iPad is the first device of an entirely new class which treats the Web as an appliance, as something that is pervasively just there when needed, and put down when not.  The genius of Apple’s design is its extreme simplicity – too simple, I might add, for most of us.  It presents the Web as a surface, nothing more.  iPad is a portal into the human universe, stripped of everything that is a computer.  It is emphatically not a computer.  Now, we can discuss the relative merits of Apple’s design decisions – and we will, for some years to come.  But the basic strength of the iPad’s simple design will influence what the Web is about to become.

eBooks and the iPad bookend the Golden Age; together they represent the complete translation of the human universe into a universally and ubiquitously accessible form.  But the human universe is not the whole universe.  We tend to forget this as we stare into the alluring and seductive navel of our ever-more-present culture.  But the real world remains, and loses none of its importance even as the flashing lights of culture grow brighter and more hypnotic.

II: The Silver Age

Human beings have the peculiar capability to endow material objects with inner meaning.  We know this as one of the basic characteristics of humanness.  From the time a child anthropomorphizes a favorite doll or wooden train, we imbue the material world with the attributes of our own consciousness.  Soon enough we learn to discriminate between the animate and the inanimate, but we never surrender our continual attribution of meaning to the material world.  Things are never purely what they appear to be; instead, we overlay our own meanings and associations onto every object in the world.  This process actually provides the mechanism by which the world comes to make sense to us.  If we could not overload the material world with meaning, we could not come to know it or manipulate it.

This layer of meaning is most often implicit; only in works of ‘art’ does the meaning crowd into the definition of the material itself.  But none of us can look at a thing and be completely innocent about its hidden meanings.  They constantly nip at the edges of our consciousness, unless, Zen-like, we practice an ‘emptiness of mind’, and attempt to encounter the material in an immediate, moment-to-moment awareness.  For those of us not in such a blessed state, the material world has a subconscious component.  Everything means something.  Everything is surrounded by a penumbra of meaning, associations that may be universal (an apple can invoke the Fall of Man, or Newton’s Laws of Gravity), or something entirely specific.  Through all of human history the interiority of the material world has remained hidden except in those moments when we choose to allude to it.  It is always there, but rarely spoken of.  That is about to change.

One of the most significant, yet least understood implications of a planet where everyone is ubiquitously connected to the network via the mobile is that it brings the depth of the network ubiquitously to the individual.  You are – amazingly – connected to the other five billion individuals who carry mobiles, and you are also connected to everything that’s been hoovered into cyberspace over the past fifteen years.  That connection did not become entirely apparent until last year, as the first mobiles appeared with both GPS and compass capabilities.  Suddenly, it became possible to point through the camera on a mobile, and – using the location and orientation of the device – search through the network.

This technique has become known as ‘Augmented Reality’, or AR, and it promises to be one of the great growth areas in technology over the next decade – but perhaps not for the reasons the leaders of the field currently envision.  The strength of AR is not what it brings to the big things – the buildings and monuments – but what it brings to the smallest and most common objects in the material world.  At present, AR is flashy, but not at all useful.  It’s about to make a transition.  It will no longer be spectacular, but we’ll wonder how we lived without it.
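
To make that concrete, here is a minimal sketch of the primitive underneath every AR application: given the device’s GPS fix and compass heading, work out which known objects lie in front of the camera.  The landmarks, coordinates, and field of view below are invented for illustration.

```python
import math

# Hypothetical points of interest; a real AR service would fetch these
# from the network based on the device's current location.
POIS = [
    {"name": "Town Hall", "lat": -41.2865, "lon": 174.7762},
    {"name": "City Gallery", "lat": -41.2889, "lon": 174.7773},
]

def bearing(lat1, lon1, lat2, lon2):
    """Initial compass bearing, in degrees, from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def in_view(lat, lon, heading, fov=30):
    """Names of the POIs lying within `fov` degrees of the compass heading."""
    hits = []
    for poi in POIS:
        b = bearing(lat, lon, poi["lat"], poi["lon"])
        delta = min(abs(b - heading), 360 - abs(b - heading))
        if delta <= fov / 2:
            hits.append(poi["name"])
    return hits

print(in_view(-41.2870, 174.7760, 150))  # facing south-southeast
```

Everything else in AR – the overlays, the floating labels – is elaboration upon that one calculation: where am I, which way am I facing, and what is known about what lies in that direction.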

Let me illustrate the nature of this transition with three examples drawn from my own experience.  These ‘thought experiments’ represent different axes of the transition from a world of implicit meaning to a world where the implicit has become explicit.  Once meaning is exposed, it can be manipulated: this is something unexpected, and unexpectedly powerful.

Example One:  The Book

Last year I read a wonderful book.  The Rest is Noise: Listening to the Twentieth Century, by Alex Ross, is a thorough and thoroughly enjoyable history of music in the 20th century.  By music, Ross means what we would commonly call ‘classical’ music, even though the Classical period ended some two hundred years ago.  That’s not as stuffy as it sounds: George Gershwin and Aaron Copland are both major figures in 20th century music, though their works have always been classed as ‘popular’.

Ross’ book has a companion website, therestisnoise.com, which offers up chapter-by-chapter samples of the composers whose lives and exploits he explores in the text.  When I wrote The Playful World, back in 2000, and built a companion website to augment the text, it was considered quite revolutionary, but this is all pretty much standard for better books these days.

As I said earlier, the book is on the edge of ephemeralization.  It wants to be digitized, because it has always been a message, encoded.  When I dreamed up this example, I thought it would be very straightforward: you’d walk into your bookstore, point your smartphone at a book that caught your fancy, and instantly you’d find out what your friends thought of it, what their friends thought of it, what the reviewers thought of it, and so on.  You’d be able to make a well-briefed decision on whether this book is the right book for you.  Simple.  In fact, Google Labs has already shown a basic example of this kind of technology in a demo running on Android.

But that’s not what a book is anymore.  Yes, it’s good to know whether you should buy this or that book, but a book represents an investment of time, and an opportunity to open a window into an experience of knowledge in depth.  It’s this intention that the device has to support.  As the book slowly dissolves into the sea of fragmentary but infinitely threaded nodes of hypertext which are the human database, the device becomes the focal point, the lens through which the whole book appears, and appears to assemble itself.

This means that the book will vary, person to person.  My fragments will be sewn together with my threads, yours with your threads.  The idea of unitary authorship – persistent over the last five hundred years – won’t be overwhelmed by the collective efforts of crowdsourcing, but rather by the corrosive effects of hyperconnection.  The more connected everything becomes, the less prone we are to linearity.  We already see this in the ‘tl;dr’ phenomenon, where any text over 300 words becomes too onerous to read.

Somehow, whatever the book is becoming must balance the need for clarity and linearity against the centrifugal and connective forces of hypertext.  The book is about to be subsumed within the network; the device is the place where it will reassemble into meaning.  The implicit meaning of the book – that it has a linear story to tell, from first page to last – must be made explicit if the idea and function of the book is to survive.

The book stands on the threshold, between the worlds of the physical and the immaterial.  As such it is pulled in both directions at once.  It wants to be liberated, but will be utterly destroyed in that liberation.  The next example is something far more physical, and, consequently, far more important.

Example Two: Beef Mince

I go into the supermarket to buy myself the makings for a nice Spaghetti Bolognese.  Among the ingredients I’ll need some beef mince (ground beef for those of you in the United States) to put into the sauce.  Today I’d walk up to the meat case and throw a random package into my shopping trolley.  If I were being thoughtful, I’d probably read the label carefully, to make sure the expiration date wasn’t too close.  I might also check to see how much fat is in the mince.  Or perhaps it’s grass-fed beef.  Or organically grown.  All of this information is offered up on the label placed on the package.  And all of it is so carefully filtered that it means nearly nothing at all.

What I want to do is hold my device up to the package, and have it do the hard work.  Go through the supermarket to the distributor, through the distributor to the abattoir, through the abattoir to the farmer, through the farmer to the animal itself.  Was it healthy?  Where was it slaughtered?  Is that abattoir healthy?  (This isn’t much of an issue in Australia or New Zealand, but in America things are quite a bit different.)  Was it fed lots of antibiotics in a feedlot?  Which ones?

And – perhaps most importantly – what about the carbon footprint of this little package of mince?  How much CO2 was created?  How much methane?  How much water was consumed?  These questions, at the very core of 21st century life, need to be answered on demand if we can be expected to adjust our lifestyles so as to minimize our footprint on the planet.  Without a system like this, it is essentially impossible.  With such a system it can potentially become easy.  As I walk through the market, popping items into my trolley, my device can record and keep me informed of a careful balance between my carbon budget and my financial budget, helping me to optimize both – all while referencing my purchases against sales on offer in other supermarkets.
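
Here is a toy sketch of what the device would be doing behind the scenes.  It assumes an infrastructure that does not yet exist – each link in the supply chain publishing a record that points at the link upstream of it – and all of the data is invented:

```python
# Walk the provenance chain for a scanned package.  Each record names the
# link upstream of it, so one scan can trace the path from the supermarket
# shelf all the way back to the farm, tallying carbon along the way.
RECORDS = {
    "pkg-4711": {"kind": "package",     "desc": "beef mince 500g", "co2_kg": 0.2, "from": "dist-12"},
    "dist-12":  {"kind": "distributor", "desc": "cold chain",      "co2_kg": 1.1, "from": "abat-3"},
    "abat-3":   {"kind": "abattoir",    "desc": "certified",       "co2_kg": 2.4, "from": "farm-88"},
    "farm-88":  {"kind": "farm",        "desc": "grass-fed",       "co2_kg": 8.9, "from": None},
}

def provenance(item_id):
    """Yield each record on the walk upstream from item_id."""
    while item_id is not None:
        record = RECORDS[item_id]
        yield record
        item_id = record["from"]

chain = list(provenance("pkg-4711"))
for r in chain:
    print(f'{r["kind"]:>12}: {r["desc"]}')
print("total CO2:", sum(r["co2_kg"] for r in chain), "kg")
```

One scan, one walk up the chain, and the implicit history of the package becomes explicit.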

Finally, what about the caloric count of that packet of mince?  And its nutritional value?  I should be tracking those as well – or rather, my device should – so that I can maintain optimal health.  I should know whether I’m getting too much fat, or insufficient fiber, or – as I’ll discuss in a moment – too much sodium.  Something should be keeping track of this.  Something that can watch and record and use that recording to build a model.  Something that can connect the real world of objects with the intangible set of goals that I have for myself.  Something that could do that would be exceptionally desirable.  It would be as seductive as the Web.

The more information we have at hand, the better the decisions we can make for ourselves.  It’s an idea so simple it is completely self-evident.  We won’t need to convince anyone of this, to sell them on the truth of it.  They will simply ask, ‘When can I have it?’  But there’s more.  My final example touches on something so personal and so vital that it may become the center of the drive to make the implicit explicit.

Example Three:  Medicine

Four months ago, I contracted adult-onset chickenpox.  Which was just about as much fun as that sounds.  (And yes, since you’ve asked, I did have it as a child.  Go figure.)  Every few days I had doctors come by to make sure that I was surviving the viral infection.  While the first doctor didn’t touch me at all – understandably – the second doctor took my blood pressure, and showed me the reading – 160/120, uncomfortably high.  He suggested that I go on Micardis, a common medication for hypertension.  I was too sick to argue, so I dutifully filled the prescription and began taking it that evening.

Whenever I begin taking a new medication – and I’m getting to an age where that happens with annoying regularity – I am always somewhat worried.  Medicines are never perfect; they work for a certain large cohort of people.  For others they do nothing at all.  For a far smaller number, they might be toxic.  So, when I popped that pill in my mouth I did wonder whether that medicine might turn out to be poison.

The doctor who came to see me was not my regular GP.  He did not know my medical history.  He did not know the history of the other medications I had been taking.  All he knew was what he saw when he walked into my flat.  That could be a recipe for disaster.  Not in this situation – I was fine, and have continued to take Micardis – but there are numerous other situations where medications can interact within the patient to cause all sorts of problems.  This is well known.  It is one of the drawbacks of modern pharmaceutical medicine.

This situation is only going to grow more intense as the population ages and pharmaceutical management of the chronic diseases of aging becomes ever-more-pervasive.  Right now we rely on doctors and pharmacists to keep their own models of our pharmaceutical consumption.  But that’s a model which is precisely backward.  While it is very important for them to know what drugs we’re on, it is even more important for us to be able to manage that knowledge for ourselves.  I need to be able to point my device at any medicine, and know, more or less immediately, whether that medicine will cure me or kill me.
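
The check itself is computationally trivial; what is missing is the data and the plumbing.  A toy sketch, with an invented interaction table standing in for the clinical databases that would do the real work:

```python
# My current medications, as my device would record them.
CURRENT_MEDS = {"telmisartan"}  # Micardis

# Hypothetical interaction table; real entries would come from a
# clinical database, and eventually from my own genome.
BAD_PAIRS = {
    frozenset({"telmisartan", "ibuprofen"}): "NSAIDs can blunt blood-pressure control",
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
}

def check(new_med):
    """Warn about known interactions between new_med and my current meds."""
    warnings = []
    for med in CURRENT_MEDS:
        note = BAD_PAIRS.get(frozenset({med, new_med}))
        if note:
            warnings.append(f"{new_med} + {med}: {note}")
    return warnings or ["no known interactions"]

print(check("ibuprofen"))
```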

Over the next decade the cost of sequencing an entire human genome will fall from the roughly $5000 it costs today to less than $500.  Well within the range of your typical medical test.  Once that happens, it will be possible to compile epidemiological data which compares various genomes to the effectiveness of drugs.  Initial research in this area has already shown that some drugs are more effective among certain ethnic groups than others.  Our genome holds the clue to why drugs work, why they occasionally don’t, and why they sometimes kill.

The device is the connection point between our genome – which lives, most likely, somewhere out on a medical cloud – and the medicines we take, and the diagnoses we receive.  It is our interface to ourselves, and in that becomes an object of almost unimaginable importance.  In twenty years’ time, when I am ‘officially’ a senior, I will have a handheld device – an augmented reality – whose sole intent is to keep me as healthy as possible for as long as possible.  It will encompass everything known about me medically, and will integrate with everything I capture about my own life – my activities, my diet, my relationships.  It will work with me to optimize everything we know about health (which is bound to be quite a bit by 2030) so that I can live a long, rich, healthy life.

These three examples represent the promise bound up in the collision between the handheld device and the ubiquitous, knowledge-filled network.  There are already bits and pieces of much of this in place.  It is a revolution waiting to happen.  That revolution will change everything about the Web, and why we use it, how, and who profits from it.

III:  The Bronze Age

By now, some of you sitting here listening to me this afternoon are probably thinking, “That’s the Semantic Web.  He’s talking about the Semantic Web.”  And you’re right, I am talking about the Semantic Web.  But the Semantic Web as proposed and endlessly promoted by Sir Tim Berners-Lee was always about pushing, pushing, pushing to get the machines talking to one another.  What I have demonstrated in these three thought experiments is a world that is intrinsically so alluring and so seductive that it will pull us all into it.  That’s the vital difference which made the Web such a success in 1994 and 1995.  And it’s about to happen once again.

But we are starting from near zero.  Right now, I should be able to hold up my device, wave it around my flat, and have an interaction with the device about what’s in my flat.  I can not.  I can not Google for the contents of my home.  There is no place to put that information, even if I had it, nor systems to put that information to work.  It is exactly like the Web in 1993: the lights on, but nobody home.  We have the capability to conceive of the world-as-a-database.  We have the capability to create that database.  We have systems which can put that database to work.  And we have the need to overlay the real world with that rich set of data.

We have the capability, we have the systems, we have the need.  But we have precious little connecting these three.  These are not businesses that exist yet.  We have not brought the real world into our conception of the Web.  That will have to change.  As it changes, the door opens to a crescendo of innovations that will make the Web revolution look puny in comparison.  There is an opportunity here to create industries bigger than Google, bigger than Microsoft, bigger than Apple.  As individuals and organizations figure out how to inject data into the real world, entirely new industry segments will be born.

I can not tell you exactly what will fire off this next revolution.  I doubt it will be the integration of Wikipedia with a mobile camera.  It will be something much more immediate.  Much more concrete.  Much more useful.  Perhaps something concerned with health.  Or with managing your carbon footprint.  Those two seem the most obvious to me.  But the real revolution will probably come from a direction no one expects.  It’s nearly always that way.

There’s no reason to think that Wellington couldn’t be the epicenter of that revolution.  There was nothing special about San Francisco back in 1993 and 1994.  But, once things got started, they created a ‘virtuous cycle’ of feedback that brought the best and brightest to San Francisco to build out the Web.  Wellington is doing that for the film industry; why shouldn’t it stretch out a bit, and invent the next-generation ‘web of things’?

This is where the future is entirely in your hands.  You can leave here today promising yourself to invent the future, to write meaning explicitly onto the real world, to transform our relationship to the universe of objects.  Or, you can wait for someone else to come along and do it.  Because someone inevitably will.  Every day, the pressure grows.  The real world is clamoring to crawl into cyberspace.  You can open the door.

Using the Network for Business Success

I.  My, How Things Have Changed

When I came to Australia six years ago, to seek my fame and fortune, business communications had remained largely unchanged for nearly a century.  You could engage in face-to-face conversation – something humans have been doing since we learned to speak, countless thousands of years ago – or, if distance made that impossible, you could drop a letter into the post.  Australia Post is an excellent organization, and seems to get all of the mail delivered within a day or two – quite an accomplishment in a country as dispersed and diffuse as ours.

In the twentieth century, the telephone became the dominant form of business communication; Australia Post wired the nation up, and let us talk to one another.  Conversation, mediated by the telephone, became the dominant mode of communication.  About twenty years ago the facsimile machine dropped in price dramatically, and we could now send images over phone lines.

The facsimile translates images into data and back into images again.  That’s when the critical threshold was crossed: from that point on, our communications have always centered on data.  The Internet arrived in 1995, and broadband in 2001.  In the first years of Internet usage, electronic mail was both the ‘killer app’ and the thing that began to supplant the telephone for business correspondence.  Electronic mail is asynchronous – you can always pick it up later.  Email is non-local, particularly when used through a service such as Hotmail or Gmail – you can get it anywhere.  Until mobiles started to become pervasive for business uses, the telephone was always a hit-or-miss affair.  Electronic mail is a hit, every time.

Such was the business landscape when I arrived in Australia.  The Web had arrived, and businesses eagerly used it as a publishing medium – a cheap way of getting information to their clients and customers.  But the Web was changing.  It had taken nearly a decade of working with the Web, day-to-day, before we discovered that the Web could become a fully-fledged two-way medium: the Web could listen as well as talk.  That insight changed everything.  The Web morphed into a new beast, christened ‘Web 2.0’, and everywhere the Web invited us to interact, to share, to respond, to play, to become involved.  This transition has fundamentally changed business communication, and it’s my goal this morning to outline the dimensions of that transformation.

This transformation unfolds in several dimensions.  The first of these – and arguably the most noticeable – is how well-connected we are these days.  So long as we’re in range of a cellular radio signal, we can be reached.  The number of ways we can be reached is growing almost geometrically.  Five years ago we might have had a single email address.  Now we have several – certainly one for business, and one for personal use – together with an account on Facebook (nearly eight million of the 22 million Australians have Facebook accounts), perhaps another account on MySpace, another on Twitter, another on YouTube, another on Flickr.  We can get a message or maintain contact with someone through any of these connections.  Some individuals have migrated to Facebook for the majority of their communications – there’s no spam, and they’re assured the message will be delivered.  Among under-25s, electronic mail is seen as a technology of the ‘older generation’, something that one might use for work, but has no other practical value.  Text messaging and messaging-via-Facebook have replaced electronic mail.

This increased connectivity hasn’t come for free.  Each of us is now under a burden to maintain all of the various connections we’ve opened.  At the most basic level, we must at least monitor all of these channels for incoming messages.  That can easily get overwhelming, as each channel clamors for attention.

But wait.  We’ve dropped Facebook and Twitter into the conversation before I even explained what they are and how they work.  We just take them as a fact of life these days, but they’re brand new.  Facebook was unknown just three years ago, and Twitter didn’t zoom into prominence until eighteen months ago.  Let’s step back and take a look at what social networks are.  In a very real way, we’ve always known exactly what a social network is: since we were very small we’ve been reaching out to other people and establishing social relationships with them.  In the beginning that meant our mothers and fathers, sisters and brothers.  As we grew older that list might grow to include some of the kids in the neighborhood, or at pre-kindy, and then our school friends.  By the time we make it to university, that list of social relationships is actually quite long.  But our brains have limited space to store all those relationships – it’s actually the most difficult thing we do, the most cognitively all-encompassing task.  Forget physics – relationships are harder, and take more brainpower.

Nature has set a limit of about one hundred and fifty on the social relationships we can manage in our heads.  That’s not a static number – it’s not as though, the moment you reach 150, you’re full.  Rather, it’s a sign of how many relationships of importance you can manage at any one time.  None of us, not even the most socially adept, can go very much beyond that number.  We just don’t have the grey matter for it.

Hence, fifty years ago mankind invented the Rolodex – a way of keeping track of all the information we really should remember but can’t possibly begin to absorb.  A real, living Rolodex (and there are few of them these days) is a wonder to behold, with notes scribbled in the margins, business cards stapled to the backs of the Rolodex cards, and a glorious mess of information, all alphabetically organized.  The Rolodex was mankind’s first real version of the modern, digital, social network.  But a Rolodex doesn’t think for itself; a Rolodex can not draw out the connections between the different cards.  A Rolodex does not make explicit what we know implicitly: we live in a very interconnected world, and many of our friends and associates are also friends and associates with one another.

That is precisely what Facebook gives us.  It makes those implicit connections explicit.  It allows those connections to become conduits for ever-greater-levels of connection.  Once those connections are made, once they become a regular feature of our life, we can grow beyond the natural limit of 150.  That doesn’t mean you can manage any of these relationships well – far from it.  But it does mean that you can keep the channels of communication open.  That’s really what all of these social networks are: turbocharged Rolodexes, which allow you to maintain far more relationships than ever before possible.
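
In computational terms, what the social network adds to the Rolodex is almost embarrassingly simple – which is exactly why it scales so well.  A toy sketch, with invented names:

```python
# The 'cards': who is connected to whom.
CONTACTS = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice", "dave"},
    "dave":  {"bob", "carol"},
}

def mutual_friends(a, b):
    """The people connected to both a and b: an implicit link made explicit."""
    return CONTACTS[a] & CONTACTS[b]

print(mutual_friends("alice", "dave"))  # {'bob', 'carol'}
```

A paper Rolodex stores the cards; the social network computes the connections between them.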

Once these relationships are established, something begins to happen quite naturally: people begin to share.  What they share is often driven by the nature of the relationship – though we’ve all seen examples where individuals ‘over-share’ inappropriately, confusing business and social channels of communication.  That sort of thing is very easy to do with social networks such as Facebook, because it doesn’t provide an easy method to send messages out to different groups of friends.  We might want a social network where business friends might get something very formal, while close friends might get that photo of you doing tequila shots at last weekend’s birthday party.  It’s a great idea, isn’t it?  But it can’t be done.  Not on Facebook, not on Twitter.  Your friends are all lumped together into one undifferentiated whole.  That’s one way that those social networks are very different from the ones inside our heads.  And it’s something to be constantly aware of when sharing through social networks.

That said, this social sharing has become an incredibly potent force.  More videos are uploaded to YouTube every day than all television networks all over the world produce in a year.  It may not be material of the same quality, but that doesn’t matter – most of those videos are only meant to be seen among a small group of family or friends.  We send pictures around, we send links around, we send music around (though that’s been cause for a bit of trouble), we share things because we care about them, and because we care about the people we’re sharing with.  Every act of sharing, business or personal, brings the sharer and the recipient closer together.  It truly is better to give than receive.  On the other hand, we’re also drowning in shared material.  There’s so much, coming from every corner, through every one of these social networks, there’s no possible way to keep up.  So, most of us don’t.  We cherry-pick, listening to our closest friends and associates: the things they share with us are the most meaningful.  We filter the noise and hope that we’re not missing anything very important.  (We usually are.)

In certain very specific situations, sharing can produce something greater than the sum of its parts.  A community can get together and decide to pool what it knows about a particular domain of knowledge, can ‘wise up’ by sharing freely.  This idea of ‘collective intelligence’ producing a shared storehouse of knowledge is the engine that drives sites like Wikipedia.  We all know Wikipedia, we all know how it works – anyone can edit anything in any article within it – but the wonder of Wikipedia is that it works so well.  It’s not perfectly accurate – nothing ever is – but it is good enough to be useful nearly all the time.  Here’s the thing: you can come to Wikipedia ignorant and leave it knowing something.  You can put that knowledge to work to make better decisions than you would have in your state of ignorance.  Wikipedia can help you wise up.

Wikipedia isn’t the only example of shared knowledge.  A decade ago a site named TeacherRatings.com went online, inviting university students to provide ratings of their professors, lecturers and instructors.  Today it’s named RateMyProfessors.com, is owned by MTV Networks, and has over ten million ratings of one million instructors.  This font of shared knowledge has become so potent that students regularly consult the site before deciding which classes they’ll take next semester at university.  Universities can no longer saddle students with poor teachers (who may also be fantastic researchers).  There are bidding wars taking place for the lecturers who get the highest ratings on the site.  This sharing of knowledge has reversed the power relationship between a university and its students which stretches back nearly a thousand years.

Substitute ‘business’ for ‘university’ and ‘customers’ for ‘students’ and you see why this is so significant.  In an era where we’re hyperconnected, where people share, and share knowledge, things are going to work a lot differently than they did before.  These all-important relationships between businesses and their customers (potential and actual) have been completely rewritten.  Let’s talk about that.

II.  Breaking In

The most important thing you need to know about the new relationship between yourselves and your customers is that your customers are constantly engaging in a conversation about you.  At this point, you don’t know where those customers are, or what they’re saying.  They could be saying something via a text message, or a Facebook post, or an email, or on Twitter.  Any and all of these conversations about you are going on right now.  But you don’t know about them, so there’s no way you can participate in them.

I’ll give you an example I used in my column in NETT magazine.  My mate John Allsopp (a big-time Web developer, working on the next generation of Web technologies) travels a lot for business.  Back in June, on a trip to the US, he decided to give VAustralia’s Premium Economy class a try.  He was so pleased about the service – and the sleep he got – that he immediately sent out a tweet: “At LAX waiting for flight to Denver. Best flight ever on VAustralia Premium Economy. Fantastic seat, service, and sleep. Hooked.”  That message went out to twelve hundred of John’s Twitter followers – many of whom are Australians.  It was quickly answered by a tweet from Cheryl Gledhill: “isn’t VAustralia the bomb!! My favourite airline at the moment… so roomy, and great entertainment, nice hosties, etc.”  That message went to Cheryl’s 250 followers.  I chimed in, too: “Precisely how I felt after my VA flights last month: hooked. Got 7 hours sleep each way. Worth the price.”  That message went out to fifty-two hundred of my followers – who are disproportionately Australian.

Just between the three of us, we might have reached as many as seven thousand people – individuals who are like ourselves – because like connects to like in social networks.  That means these are individuals who are likely to take advantage of VAustralia the next time they fly the transpacific route.  But here’s the sad thing: VAustralia had no idea this wonderful and loving conversation about their product was going on.  No idea at all.  You know what they were involved in?  A ‘4320SYD’ campaign, dreamed up by their ad agency, which flew four mates to Los Angeles for three days, promising them free round-the-world flights on the various Virgin airlines if they sent at least two thousand tweets during their trip.  VAustralia – or rather, VAustralia’s ad agency – presumed that people with busy lives would spend some of their precious time and attention following four blokes spewing out line after line of inane chatter.  Naturally, the campaign disappeared without a trace.

If VAustralia had asked its agency to monitor Twitter, to keep its finger on the pulse of what was being said online, things could have turned out very differently.  Perhaps a VAustralia rep would have contacted John Allsopp directly, thanked him for his kind words, and offered him a $100 coupon for his next flight on VAustralia Premium Economy.  VAustralia would have made a customer for life – and for a lot less than they spent on the ‘4320SYD’ campaign.

Marketers and agencies are still thinking in terms of mass markets and mass media.  While both do still exist, they don’t shape perception as they did a generation ago.  Instead, we turn to the hyperconnections we have with one another.  I can instantly ask Twitter for a review of a restaurant, a gadget, or a movie, and I do.  So do millions of others.  This is the new market, and this is the place where marketing – at least as we’ve known it – can not penetrate.

That’s one problem.  There’s another, larger problem: what happens when you have an angry customer?  Let me tell you a story about my friend Kate Carruthers, who will be speaking with you later this morning.  On a recent trip to Queensland, she pulled out her American Express credit card to pay for a taxi fare.  Her card was declined.  Kate paid with another card and thought little of it until the next time she tried to use the card – this time to pay for something rather pricier, and more sensitive – only to find her card declined once again.

As it turned out, AMEX had cut her credit line in half, but hadn’t bothered to inform her of this until perhaps a day or two before, via post.  So here’s Kate, far away from home, with a crook credit card.  Thank goodness she had another card with her, or it could have been quite a problem.  When she contacted AMEX to discuss the credit line change – on a Friday evening – she discovered that this ‘consumer’ company kept banker’s hours in its credit division.  That, for Kate, was the last straw.  She began to post a series of messages to Twitter:

“I can’t believe how rude Amex have been to me; cut credit limit by 50% without notice; declined my card while in QLD even though acct paid”

“since Amex just treated me like total sh*t I just posted a chq for the balance of my account & will close acct on Monday”

“Amex is hardly accepted anywhere anyhow so I hardly use it now & after their recent treatment I’m outta there”

“luckily for me I have more than enough to just pay the sucker out & never use Amex again”

“have both a gold credit card & gold charge card with amex until monday when I plan to close both after their crap behaviour”

Kate is both a prolific user of Twitter and a very well connected individual.  There are over seven thousand individuals reading her tweets.  Seven thousand people who saw Kate ‘go nuclear’ over her bad treatment at the hands of AMEX.  Seven thousand people who will now think twice when an AMEX offer comes in the post, or when they pass by the tables that are ubiquitous in every airport and mall.  Every one of them will remember the ordeal Kate suffered – almost as if Kate were a close friend.

Does AMEX know that Kate went nuclear?  Almost certainly not.  They didn’t make any attempt to contact her after her outburst, so it’s fairly certain that this flew well underneath their radar.  But the damage to AMEX’s reputation is quantifiable: Kate is simply too hyperconnected to be ignored, or mistreated.  And that’s the world we’re all heading into.  As we all grow more and more connected, as we each individually reach thousands of others, slights against any one of us have a way of amplifying into enormous events, the kinds of mistakes that could, if repeated, bring a business to its knees.  AMEX, in its ignorant bliss, has no idea that it has shot itself in the foot.

While Kate expressed her extreme dissatisfaction with AMEX, its own marketing arm was busily cooking up a scheme to harness Twitter.  Its Open Forum Pulse website shows you tweets from small businesses around the world.  It’s ironic, isn’t it?  AMEX builds a website to show us what others are saying on Twitter, all the while ignoring what’s being said about it.  Just like VAustralia.  Perhaps that’s simply the way Big Business is going to play the social media revolution – like complete idiots.  You have an opportunity to learn from their mistakes.

There is a whole world out there engaging in conversation about you.  You need to be able to recognize that.  There are tools out there – like PeopleBrowsr – which make it easy for you to monitor those conversations.  You’ll need to think through a strategy which allows you to recognize and promote those positive conversations, while – perhaps more importantly – keeping an eye on the negative conversations.  An upset customer should be serviced before they go nuclear; these kinds of accidents don’t need to happen.  But you’ll need to be proactive in your listening.  Customers will no longer come to you to talk about you or your business.
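
For the technically inclined, the shape of such a listening post is simple, even without a dedicated tool.  This sketch polls a search service for mentions of your brand and flags the likely-angry ones for human follow-up; the endpoint URL and the response fields are placeholders, not any particular service’s API:

```python
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://example.com/search.json?q={query}"  # placeholder endpoint

# Crude triage: posts containing these words get urgent human attention.
ANGRY_WORDS = {"rude", "declined", "worst", "never again", "crap"}

def fetch_mentions(brand):
    """Fetch recent public posts mentioning the brand (response schema assumed)."""
    url = SEARCH_URL.format(query=urllib.parse.quote(brand))
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["results"]  # assumed: [{"user": ..., "text": ...}]

def triage(posts):
    """Split mentions into those needing urgent follow-up and the rest."""
    urgent, routine = [], []
    for post in posts:
        angry = any(w in post["text"].lower() for w in ANGRY_WORDS)
        (urgent if angry else routine).append(post)
    return urgent, routine

urgent, routine = triage(fetch_mentions("VAustralia"))
for post in urgent:
    print("follow up with", post["user"], "->", post["text"][:60])
```

Keyword matching is a blunt instrument – a human still has to read the flagged posts – but even this much would have caught both John’s praise and Kate’s fury.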

III.  Breaking Out

The first step in any social media strategy for business is to embrace the medium.  Many businesses ban social media from their corporate networks, seeing them as a drain on time and attention.  Which is, in essence, saying that you don’t trust your own employees.  That you’re willing to infantilize them by blocking their network access.  This won’t work.  ‘Smartphones’ – that is, mobiles which have big screens, broadband connections, and full web browsers – have become increasingly popular in Australia.  Perhaps one third of all mobile handsets now qualify as smartphones.  Apple’s iPhone is simply the most visible of these devices, but they’re sold by many manufacturers, and, within a few years, they’ll be entirely pervasive: every mobile will be a smartphone.  A smartphone can access a social network just as easily – often more easily – than a desktop web browser.  Your employees have access to social networks all day long, unless you ask them to leave their mobiles at the front desk.

Just as we expect that employees won’t spend their days sending text messages to their friends, so an employer can expect that employees are sensible enough to regulate their own net usage.  A ‘net nanny’ is not required.  Mutual respect is.  Yes, the network is a powerful thing – it can be used to spread rumor and innuendo, to promote or undermine – but employees understand this.  We all use the network at home.  We know what it’s good for.  Bringing it into the office requires some common sense, and perhaps a few guidelines.  The ABC recently released its own guidelines for social media, and they’re a brilliant example of the parsimony and common sense which need to underwrite all of our business efforts online.  Here they are:

• do not mix professional and personal in ways likely to bring the ABC into disrepute,

• do not undermine your effectiveness at work,

• do not imply ABC endorsement of personal views, and

• do not disclose confidential information obtained at work.

There’s nothing hard about this list – for either employer or employee – yet it tells everyone exactly where they stand and what’s expected of them.  Employers are expected to trust their employees.  Employees are expected to reciprocate that trust by acting responsibly.  All in all, a very adult relationship.

Once that adult relationship has been established around social media, you have a unique opportunity to let your employees become your eyes and ears online.  Most small to medium-sized businesses have neither the staff nor the resources to dedicate a specific individual to social media issues.  In fact, that’s not actually a good idea.  When things ‘hot up’ for your business, any single individual charged with handling all things social media will quickly overload, with too much coming in through too many channels simultaneously.  That means something will get overlooked.  Something will get dropped.  And a potential nuclear event – something that could be defused or forestalled if responded to in a timely manner – will slip through the cracks.

Social media isn’t a one-person job.  It’s a job for the entire organization.  You need to give your employees permission to be out there on Facebook, on Twitter, on the blogs and in the net’s weirder corners – wherever their searches might lead them.  You need to charge them with the responsibility of being proactive, to go out there and hunt down the conversations of importance to you and your business.  Of course, they should be polite, and only offer help where it is needed, but, if they can do that, you will increase your reach and your presence immeasurably.  And you will have done it without spending a dime.

Those of you with a background in marketing have just broken out in a cold sweat.  This is nothing like what they taught you at university, nothing like what you learned on the job.  That’s the truth of it.  And what you learned on the job is precisely what VAustralia and AMEX are doing right now – failing, completely and utterly.  But, you’re thinking, what about message discipline?  How can we have that many people speaking for the organization?  Won’t it be chaos?

The answer, in short, is yes.  It will be chaos.  But not in a bad way.  You’ll have your own army out there, working for you.  Employees will know enough to know when they can speak for the organization, and when they should be silent.  (If they don’t know, they’ll learn quickly.)  Will it be messy?  Probably.  But the world of social media is not neat.  It is not based on image and marketing and presentation.  It is based on authenticity, on relationships that are established and which develop through time.  It is not something that can be bought or sold like an ad campaign.  It is, instead, something more akin to friendship – requiring time and tending and more than a little bit of love.

This means that employees will need some time to spend online, probably a few minutes, several times a day, to keep an eye on things.  To keep watch.  To make sure a simmering pot doesn’t suddenly boil over.

That’s half of it.  The other half is how you use social media to reach out.  Many companies set up Twitter and Facebook accounts and use them to send useless, spam-like messages to anyone who cares to listen.  Please don’t do this.  Social media is not about advertising.  In fact, it’s anti-advertising.  Social media is an opportunity to connect.  If you’re a furniture maker, for example, perhaps you’d like to have a public conversation with designers and homeowners about the art and business of making furniture.  Social media is precisely where you get to show off the expertise which keeps you in business – whatever that might be.  Lawyers can talk about law, accountants about accounting, and printers about printing.  Business, especially small business, is all about passion, and social media is a passion amplifier.  Let your passions show and people will respond.  Some of them will become customers.

So please, when you leave here today, set up those Facebook and Twitter accounts.  But when you’ve done that, step back and have a think.  Ask yourself, “How can I represent my business in a way that invites conversation?”  Once you’ve answered that, you’ve also answered the other important question – how do you translate that conversation into business?  Without the conversation you’ve got nothing.  But, once that conversation has begun, you have everything you need.

Those are the basics.  Everything else you’ll learn as you go along.  Social media isn’t difficult, though it takes time to master.  Just like any relationship, you’ll get out of it what you put into it.  And it isn’t going away.  It’s not a fad.  It’s the new way of doing business.  The efforts you make today will, in short order, reward you a hundred-fold.  That’s the promise of the network: it will bring you success.

Sharing Power (Aussie Rules)

I: Family Affairs

In the US state of North Carolina, the New York Times reports, an interesting experiment has been in progress since the first of February. The “Birds and Bees Text Line” invites teenagers with any question relating to sex or the mysteries of dating to SMS their question to a phone number. That number connects these teenagers to an on-duty adult at the Adolescent Pregnancy Prevention Campaign. Within 24 hours, the teenager gets a reply to their text. The questions range from the run-of-the-mill – “When is a person not a virgin anymore?” – through the unusual – “If you have sex underwater do u need a condom?” – to the utterly heart-rending – “Hey, I’m preg and don’t know how 2 tell my parents. Can you help?”

The Birds and Bees Text Line is a response to the slow rise in teenage pregnancies in North Carolina since the rate reached its lowest ebb in 2003. Teenagers – who are given state-mandated abstinence-only sex education in school – now have access to another resource, unmediated by teachers or parents, to prevent another generation of teenage pregnancies. Although it’s early days yet, the response to the program has been positive. Teenagers are using the Birds and Bees Text Line.

It is precisely because the Birds and Bees Text Line is unmediated by parental control that it has earned the ire of the more conservative elements in North Carolina. Bill Brooks, president of the North Carolina Family Policy Council, a conservative group, complained to the Times about the lack of oversight. “If I couldn’t control access to this service, I’d turn off the texting service. When it comes to the Internet, parents are advised to put blockers on their computer and keep it in a central place in the home. But kids can have access to this on their cell phones when they’re away from parental influence – and it can’t be controlled.”

If I’d stuffed words into a straw man’s mouth, I couldn’t have come up with a better summation of the situation we’re all in right now: young and old, rich and poor, liberal and conservative. There are certain points where it becomes particularly obvious, such as with the Birds and Bees Text Line, but this example simply amplifies our sense of the present as a very strange place, an undiscovered country that we’ve all suddenly been thrust into. Conservatives naturally react conservatively, seeking to preserve what has worked in the past; Bill Brooks speaks for a large cohort of people who feel increasingly lost in this bewildering present.

Let us assume, for a moment, that conservatism were in the ascendant (though this is clearly not the case in the United States, one could make a good argument that the Rudd Government is, in many ways, more conservative than its predecessor). Let us presume that Bill Brooks and the people for whom he speaks could have the Birds and Bees Text Line shut down. Would that, then, be the end of it? Would we have stuffed the genie back into the bottle? The answer, unquestionably, is no.

Everyone who has used or even heard of the Birds and Bees Text Line would be familiar with what it does and how it works. Once demonstrated, it becomes much easier to reproduce. It would be relatively straightforward to take the same functions performed by the Birds and Bees Text Line and “crowdsource” them, sharing the load across any number of dedicated volunteers who might, through some clever software, automate most of the tasks needed to distribute messages throughout the “cloud” of volunteers. Even if it took a small amount of money to set up and get going, that kind of money would be available from donors who feel that teenage sexual education is a worthwhile thing.
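The ‘clever software’ needn’t even be especially clever. Here is a hypothetical sketch of the heart of such a system – incoming questions handed out, round-robin, to whichever volunteers are on duty. The names and structure are invented for illustration; this is not the Campaign’s actual software.

# Hypothetical sketch of a crowdsourced text line: each incoming
# question is assigned, round-robin, to the next on-duty volunteer.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Volunteer:
    name: str
    inbox: list = field(default_factory=list)

class TextLine:
    def __init__(self, volunteers):
        self.on_duty = deque(volunteers)  # rotates as questions arrive

    def receive(self, sender, question):
        """Hand the question to the next volunteer in the rotation."""
        volunteer = self.on_duty[0]
        self.on_duty.rotate(-1)  # move that volunteer to the back of the queue
        volunteer.inbox.append((sender, question))
        return volunteer.name

line = TextLine([Volunteer("Alice"), Volunteer("Bob"), Volunteer("Chandra")])
assigned = line.receive("+15550001", "When is a person not a virgin anymore?")
print(f"Routed to {assigned}")

Everything else – the SMS gateway, the vetting and training of volunteers, the 24-hour reply window – is plumbing and policy layered on top of a loop this simple.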

In other words, the same sort of engine which powers Wikipedia can be put to work across a number of different “platforms”. The power of sharing allows individuals to come together in great “clouds” of activity, and allows them to focus their activity around a single task. It could be an encyclopedia, or it could be providing reliable and judgment-free information about sexuality to teenagers. The form matters not at all: what matters is that it’s happening, all around us, everywhere throughout the world.

The cloud, this new thing, this is really what has Bill Brooks scared, because it is, quite literally, ‘out of control’. It arises naturally out of the human condition of ‘hyperconnection’. We are so much better connected than we were even a decade ago, and this connectivity breeds new capabilities. The first of these capabilities is the pooling and sharing of knowledge – ‘hyperintelligence’. Consider: everyone who reads Wikipedia is potentially as smart as the smartest person who’s written an article in Wikipedia. Wikipedia has effectively banished ignorance born of want of knowledge. The Birds and Bees Text Line is another form of hyperintelligence, connecting adults who have knowledge to teenagers in desperate need of it.

Hyperconnectivity also means that we can carefully watch one another, and learn from one another’s behaviors at the speed of light. This new capability – ‘hypermimesis’ – means that new behaviors, such as the Birds and Bees Text Line, can be seen and copied very quickly. Finally, hypermimesis means that communities of interest can form around particular behaviors – ‘clouds’ of potential. These communities range from the mundane to the arcane, and they are everywhere online. But only recently have they discovered that they can translate their community into doing, putting hyperintelligence to work for the benefit of the community. This is the methodology of the Adolescent Pregnancy Prevention Campaign. This is the methodology of Wikipedia. This is the methodology of Wikileaks, which seeks to provide a safe place for whistle-blowers who want to share the goods on those who attempt to defraud or censor or suppress. This is the methodology of ANONYMOUS, which seeks to expose Scientology as a ridiculous cult. How many more examples need to be listed before we admit that the rules have changed, that the smooth functioning of power has been terrifically interrupted by these other forces, now powers in their own right?

II: Affairs of State

Don’t expect a revolution. We will not see masses of hyperconnected individuals, storming the Winter Palaces of power. This is not a proletarian revolt. It is, instead, rather more subtle and complex. The entire nature of power has changed, as have the burdens of power. Power has always carried with it the ‘burden of omniscience’ – that is, those at the top of the hierarchy have to possess a complete knowledge of everything of importance happening everywhere under their control. Where they lose grasp of that knowledge, that’s the space where coups, palace revolutions and popular revolts take place.

This new power that flows from the cloud of hyperconnectivity carries a different burden, the ‘burden of connection’. In order to maintain the cloud, and our presence within it, we are beholden to it. We must maintain each of the social relationships, each of the informational relationships, each of the knowledge relationships and each of the mimetic relationships within the cloud. Without that constant activity, the cloud dissipates, evaporating into nothing at all.

This is not a particularly new phenomenon; Dunbar’s Number demonstrates that we are beholden to the ‘tribe’ of our peers, the roughly 150 individuals who can find a place in our heads. Before civilization, the cloud was the tribe. Should the members of a tribe interrupt the constant reinforcement of their social, informational, knowledge-based and mimetic relationships, the tribe would dissolve and disperse – as happens when a tribe grows beyond the confines of Dunbar’s Number.

In this hyperconnected era, we can pick and choose which of our human connections deserves reinforcement; the lines of that reinforcement shape the scope of our power. Studies of Japanese teenagers using mobiles and twenty-somethings on Facebook have shown that, most of the time, activity is directed toward a small circle of peers, perhaps six or seven others. This ‘co-presence’ is probably a modern echo of an ancient behavior, presumably related to the familial unit.

While we might desire to extend our power and capabilities through our networks of hyperconnections, the cost associated with such investments is very high. Time invested in a far-flung cloud is time lost to the networks closer to home. Yet individuals will nonetheless often dedicate themselves to some cause greater than themselves, drawn to a higher ideal despite the high price paid.

The Obama campaign proved an interesting example of the price of connectivity. During the Democratic primary in the state of New York (which Hillary Clinton was expected to win easily), so many individuals contacted the campaign through its website that the campaign quickly became overloaded by the number of connections it was expected to maintain. By election day, the campaign staff in New York had retreated from the web, back to using mobiles. They had detached from the ‘cloud’ connectivity they had used the web to foster, focusing instead on the older model of six or seven individuals in co-present connection. The enormous cloud of power which could have been put to work in New York lay dormant, unorganized, talking to itself through the Obama website, but effectively disconnected from the Obama campaign.

For each of us, connectivity carries a high price. For every organization which attempts to harness hyperconnectivity, the price is even higher. With very few exceptions, organizations are structured along hierarchical lines. Power flows from the bottom to the top. Not only does this create the ‘burden of omniscience’ at the highest levels of the organization, it also fundamentally mismatches the flows of power in the cloud. When the hierarchy comes into contact with an energized cloud, the ‘discharge’ from cloud to hierarchy can completely overload the hierarchy. That’s the power of hyperconnectivity.

Another example from the Obama campaign demonstrates this power. Project Houdini was touted by the campaign as a system which would let its grassroots funnel their get-out-the-vote (GOTV) results into a centralized database, which could then be used to track down individuals who hadn’t voted, in order to offer them assistance in getting to their local polling station. The campaign grassroots received training in Project Houdini, went through a field test of the software and procedures, then waited for election day. On election day, Project Houdini lasted no more than 15 minutes before it crashed under the incredible number of empowered individuals attempting to plug data into it. Although months in the making, Project Houdini proved that a centralized, hierarchical system for campaign management couldn’t actually cope with the ‘cloud’ of grassroots organizers.
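The arithmetic of that failure is worth making explicit. Here is a toy calculation – the figures are invented for illustration, not drawn from Project Houdini – of what happens when a centralized system with fixed capacity meets a cloud-sized spike:

# Toy illustration of a hierarchy meeting a cloud: a central system
# that can absorb a fixed number of reports per minute is hit by a
# spike far beyond its capacity. All figures here are invented.
CAPACITY_PER_MINUTE = 500   # what the central database can ingest
reports_arriving = 50_000   # the energized cloud, all at once

accepted = min(reports_arriving, CAPACITY_PER_MINUTE)
dropped = reports_arriving - accepted
print(f"Minute one: {accepted} reports accepted, {dropped:,} lost or retrying.")
# The retries pile onto minute two, and the system falls over -
# the 'discharge' from cloud to hierarchy described above.

Fifteen minutes of that, and even a system months in the making gives up.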

In the 21st century we now have two oppositional methods of organization: the hierarchy and the cloud. Each carries its own costs and its own strengths. Neither has yet proven wholly better than the other. One could argue that both have their own roles in the future, and that we’ll spend a lot of time learning which works best in a given situation. What we have already learned is that these organizational types are mostly incompatible: unless very specific steps are taken, the cloud overpowers the hierarchy, or the hierarchy dissipates the cloud. We need to think about the interfaces that can connect one to the other. That’s the area that all organizations – and very specifically, non-profit organizations – will be working through in the coming years. Learning how to harness the power of the cloud will mark the difference between a modest success and an overwhelming one. Yet working with the cloud will present organizational challenges of an unprecedented order. There is no way that any hierarchy can work with a cloud without being fundamentally changed by the experience.

III: Affair de Coeur

All organizations are now confronted with two utterly divergent methodologies for organizing their activities: the tower and the cloud. The tower seeks to organize everything in hierarchies, control information flows, and keep the power heading from bottom to top. The cloud isn’t formally organized, pools its information resources, and has no center of power. Despite all of its obvious weaknesses, the cloud can still transform itself into a formidable power, capable of overwhelming the tower. To push the metaphor a little further, the cloud can become a storm.

How does this happen? What is it that turns a cloud into a storm? Jimmy Wales has said that the success of any language-variant version of Wikipedia comes down to the dedicated efforts of five individuals. Once he spies those five individuals hard at work in Pashto or Kazakh or Xhosa, he knows that edition of Wikipedia will become a success. In other words, five people have to take the lead, leading everyone else in the cloud with their dedication, their selflessness, and their openness. This number probably holds true in a cloud of any sort – find five like-minded individuals, and the transformation from cloud to storm will begin.

At the end of that transformation there is still no hierarchy. There are, instead, concentric circles of involvement. At the innermost, those five or more incredibly dedicated individuals; then a larger circle of a greater number, who work with that inner five as time and opportunity allow; and so on, outward, at decreasing levels of involvement, until we reach those who simply contribute a word or a grammatical change, and have no real connection with the inner circle, except in commonality of purpose. This is the model for Wikipedia, for Wikileaks, and for ANONYMOUS. This is the cloud model, fully actualized as a storm. At this point the storm can challenge any tower.

But the storm doesn’t have things all its own way; to present a challenge to a tower is to invite the full presentation of its own power, which is very rude, very physical, and potentially very deadly. Wikipedians at work on the Farsi version of the encyclopedia face arrest and persecution by Iran’s Revolutionary Guards and religious police. Just a few weeks ago, after the contents of the Australian government’s internet blacklist were posted to Wikileaks, the German government raided the home of the man who owns the domain name for Wikileaks in Germany. The tower still controls most of the power apparatus in the world, and that power can be used to squeeze any potential competitor.

But what happens when you try to squeeze a cloud? Effectively, nothing at all. Wikipedia has no head to decapitate. Jimmy Wales is an effective cheerleader and face for the press, but his presence isn’t strictly necessary. There are over 2000 Wikipedians who handle the day-to-day work. Locking all of them away, while possible, would only encourage further development in the cloud, as other individuals moved to fill their places. Moreover, any attempt to disrupt the cloud only makes the cloud more resilient. This has been demonstrated conclusively by the evolution of ‘darknets’ – private file-sharing networks – which grew up as the open, widely available file-sharing networks, such as Napster, were shut down by the copyright owners. Attacks on the cloud only improve the networks within the cloud, only make the leaders more dedicated, only increase the information and knowledge sharing within the cloud. Trying to disperse a storm only intensifies it.

These are not idle speculations; the tower will seek to contain the storm by any means necessary. The 21st century will increasingly look like a series of collisions between towers and storms. Each time the storm emerges triumphant, the tower will become more radical and determined in its efforts to disperse the storm, which will only result in a more energized and intensified storm. This is not a game that the tower can win by fighting. Only by opening up and adjusting itself to the structure of the cloud can the tower find any way forward.

What, then, is leadership in the cloud? It is not like leadership in the tower. It is not a position wrought from power, but authority in its other, and more primary meaning, ‘to be the master of’. Authority in the cloud is drawn from dedication, or, to use rather more precise language, love. Love is what holds the cloud together. People are attracted to the cloud because they are in love with the aim of the cloud. The cloud truly is an affair of the heart, and these affairs of the heart will be the engines that drive 21st century business, politics and community.

Author and pundit Clay Shirky has stated, “The internet is better at stopping things than starting them.” I reckon he’s wrong there: the internet is very good at starting things – particularly things that stop other things. Making the jump from an amorphous cloud of potentiality to a forceful storm requires the love of just five people. That’s not much to ask. If you can’t get that many people in love with your cause, it may not be worth pursuing.

Conclusion: Managing Your Affairs

All 21st century organizations need to recognize and adapt to the power of the cloud. It’s either that or face a death of a thousand cuts, the slow ebbing of power away from hierarchically-structured organizations as newer forms of organization supplant them. But it need not be this way. It need not be an either/or choice. It could be a future of and-and-and, where both forms continue to co-exist peacefully. But that will only come to pass if hierarchies recognize the power of the cloud.

This means you.

All of you have your own hierarchical organizations – because that’s how organizations have always been run. Yet each of you is surrounded by your own clouds: community organizations (both in the real world and online), bulletin boards, blogs, and all of the other Web2.0 supports for the sharing of connectivity, information, knowledge and power. You are already halfway invested in the cloud, whether or not you realize it. And the same is true for the people you serve, your customers and clients and interest groups. You can’t simply ignore the cloud.

How then should organizations proceed?

First recommendation: do not be scared of the cloud. It might be some time before you can come to love the cloud, or even trust it, but you must at least move to a place where you are not frightened by a constituency which uses the cloud to assert its own empowerment. Reacting out of fright will only lead to an arms race, a series of escalations in which your hierarchy attempts to contain the cloud, and the cloud – which is faster, smarter and more agile than you can ever hope to be – outwits you, again and again.

Second: like likes like. If you can permute your organization so that it looks more like the cloud, you’ll have an easier time working with the cloud. Case in point: because of ‘message discipline’, only a very few people are allowed to speak for an organization. Yet, because of the exponential growth in connectivity and Web2.0 technologies, everyone in your organization has more opportunities to speak for your organization than ever before. Can you release control over message discipline, and empower your organization to speak for itself, from any point of contact? Yes, this sounds dangerous, and yes, there are some dangers involved, but the cloud wants to be spoken to authentically, and authenticity has many competing voices, not a single monolithic tone.

Third, and finally, remember that we are all involved in a growth process. The cloud of last year is not the cloud of next year. The answers that satisfied a year ago are not the same answers that will satisfy a year from now. We are all booting up very quickly into an alternative form of social organization which is only just now spreading its wings and testing its worth. Beginnings are delicate times. The future will be shaped by actions in the present. This means there are enormous opportunities to extend the capabilities of existing organizations, simply by harnessing them to the changes underway. It also means that tragedies await those who fight the tide of times too single-mindedly. Our culture has already rounded the corner, and made the transition to the cloud. It remains to be seen which of our institutions and organizations can adapt themselves, and find their way forward into sharing power.