The Social Sense

I: On Top of the World

WebEarth.org image

I’ve always wanted to save the world.  When I was younger, and more messianic, I thought I might have to do it all myself.  As the world knocked sense into me, I began to see salvation as a shared project, a communal task.  I have always had a special vision for that project, one that came to me when I first started working in virtual reality, twenty years ago.  I knew that it would someday be possible for us to ‘see’ the entire world, to apprehend it as a whole.

Virtual reality, and computer visualization in general, is very good at revealing things that we can’t normally see, either because they’re too big, or we’re too large, or they’re too fast, or we’re too quick.  The problem of scale sits at the center of the human condition: man is the measure of all things.  But where that measuring rod falls short, leaving us unable to apprehend the totality of experience, we live in shadow, part of the truth forever beyond our grasp.

The computer has become microscope, telescope, ultra-high-speed and time-lapse camera.  Using little more than a sharpened needle, we can build atomic-force microscopes, feeling our way across the edges of individual atoms.  Using banks of supercomputers, we crunch through microwave data, painting a picture of the universe in its first microseconds.  We can simulate chemical reactions so fast we had always assumed them to be instantaneous.  And we can speed the ever-so-gradual movement of the continents, making them seem like a dance.

Twenty years ago, when this was more theoretical than commonplace, I realized that we would someday have systems to show us the Earth, just as it is, right in this moment.  I did what I could with the tools I had at my disposal to create something that pointed toward what I imagined, but I have this persistent habit of being ahead of the curve.  What I created – WebEarth – was a dim reflection of what I knew would one day be possible.

In the middle of 1995 I was invited to be a guest of honor at the Interactive Media Festival in Los Angeles.  The festival showcased a number of very high-end interactive projects, including experiments in digital evolution, artificial life, and one project that stopped me in my tracks, a work that changed everything for me.

On a 140cm television screen, I saw a visualization of Earth from space.  Next to the screen, I saw a trackball – inflated to the size of a beach ball.  I put my hand on the trackball and spun it around; the Earth visualization followed it, move for move.  That’s nice, I thought, but not really terrifically interesting.  There was a little console with a few buttons arrayed off to one side of the trackball.  When you pressed one of those buttons, you began to zoom in.  Nothing special there, but as you zoomed in, the image began to resolve itself, growing progressively more detailed as you dived down from outside the orbit of the Moon, landing at street level in Berlin, or Tokyo, or Los Angeles.

This was T_Vision, and if it all sounds somewhat unexceptional today, sixteen years ago it took a half-million-dollar graphics supercomputer to create the imagery drawn across that gigantic display, and a high-speed network link to keep it fed with all the real-time data integrated into its visualizations.  T_Vision could show you weather information from anywhere it had been installed, because each installation spoke to the others across the still-new-and-shiny Internet, sharing local data.  The goal was to have T_Vision installations in all of the major cities around the world, so that any T_Vision would be able to render a complete picture of the entire Earth, as it is, in the moment.

That never happened; half a million dollars per city was too big an ask.  But I knew that I’d seen my vision realized in T_Vision, and I expected that it would become the prototype for systems to follow.  I wrote about T_Vision in my book The Playful World, because I knew that these simulations of Earth would be profoundly important in the 21st century: they provide an ideal tool for understanding the impacts of our behavior.

Our biggest problems arise when we fail to foresee the long-term consequences of our actions.  Native Americans once considered ‘the seventh generation’ when meditating on their actions, but long-term planning is difficult in a world of ever-increasing human complexity.  So much depends on so much, everything interwoven into everything else, that it almost seems as though we have only two options: to stand frozen in a static moment which admits no growth, or, blithely ignorant, to charge ahead, and devil take the hindmost.

Two options, until today.  Because today we can pop Google Earth onto our computers or our mobiles and zoom down from space to the waters of Lake Crackenback.  We can integrate cloud cover and radar and rainfall.  And we can do this all on computers that cost just a few hundred dollars, connected to a global Internet with sensors near and far, bringing us every bit of data we might desire.

We have this today, but we live in the brief moment between the lightning and the thunder.  The tool has been given to us, but we have not yet learned how to use it, or what its use will mean.  This is where I want to begin today, because this is a truly new thing: we can see ourselves and our place within the world.  We were blind, but now can see.  In this light we can put to rights the mistakes we made while we lived in darkness.

 

II: All Together Now

A lot has transpired in the past sixteen years.  Computers double in speed or halve in cost every twenty-four months, so the computers of 2011 are roughly fifty times faster, and cost, in relative terms, a quarter the price.  Nearly everyone uses them in the office, and most homes have at least one, more often than not connected to high-speed broadband Internet, something that didn’t exist sixteen years ago.  Although this is all wonderful and has made modern life a lot more interesting, it’s nothing next to the real revolution that’s taken place.

In 1995, perhaps fifteen or twenty percent of Australians owned mobiles.  They were bulky, expensive to own, expensive to use, yet we couldn’t get enough of them.  By the time of my first visit to Australia, in 1997, just over half of all Australians owned mobiles.  A culture undergoes a bit of a sea-change when mobiles pass this tipping point.  This was proven during an evening I’d organized with friends at Sydney’s Darling Harbour.  Half of us met at the appointed place and time; the rest were nowhere to be found.  We could have waited for them to arrive, or we could have gone off on our own, fragmenting the party.  Instead we called, and told them to meet us at a pub on Oxford Street.  Problem solved.  It’s this simple social lubrication (no one is late anymore, just delayed) which makes mobiles intensely desirable.

In 2011, the mobile subscription rate in Australia is greater than 115%.  This figure seems ridiculous until you account for the number of individuals who have more than one mobile (one for work and one for personal use), or some other device – such as an iPad – that connects to wireless 3G broadband.  Children don’t get their first mobile until around grade 3 (or later), and a lot of seniors have skipped the mobile entirely.  But the broad swath of the population between 8 and 80 all have a mobile or two – or more.

Life in Australia is better for the mobile, but doesn’t hold a candle to its impact in the developing world.  From fishermen on the Kerala coast of India, to vegetable farmers in Kenya, to barbers in Pakistan, the mobile creates opportunities for every individual connected through it, opportunities which quickly translate into economic advantage.  Economists have definitively established a strong correlation between the aggregate connectivity of a nation and its growth.  Connected individuals earn more; so do connected nations.

Because the mobile means money, people have eagerly adopted it.  This is the real transformation over the last sixteen years.  Over that time we went from less than a hundred million mobile subscribers to somewhere in the range of six billion.  There are just under seven billion people on Earth, and even accounting for those of us who have more than one subscription, this means three quarters of all humanity now uses a mobile.  As in Australia, the youngest and the very oldest are exempt, but as we become a more urban civilization – over half of us now live in cities – the pace and coordination of urban life is set by the mobile.

 

III:  I, Spy

The lost iPad, found

We live in a world of mobile devices.  They’re in hand, tucked in a pocket, or tossed into a handbag, but sometimes we leave them behind.  At the end of a long business trip, on a late night flight back to Sydney, I left my iPad in the seatback pocket of an aircraft.  I didn’t discover this for eighteen hours, until I unpacked my bags and noted it had gone missing.  “Well, that’s it,” I thought.  “It’s gone for good.”  Then I remembered that Apple offers a feature on their iPhones and iPads, through their Me.com website, that lets you locate lost devices.  I figured I had nothing to lose, so I launched the site, waited a few moments, then found my iPad.  Not just the city, or the suburb, but down to the neighborhood and street and house – even the part of the house!  There it was, on Google’s high-resolution satellite imagery, phoning home.

What to do?  The neighborhood wasn’t all that good – next to Mount Druitt in Sydney’s ‘Wild West’ – so I didn’t fancy ringing the bell and asking politely.  Instead I phoned the police, who came by to take a report.  When they asked how I knew where my iPad was, I showed them the website.  They were gobsmacked.  In their perfect world, no thief can ever make off with anything, because the stolen item is telling its owner and the police about its every movement.

I used another feature of ‘Find my iPad’ to send a message to its display: ‘Hello, I’m lost!  Please return me for a reward.’  About 36 hours later I received an email from the fellow who had ended up with my iPad (his mother cleans aircraft), offering to return it.  The next day, in a scene straight from a Cold War-era spy movie, we met on a street corner in Ultimo.  He handed me my iPad, I thanked him and handed him a reward, then we each went our separate ways.

Somewhere in the middle of this drama, I realized that I possessed the first of many intelligent, trackable devices to come.  In the beginning they’ll look like mobiles, like tablets and computers, but eventually they’ll look like absolutely anything you like.  This is the kind of high technology favored by ‘Q’ in James Bond movies and by the CIA in covert operations, but it has always been expensive.  Now it’s cheap and easy to use and tiny.

I tend to invent things after I have that kind of brainwave, so I immediately dreamed up a ‘smart’ luggage tag that you’d clip onto your baggage when you check in at the terminal.  If your baggage gets lost, it can ‘phone home’ to let you know just where it’s ended up – information you can give to your airline.  Or you can put one into your car, so you can figure out just where you left it in that vast parking lot.  Or hang one on your child as you go out into a crowded public place.  A group of very smart Sydney engineers had already shown me something similar – Tingo Family – which uses a smartphone’s tracking capabilities to provide just that sort of service.  But smartphones are expensive, and overkill; couldn’t this cost a lot less?

I did some research on my favorite geek websites, and found that I could build something similar from off-the-shelf parts for about $150.  That sounds expensive, but that’s because I’m purchasing in single-unit quantities.  When you purchase 10,000 of something electronic, they don’t cost nearly as much.  I’m sure something could be put together for less than fifty dollars that would have the two necessary components: a GPS receiver, and a 3G mobile broadband connection.  With those two pieces, it becomes possible to track anything, anywhere you can get a signal – which, in 2011, is most of the planet.
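To make that concrete, here’s a minimal sketch, in Python, of the logic such a tag would run.  The GPS driver and the reporting endpoint below are hypothetical stand-ins for illustration, not any real product’s API:

    # A sketch of a 'phone home' tracker: read a GPS fix, report it over
    # the mobile broadband link, sleep, repeat.  read_gps() and the
    # tracker.example.com endpoint are illustrative placeholders.
    import json
    import time
    import urllib.request

    REPORT_URL = "https://tracker.example.com/report"    # hypothetical endpoint

    def read_gps():
        """Stand-in for the GPS receiver driver: returns (latitude, longitude)."""
        return -33.8675, 151.2070                         # placeholder fix

    def report_position(lat, lon):
        """Send the current fix over the mobile broadband connection."""
        payload = json.dumps({"lat": lat, "lon": lon, "ts": time.time()}).encode()
        req = urllib.request.Request(REPORT_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=30)

    while True:                         # wake, report, sleep to spare the battery
        latitude, longitude = read_gps()
        report_position(latitude, longitude)
        time.sleep(15 * 60)             # one report every fifteen minutes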

To track something – and talk to it – costs fifty dollars today, but, like clockwork, every twenty-four months that cost falls by fifty percent.  In 2013, it’s $25.00, in 2015 it’s $12.50, and so on, so that ten years from now it’s only a bit more than a dollar.  Eventually it becomes almost free.
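The arithmetic is easy enough to check with a few lines of Python, assuming a clean halving every two years from that fifty-dollar starting point:

    # Fifty dollars, halving every 24 months, across a decade.
    cost = 50.0
    for year in range(2011, 2022, 2):
        print(year, round(cost, 2))
        cost /= 2
    # prints 50.0, 25.0, 12.5, 6.25, 3.12 and finally 1.56 for 2021 –
    # a bit more than a dollar, ten years out.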

This is the world we will be living in.  Anything of any importance to us – whether expensive or cheap as chips – will be sensing, listening, and responding.  Everything will be aware of where it is, and where it should be.  Everything will be aware of the temperature, the humidity, the light level, the altitude, its energy consumption, and the other things around it which are also aware of the temperature, humidity, light level, altitude, energy consumption, and other things around them.

This is the sensor revolution, which is sometimes called ‘the Web of things’ or ‘Web3.0’.  We can see it coming, even if we can’t quite see what happens once it comes.  We didn’t understand that mobiles would help poor people earn more money until everyone, everywhere got a mobile.  These things aren’t easy to predict in advance, because they are the product of complex interactions between people and circumstances.  Even so, we can start to see how all of this information provided by our things feeds into our most innate human characteristic – the need to share.

 

IV: Overshare

Last Thursday I was invited to the launch of the ‘Imagine Cup’, a Microsoft-sponsored contest where students around the world use technology to develop solutions for the big problems facing us.  At the event I met the winners of the 2008 Imagine Cup, two Australians – Ed Hooper and Long Zheng.  They told me about their winning entry, Project SOAK.  That stands for Smart Operational Agriculture Kit.  It’s essentially a package of networked sensors and software that a farmer can use to know precisely when land needs water, and where.  Developed in the heart of the drought, Project SOAK is an innovative answer to the permanent Australian problem of water conservation.

I asked them how much these sensors cost, back in 2008.  To measure temperature, rainfall, dam depth, humidity, salinity and moisture would have cost around fifty dollars.  Fifty dollars in 2008 is about one dollar in 2020.  At that price point, a large farm, with thousands of hectares, could be covered with SOAK sensors for just a few tens of thousands of dollars, but would save the farmer water, time, and money for many years to come.  The farmer would be able to spread eyes over all of their land, and the computer, eternally vigilant, would help the farmer grind through the mostly-boring data spat out by these thousands of eyes.

That’s a snapshot of the world of 2020, a snapshot that will be repeated countless times, as sensors proliferate throughout every part of our planet touched by human beings: our land and our cities and our vehicles and our bodies.  Everything will have something listening, watching, reporting and responding.

We can already do this, even without all of this cheap sensing, because our connectivity creates a platform where we as ‘human sensors’ can share the results of our observations.  Just a few weeks ago, a web-based project known as ‘Safecast’ launched.  Dedicated to observing and recording radiation levels around the Fukushima nuclear reactor – which melted down following the March 11, 2011 earthquake and tsunami – Safecast invites individuals throughout Japan to take regular readings of the ‘background’ radiation, then post them to the Safecast website.  These results are ‘mashed up’ with Google Maps, and presented for anyone to explore, both as current readings and as a historical record of radiation levels over time in a particular area.
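Here is a sketch of the kind of record such a crowdsourced map traffics in.  The field names and submission endpoint are illustrative only, not Safecast’s actual API:

    # One citizen reading: where, how much, and when – posted to an
    # aggregator that plots current and historical levels on a map.
    import json
    import urllib.request
    from datetime import datetime, timezone

    def submit_reading(lat, lon, microsieverts_per_hour,
                       url="https://radiation-map.example.org/readings"):
        reading = {
            "lat": lat,
            "lon": lon,
            "usv_per_hour": microsieverts_per_hour,        # dosimeter reading
            "taken_at": datetime.now(timezone.utc).isoformat(),
        }
        req = urllib.request.Request(url, data=json.dumps(reading).encode(),
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=30)

    # e.g. submit_reading(37.42, 141.03, 0.32)    # a hypothetical reading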

Safecast exists because the Japanese government has failed to provide this information to its own people (perhaps to avoid unduly alarming them); it fills the gap in public knowledge by ‘crowdsourcing’ the sensing task across thousands of willing participants.  People, armed with radiation dosimeters and Geiger counters, are the sensors.  People, typing their observations into computers, are the network.  Everything that we will soon be able to do automatically we can already do by hand, if there is sufficient need.

Necessity is the mother of invention; need is the driver for innovation.  In Japan they collect data about soil and water radiation, to save themselves from cancer.  In the United States, human sensors collect data about RBT checkpoints, to save themselves from arrest.  You can purchase a smartphone app that allows anyone to post the location of an RBT checkpoint to a crowdsourced database.  Anyone else with the app can launch it and see how to avoid being caught drink driving.  Although we may find the morality disagreeable, the need is there, and an army of human sensors set to work to meet that need.

Now that we’re all connected, we’ve found that connectivity is more than just keeping in touch with family, friends and co-workers.  It brings an expanded awareness, as each of us shares the points of interest peculiar to our tastes.  In the beginning, we shared bad jokes, cute pictures of kittens, and chain letters.  But we’ve grown up, and as we’ve matured, our sharing has taken on a focus and depth that gives it real power: people share what they know to fill the articles of Wikipedia, read their counters and plug results into Safecast, spot the coppers and share that around too – as they did in the central London riots in February.

It’s uncontrollable, it’s ungovernable, but all this sharing serves a need.  This is all human potential that’s been bottled up, constrained by the lack of connectivity across the planet.  Now that this barrier is well and truly down, we have unprecedented capability to pool our eyes, ears and hands, putting ourselves to work toward whatever ends we might consider appropriate.

Let’s give that some thought.

 

V:  Mother Birth

To recap: six billion of us now have mobiles, keeping us in close connection with one another.  This connectivity creates a platform for whatever endeavors we might choose to pursue, from the meaningless, to the momentary, to the significant and permanent.  We are human sensors, ready to observe and report upon anything we find important; chances are that if we find something important, others will as well.

All of that human activity is colliding head-on with the sensor revolution, as electronics become smaller and smarter, leading eventually to a predicted ‘smart dust’ where sensors become a ubiquitous feature of the environment.  We are about to gain a certain quality of omnipresence; where our sensors are, our minds will follow.   We are everywhere connected, and soon will be everywhere aware.

This awareness grants us the ability to see the consequences of our activities.  We can understand why burning or digging or watering here has an effect there, because, even in a complex ecosystem, we can trace the delicate connections that outline our actions.  The computer, with its infinitely patient and infinitely deep memory, is an important partner in this task, because it helps us to detect and illustrate the correlations that become a new and broader understanding of ourselves.

This is not something restricted to the biggest and grandest challenges facing us.  It begins more humbly and approachably with the minutiae of everyday life: driving the car, using the dishwasher, or organizing a ski trip.  These activities no longer exist in isolation, but are recorded and measured and compared: could that drive be shorter, that wash cooler, that ski trip more sustainable?  This transition is being driven less by altruism than by economics.  Global sustainability means preserving the planet, but individual sustainability means a higher quality of life with lower resource utilization.  As that point becomes clear – and once there is sufficient awareness infrastructure to support it – sustainability becomes another ‘on tap’ feature of the environment, much as electricity and connectivity are today.

This will not be driven by top-down mandates.  Although our government is making moves toward sustainability, market forces will drive us there as the elements of the environment become ever more precious.  Intelligence is a fair substitute for almost any other resource – up to a point.  A car won’t run on IQ alone, but it will go a lot further on a tank of petrol if intelligently designed.

We can do more than act as sensors and share data:  we can share our ideas, our frameworks and solutions for sustainability.  We have the connectivity – any innovation can spread across the entire planet in a matter of seconds.  This means that six billion minds could be sharing – should be sharing – every tip, every insight, every brainwave and invention – so that the rest of us can have a go, see if it works, then share the results, so others can learn from our experiences. We have a platform for incredibly rapid learning, something that can springboard us into new ways of working.  It works for fishermen in India and farmers in Africa, so why not for us?

Australia is among the least sustainable nations on the planet.  Our vast per-person carbon footprint, our continual overuse of our limited water supplies, and our refusal to employ the bounty of renewable resources which nature has provided us make our country a bit of an embarrassment.  We have created a nation that is, in most respects, the envy of the world.  But we have built that nation on unsustainable practices; it is a house built on sand, and within a generation or two it will stand no longer.

Australia is a smart nation, intelligent and well-connected.  There’s no problem here we cannot solve, no reach toward sustainability which is beyond our grasp.  We now have the tools, all we need is the compelling reason to think anew, revisiting everything we know with fresh eyes, eyes aided by many others, everywhere, and many sensors, everywhere, all helping us to understand, and from that understanding, to act, and from those actions, to learn, and from that learning, to share.

We are the sharing species; the reason we can even worry about a sustainable environment is because our sharing made us so successful that seven billion of us have begun to overwhelm the natural world.  This sharing is now opening an entirely new and unexpected realm, where we put our mobiles to our ears and put our heads together to have a good think, to share a thought, or tell a yarn.  Same as it ever was, but completely different, because this is no tribe, or small town, or neighborhood, but everybody, everywhere, all together now.  Where we go from here is entirely in our own hands.

The Soul of Web 2.0

Introduction: In The Beginning

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus Pagemaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful.  But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’.  Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century.  Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web.  Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real.  I was one of them.  From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu.  This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web.  The Web wasn’t as functional as Xanadu – copyright management was a solved problem within Xanadu, whereas on the Web it continues to bedevil us – and Xanadu’s links were two-way affairs; you could follow a link back from its destination to its source.  But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died.  The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released.  ‘The Perfect is the Enemy of the Good’, and nowhere is it clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s.  When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits to 56 kilobits.  That’s the line for all of the traffic heading from one coast to the other.  I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, a more-than-million-fold increase.  And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand.  And I wasn’t impressed.  In July 1993 very little content existed for the Web – just a handful of sites, mostly academic.  Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense.  I walked away from the computer that July afternoon wanting more.  Hypertext systems I’d seen before.  What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy.  Instead of a handful of sites, there were now hundreds.  There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site on the list.  By Friday evening I was finished.  I had surfed the entire Web.  It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993.  Then things began to explode.

From October on I became a Web evangelist.  My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic.  That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities.  As each familiar face walked in the door at the Salon, I’d walk up and steer them over to my computer.  “What’s something you’re interested in?” I’d ask.  They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica” and I’d go to the newly-created category index of the Web – known as Yahoo!, and still running out of a small lab on the Stanford University campus – type in their interest, and up would come at least a few hits.  I’d click on one, watch the page load, and let them read.  “Wow!” they’d say.  “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos.  All I did was hook people by their own interests.  What happened in January 1994 in San Francisco would happen throughout the world in January 1995 and January 1996, and is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented.  The individual, with their needs, their passions, their opinions, their desires and their goals, is always paramount.  We tend to forget this, or overlook it, or just plain ignore it.  We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate.  It’s not that we should ignore these considerations, but they are always secondary.  The Web is a ground for being.  Individuals do not present themselves as receptacles to be filled.  They are souls looking to be fulfilled.  This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web.  I’ll now break some of these lessons down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication.  I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late night missives from insomniac friends, that sort of thing.  As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared.  The sharing instinct is innate and immediate.  We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it.  We’ve always known this; it’s part of being human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another.  It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby.  Everyone carries that hundred and fifty around inside of them.  Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with.  It’s automatic, requires no thought.  We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing.  Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten.  We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us.  It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does that work with creations that may be Web-based, or similarly constrained?  We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter.  You have to do more than request sharing.  You have to think through the entire goal of sharing, from the user’s perspective.  Are they sharing this because it’s interesting?  Are they sharing this because they want company?  Are they sharing this because it’s a competition or a contest or collaborative?  Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project.  What is it about the design of your work that excites them to share it with others?  Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential?  In other words, is there space only for one, or is there room to spread the word?  Why would anyone want to share your work?  You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question.  How will your work be shared?

Your works do not exist in isolation.  They are part of a continuum of other works.  Where does your work fit into that continuum?  How do the instructor and student approach that work?  Is it a top-down mandate?  Or is it something that filters up from below as word-of-mouth spreads?  How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected.  Is it simply via email – do all the students have email addresses?  Do they know the email addresses of their friends?  Or do you want your work shared via SMS?  A QRCode, perhaps?  Or Facebook or Twitter or, well, who knows?  And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today.  It becomes painfully obvious when it’s been overlooked.  For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend.  There was simply no way to do that.  (I don’t know if this has changed recently.)  That made the iPad version of The Australian significantly less functional than its website version – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes.  The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts.  Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd.  Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this.  Everyone’s there, but no one is wholly aware of anyone else’s presence.  You might see an edit to a page, or a new post on the discussion for a particular topic, but that’s as close as people come to one another.  Most of the connecting for the Wikipedians – the folks who behind-the-scenes make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose.  These are the social networks: Facebook, MySpace, LinkedIn, and so on.  In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication.  But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless.  Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting.  There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths.  Where you can poll your friends on Facebook, on Twitter you can poll a planet.  How do I solve this problem?  Where should I eat dinner tonight?  What’s going on over there?  These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles.  It’s not Facebook-versus-Twitter; it is not tight connections versus loose connections.  It’s a bit of both.  Where does your work benefit from a tight collective of connected individuals?  Is it some sort of group problem-solving?  A creative activity that really comes into its own when a whole band of people play together?  Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms?  If, as you work, you constantly think of your friends, that’s the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog, or playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’ – that’s the kind of task which benefits from loose connectivity.  Not every project will need both kinds of connecting, but almost every one will benefit from one or the other.  We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed.  (We are also more likely to reinforce each others’ prejudices and preconceptions, but that’s another matter of longstanding which technology cannot help but amplify.)  Life is meaningful because we, together, give it meaning.  Life is bearable because we, together, bear the load for one another.  Human life is human connection.

The Web today is all about connecting.  That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it.  So how do your projects allow your users to connect?  Does your work leave them alone, helpless, friendless, and lonely?  Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic?  Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica.  That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun.  It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute.  There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy.  (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.)  By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of what its contributors have to offer.  For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook.  This is less because of Wikipedia-as-a-resource than because we all share a sense of pride-in-ownership of Wikipedia.  Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles.  Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us.  This is a powerful logic, an attraction which transcends the rational.  People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine a time will come when Wikipedia will be complete.  If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia.  Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than in 2005.  With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights.  Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth.  It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought.  It will mean that we have captured the better part of human knowledge in a form accessible to all.  That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’.  It is a work-in-progress.  Google understands this and releases interminable beta versions of every product.  More than this, it means that nothing needs to offer all the answers.  I would suggest that nothing should offer all the answers.  Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you.  It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility.  There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done.  This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system.  The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up.  TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments.  RateMyProfessors.com is the holy terror of the academy in the United States.  Each of these websites has had to design systems which allow users to self-regulate peer contributions.  In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags a contribution for later moderation.  Wikipedia promulgated a directive that strongly encourages contributors to provide a footnote linking to supporting material.  TripAdvisor gives anonymous reviewers a lower ranking.  eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade.  Each of these is a social solution to a social problem.
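To make the pattern concrete, here is a minimal sketch of the simplest of those mechanisms – the ‘report this post’ button – written in Python.  The threshold and field names are illustrative, not drawn from any particular site:

    # A contribution that readers can flag; enough flags and it is hidden
    # pending a moderator's decision.
    from dataclasses import dataclass, field

    REPORT_THRESHOLD = 3        # complaints required before a human must look

    @dataclass
    class Post:
        author: str
        body: str
        reports: set = field(default_factory=set)
        hidden: bool = False

        def report(self, reporter):
            """Record a reader's complaint, one per reader."""
            self.reports.add(reporter)
            if len(self.reports) >= REPORT_THRESHOLD:
                self.hidden = True          # now awaits moderation

    post = Post(author="anon", body="a suspiciously glowing hotel review")
    for reader in ("alice", "bob", "carol"):
        post.report(reader)
    print(post.hidden)                      # True – flagged for review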

Web2.0 is not a technology.  It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil.  It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention.  Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a Web2.0 concept that should guide you in your design work is iteration.  Nothing is ever complete, nor ever perfect.  The perfect is the enemy of the good, so if you wait for perfection, you will never release.  Instead, watch your users, see if they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work.  In their more uncharitable moments, do they abuse the freedoms you have given them?  If so, how can you redesign your work, and ‘nudge’ them into better behavior?  It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem.  And although Moses came down from Mount Sinai with all ten commandments, you cannot and should not expect to get it right on a first pass.  Instead, release, observe, adapt, and re-release.  All releases are soft releases, everything is provisional, and nothing is quite perfect.  That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter.  Although they seem to be similar, they couldn’t be more different.  Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself.  If you want to have access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page.  Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut down your application if they don’t like it, or perceive it as somehow competitive with Facebook.  Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach.  From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users.  Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data.  Twitter provided very clear (and remarkably straightforward) instructions on how to access their data, and threw the gates open wide.
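For a sense of just how wide those gates stood, here’s roughly what fetching a public timeline looked like with the Twitter API of that era – the version 1 endpoint served plain JSON to anyone, no key required.  (That endpoint has long since been retired, so treat this as period illustration rather than working code.)

    # Fetch the five most recent public Tweets from a user, circa 2010.
    import json
    import urllib.request

    url = ("http://api.twitter.com/1/statuses/user_timeline.json"
           "?screen_name=some_public_user&count=5")
    with urllib.request.urlopen(url) as response:
        tweets = json.load(response)

    for tweet in tweets:
        print(tweet["created_at"], tweet["text"])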

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks.  People have developed computer programs that send Tweets when the program is about to crash, created vast art projects which allow the public to participate from anywhere around the world, or even a little belt worn by a pregnant woman which sends out a Tweet every time the baby kicks!  It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself.  Twitter has become a building block: when you write a program which needs to send a message, you use Twitter.  Facebook isn’t a building block.  It’s a monolith.

How do you build for openness?  Consider: another position the user might occupy is someone trying to use your work as a building block within their own project.  Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again?  Or is it opaque, seamless, and closed?  What about the data you collect, data the user has generated?  Where does that live?  Can it be exported and put to work in another application, or on another website?  Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful).  The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them.  If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
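Here is a minimal sketch of what ‘leaving the door open’ can look like in practice: the same user-generated data exposed both as JSON over REST and as an RSS feed, so that someone else’s code can build on it.  Flask and the sample records are purely illustrative:

    # One small dataset, two open doors: /api/contributions (JSON) and
    # /feeds/contributions.rss (RSS), both readable by other people's code.
    from flask import Flask, Response, jsonify

    app = Flask(__name__)

    CONTRIBUTIONS = [
        {"id": 1, "author": "student_a", "title": "Water use in our schoolyard"},
        {"id": 2, "author": "student_b", "title": "Mapping local bird calls"},
    ]

    @app.route("/api/contributions")
    def contributions_json():
        return jsonify(CONTRIBUTIONS)            # machine-readable, re-usable

    @app.route("/feeds/contributions.rss")
    def contributions_rss():
        items = "".join(
            "<item><title>{}</title><guid>{}</guid></item>".format(c["title"], c["id"])
            for c in CONTRIBUTIONS
        )
        rss = ("<rss version='2.0'><channel><title>Contributions</title>"
               + items + "</channel></rss>")
        return Response(rss, mimetype="application/rss+xml")

    if __name__ == "__main__":
        app.run()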

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’  You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user.  These are not precisely the same Web2.0 domains others might identify.  That’s because Web2.0 has become a very ill-defined term.  It can mean whatever we want it to mean.  But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another.  In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create.  We need to make room for them.  If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

Paperworks / Padworks

I: Paper, works

At the end of May I received an email from a senior official at the Victorian Department of Education and Early Childhood Development.  DEECD was in the midst of issuing an RFP, looking for new content to populate FUSE (Find, Use, Share, Education), an important component of ULTRANET, the mega-über-supremo educational intranet meant to solve everyone’s educational problems for all time.  Or, well, perhaps I overstate the matter.  But it could be a big deal.

The respondents to the RFP were organizations who already had working relationships with DEECD, and therefore were both familiar with DEECD processes and had been vetted in their earlier relationships.  This meant that the entire RFP-to-submission window could be telescoped down to just a bit less than three weeks.  The official asked me if I’d be interested in being one of the external reviewers for these proposals as they passed through an official evaluation process.  I said I’d be happy to do so, and asked how many proposals I’d have to review.  “I doubt it will be more than thirty or forty,” he replied.  Which seemed quite reasonable.

As is inevitably the case, most of the proposals landed in the DEECD mailbox just a few hours before the deadline for submissions.  But the RFP didn’t result in thirty or forty proposals.  The total came to almost ninety.  All of which I had to review and evaluate in the thirty-six hours between the time they landed in my inbox and the start of the formal evaluation meeting.  Oh, and first I needed to print them out, because there was no way I’d be able to do that much reading in front of my computer.

Let’s face it – although we do sit and read our laptop screens all day long, we rarely read anything longer than a few paragraphs.  If it passes 300 words, it tips the balance into ‘tl;dr’ (too long; didn’t read) territory, and unless it’s vital for our employment or well-being, we tend to skip it and move along to the next little tidbit.  Having to sit and read through well over nine hundred pages of proposals on my laptop was a bridge too far. I set off to the print shop around the corner from my flat, to have the whole mess printed out.  That took nearly 24 hours by itself – and cost an ungodly sum.  I was left with a huge, heavy box of paper which I could barely lug back to my flat.  For the next 36 hours, this box would be my ball and chain.  I’d have to take it with me to the meeting in Melbourne, which meant packing it for the flight, checking it as baggage, lugging it to my hotel room, and so forth, all while trying to digest its contents.

How the heck was that going to work?

This is when I looked at my iPad.  Then I looked back at the box.  Then back at the iPad.  Then back at the box.  I’d gotten my iPad barely a week before – when they first arrived in Australia – and I was planning on taking it on this trip, but without an accompanying laptop.  This, for me, would be a bit of a test.  For the last decade I’d never traveled anywhere without my laptop.  Could I manage a business trip with just my iPad?  I looked back at the iPad.  Then at the box.  You could practically hear the penny drop.

I immediately began copying all these nine hundred-plus pages of proposals and accompanying documentation from my laptop to the storage utility Dropbox.  Dropbox gives you 2 GB of free Internet storage, with an option to rent more space, if you need it.  Dropbox also has an iPad app (free) – so as soon as the files were uploaded to Dropbox, I could access them from my iPad.

I should take a moment and talk about the model of the iPad I own.  I ordered the 16 GB version – the smallest storage size offered by Apple – but I got the 3G upgrade, paired with Telstra’s most excellent pre-paid NextG service.  My rationale was that I imagined this iPad would be a ‘cloud-centric’ device.  The ‘cloud’ is a term that’s come into use quite recently.  It means software is hosted somewhere out there on the Internet – the ‘cloud’ – rather than residing locally on your computer.  Gmail is a good example of software that’s ‘in the cloud’.  Facebook is another.  Twitter, another.  Much of what we do with our computers – iPad included – involves software accessed over the Internet.  Many of the apps for sale in Apple’s iTunes App Store are useless or pointless without an Internet connection – these are the sorts of applications which break down the neat boundary between the computer and the cloud.  Cloud computing has been growing in importance over the last decade; by the end of this one it will simply be the way things work.  Your iPad will be your window onto the cloud, onto everything you have within that cloud: your email, your documents, your calendar, your contacts, etc.

I like to live in the future, so I made sure that my iPad didn’t have too much storage – which forces me to use the cloud as much as possible.  In this case, that was precisely the right decision, because I ditched the ten-kilo box of paperwork and boarded my flight to Melbourne with my iPad at my side.  I pored through the proposals, one after another, bringing them up in Dropbox, evaluating them, making some notes in my (paper) notebook, then moving along to the next one.  My iPad gave me a fluidity and speed that I could never have had with that box of paper.

When I arrived at my hotel, another two large boxes were waiting for me.  Here again were the proposals, carefully ordered and placed into several large ring binders.  I’d be expected to tote these to the evaluation meeting.  Fortunately, that was only a few floors above my hotel room.  That said, it was a bit of a struggle to get those boxes and my luggage into the elevator and up to the meeting room.  I put those boxes down – and never looked at them again.  As the rest of the evaluation panel dug through their boxes to pull out the relevant proposals, I did a few motions with my fingertips, and found myself on the same page.

Yes, they got a bit jealous.

We finished the evaluation on time and quite successfully, and at the end of the day I left my boxes with the DEECD coordinator, thanking her for her hard work printing all these materials, but begging off.  She understood completely.  I flew home, lighter than I might otherwise have been, had I stuck to paper.

For at least the past thirty years – which is about the duration of the personal computer revolution – people have been talking about the advent of the paperless office.  Truth be told, we use more paper in our offices than ever before, our printers constantly at work with letters, notices, emails, and so forth.  We haven’t been able to make the leap to a paperless office – despite our comprehensive ability to manipulate documents digitally – because we lacked something that could actually replace paper.  Computers as we’ve known them simply can’t replace a piece of paper. For a whole host of reasons, it just never worked.  To move to a paperless office – and a paperless classroom – we had to invent something that could supplant paper.  We have it now.  After a lot of false starts, tablet computing has finally arrived – and it’s here to stay.

I can sit here, iPad in hand, and have access to every single document that I have ever written.  You will soon have access to every single document you might ever need, right here, right now.  We’re not 100% there yet – but that’s not the fault of the device.  We’re going to need to make some adjustments to our IT strategies, so that we can have a pervasively available document environment.  At that point, your iPad becomes the page which contains all other pages within it.  You’ll never be without the document you need at the time you need it.

Nor will we confine ourselves to text.  The world is richer than that.  iPad is the lightbox that contains all photographs within it; it is the television which receives every bit of video produced by anyone – professional or amateur – ever.  It is already the radio (via the Pocket Tunes app) which receives almost every major radio station broadcasting anywhere in the world.  And it is every one of a hundred-million-plus websites and maybe a trillion web pages.  All of this is here, right here in the palm of your hand.

What matters now is how we put all of this to work.

II: Pad, works

Let’s project ourselves into the future just a little bit – say around ten years.  It’s 2020, and we’ve had iPads for a whole decade.  The iPads of 2020 will be vastly more powerful than the ones in use today, because of something known as Moore’s Law.  This law states that computers double in power every twenty-four months.  Ten years is five doublings, or 32 times.  That rule extends to the display as well as the computer.  The ‘Retina Display’ recently released on Apple’s iPhone 4 shows us where that technology is going – displays so fine that you can’t make out the individual pixels with your eye.  The screen of your iPad version 11 will be visually indistinguishable from a sheet of paper.  The device itself will be thinner and lighter than the current model.  Battery technology improves at about 10% a year, so half the weight of the battery – which is the heaviest component of the iPad – will disappear.  You’ll still get at least ten hours of use; that’s considered essential to the user experience.  And you’ll still be connected to the mobile network.
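If you want to check the arithmetic, here’s a rough back-of-the-envelope sketch in Python.  The two-year doubling period and the 64 GB baseline are assumptions chosen purely for illustration, but they are where the ‘32 times’ figure – and the ‘two terabytes’ mentioned below – come from:

# A rough back-of-the-envelope projection of Moore's Law.
# Assumptions (mine, for illustration only): power doubles every 24 months,
# and the 2010 baseline is a top-of-the-line 64 GB iPad.

def doublings(years, months_per_doubling=24):
    return (years * 12) / months_per_doubling

factor = 2 ** doublings(10)          # ten years out
print(factor)                        # 32.0

baseline_storage_gb = 64
print(baseline_storage_gb * factor)  # 2048.0 GB, i.e. roughly two terabytes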

The mobile network of 2020 will look quite different from the mobile network of 2010.  Right now we’re just on the cusp of moving into 4th generation mobile broadband technology, known colloquially as LTE, or Long-Term Evolution.   Where you might get speeds of 7 megabits per second with NextG mobile broadband – under the best conditions – LTE promises speeds of 100 megabits.  That’s as good as a wired connection – as fast as anything promised by the National Broadband Network!  In a decade’s time we’ll be moving through 5th generation and possibly into 6th generation mobile technologies, with speeds approaching a gigabit, a billion bits per second.  That may sound like a lot, but again, it represents roughly 32 times the capacity of the mobile broadband networks of today.  Moore’s Law has a broad reach, and will transform every component of the iPad.

iPad will have thirty-two times the storage, not that we’ll need it, given that we’ll be connected to the cloud at gigabit speeds, but if it’s there, someone will find a use for the two terabytes or more included in our iPad.  (Perhaps a full copy of Wikipedia?  Or all of the books published before 1915?)  All of this will still cost just $700.  If you want to spend less – and have a correspondingly less-powerful device – you’ll have that option.  I suspect you’ll be able to pick up an entry-level device – the equivalent of iPad 7, perhaps – for $49 at JB HiFi.

What sorts of things will the iPad 10 be capable of?  How do we put all of that power to work?  First off, iPad will be able to see and hear in meaningful ways.  Voice recognition and computer vision are two technologies which are on the threshold of becoming ‘twenty-year overnight successes’.  We can already speak to our computers, and, most of the time, they can understand us.  With devices like the Xbox Kinect, cameras allow the computer to see the world around it, and recognize bits of it.  Your iPad will hear you, understand your voice, and follow your commands.  It will also be able to recognize your face, your motions, and your emotions.

It’s not clear that computers as we know them today – that is, desktops and laptops – will be common in a decade’s time.  They may still be employed in very specialized tasks.  For almost everything else, we will be using our iPads.  They’ll rarely leave our sides.  They will become so pervasive that in many environments – around the home, in the office, or at school – we will simply have a supply of them sufficient to the task.  When everything is so well connected, you don’t need to have personal information stored in a specific iPad.  You will be able to pick up any iPad and – almost instantaneously – the custom features which mark that device as uniquely yours will be downloaded into it.

All of this is possible.  Whether any of it eventuates depends on a whole host of factors we can’t yet see clearly.  People may find voice recognition more of an annoyance than an affordance.  The idea of your iPad watching you might seem creepy to some people.  But consider this: I have a good friend with two elderly parents – his dad is in his early 80s, his mom is in her mid-70s.  He lives in Boston while they live in Northern California.  But he needs to keep in touch; he needs to be able to look in on them.  Next year, when iPad acquires a forward-facing camera – so it can be used for video conferencing – he’ll buy them an iPad and stick it to the wall of their kitchen with Velcro, so that he can ring in anytime and check on them, and they can ring him, anytime.  It’s a bit ‘Jetsons’, when you think about it.  And that’s just what will happen next year.  By 2020 the iPad will be able to track your progress around the house, monitor what prescriptions you’ve taken (or missed), whether you’ve left the house, and for how long.  It’ll be a basic accessory, necessary for everyone caring for someone in their final years – or in their first ones.

Now that we’ve established the basic capabilities and expectations for this device, let’s imagine them in the hands of students everywhere throughout Australia.  No student, however poor, will be without their own iPad – the Government of the day will see to that.  These students of 2020 are at least as well connected as you are, as their parents are, as anyone is.  To them, iPads are not new things; they’ve always been around.  They grew up in a world where touch is the default interface.  A computer mouse, for them, seems as archaic as a manual typewriter does to us.  They’re also quite accustomed to being immersed within a field of very-high-speed mobile broadband.  They just expect it to be ‘on’, everywhere they go, and expect that they will have access to it as needed.

How do we make education in 2020 meet their expectations?  This is not the universe of ‘chalk and talk’.  This is a world where the classroom walls have been effectively leveled by the pervasive presence of the network, and a device which can display anything on that network.  This is a world where education can be provided anywhere, on demand, as called for.  This is a world where the constructivist premise of learning-by-doing can be implemented beyond year two.  Where a student working on an engine can stare at a three-dimensional breakout model of the components while engaging in a conversation with an instructor half a continent away.  Where a student learning French can actually engage with a French student learning English, and do so without much more than a press of a few buttons.  Where a student learning about the Eureka Stockade can survey the ground, iPad in hand, and find within the device hidden depths to the history.  iPad is the handheld schoolhouse, and it is, in many ways, the thing that replaces the chalkboard, the classroom, and the library.

But iPad does not replace the educator.  We need to be very clear on that, because even as educational resources multiply beyond our wildest hopes – more on that presently – students still need someone to guide them into understanding.  The more we virtualize the educational process, the more important and singular our embodied interactions become.  Some of this will come from far away – the iPad offers opportunities for distance education undreamt of just a few years ago – but much more of it will be close up.  Even if the classroom does not survive (and I doubt it will fade away completely in the next ten years, but it will begin to erode), we will still need a place for an educator/mentor to come into contact with students.  That’s been true since the days of Socrates (probably long before that), and it’s unlikely to change anytime soon.  We learn best when we learn from others.  We humans are experts in mimesis, in learning by imitation.  That kind of learning requires us to breathe the same air together.

No matter how much power we gain from the iPad, no matter how much freedom it offers, no device offers us freedom from our essential nature as social beings.  We are born to work together, we are designed to learn from one another.  iPad is an unbelievably potent addition to the educator’s toolbox, but we must remember not to let it cloud our common sense.  It should be an amplifier, not a replacement, something that lets students go further, faster than before.  But they should not go alone.

The constant danger of technology is that it can interrupt the human moment.  We can be too busy checking our messages to see the real people right before our eyes.  This is the dilemma that will face us in the age of the iPad.  Governments will see them as cost-saving devices, something that could substitute for the human touch.  If we lose touch, if we lose the human moment, we also lose the biggest part of our ability to learn.

III:  The Work of Nations

We can reasonably predict that this is the decade of the tablet, and the decade of mobile broadband.  The two of them fuse in the iPad, to produce a platform which will transform education, allowing it to happen anywhere a teacher and a student share an agreement to work together.  But what will they be working on?  Next year we’ll see the rollout of the National Curriculum, which specifies the material to be covered in core subject areas in classrooms throughout the nation.

Many educators view the National Curriculum as a mandate for a bland uniformity, a lowest-common-denominator approach to instruction, which will simply leave the teacher working point-by-point through the curriculum’s arc.  This is certainly not the intent of the project’s creators.  Dr. Evan Arthur, who heads up the Digital Education Revolution taskforce in the Department of Education, Employment and Workplace Relations, publicly refers to the National Curriculum as a ‘greenfields’, as though all expectations were essentially phantoms of the mind, a box we draw around ourselves, rather than one that objectively exists.

The National Curriculum outlines the subject areas to be covered, but says very little if anything about pedagogy.  Instructors and school systems are free to exercise their own best judgment in selecting an approach appropriate to their students, their educators, and their facilities.  That’s good news, and means that any blandness that creeps into pedagogy because of the National Curriculum is more a reflection of the educator than the educational mandate.

Precisely because it places educators and students throughout the nation onto the same page, the National Curriculum also offers up an enormous opportunity.  We know that all year nine students in Australia will be covering a particular suite of topics.  This means that every educator and every student throughout the nation can be drawing from and contributing to a ‘common wealth’ of shared materials – podcasts of lectures, educational chatrooms, lesson plans, and on and on and on.  As the years go by, this wealth of material will grow as more teachers and more students add their own contributions to it.  The National Curriculum isn’t a mandate, per se; it’s better to think of it as an empty Wikipedia.  All the article headings are there, all the taxonomy, all the cross references, but none of the content.  The next decade will see us all build up that base of content, so that by 2020, a decade’s worth of work will have resulted in something truly outstanding to offer both educators and students in their pursuit of curriculum goals.
Well, maybe.

I say all of this as if it were a sure thing.  But it isn’t.  Everyone secretly suspects the National Curriculum will ruin education.  I ask that we see things differently.  The National Curriculum could be the savior of education in the 21st century, but in order to travel the short distance in our minds between where we are (and where we will go if we don’t change our minds) and where we need to be, we need to think of every educator in Australia as a contributor of value.  More than that, we need to think of every student in Australia as a contributor of value.  That’s the vital gap that must be crossed.  Educators spend endless hours working on lesson plans and instructional designs – they should be encouraged to share this work.  Many of them are too modest or too scared to trumpet their own hard yards – but that work is something educators and students across the nation can benefit from.  Students, as they pass through the curriculum, create their own learning materials, which must be preserved, where appropriate, for future years.

We should do this.  We need to do this.  Right now we’re dropping the best of what we have on the floor as teachers retire or move on in their careers.  This is gold that we’re letting slip through our fingers.  We live in an age where we only lose something when we neglect to capture it.  Until now we could let ourselves off easy, because we haven’t had a framework to capture and share this pedagogy.  But now we have the means to capture, a platform for sharing – the Ultranet – and a tool which brings access to everyone – the iPad.  We’ve never had these stars aligned in such a way before.  Only just now – in 2010 – is it possible to dream such big dreams.  It won’t even cost much money.  Yes, the state and federal governments will be investing in iPads and superfast broadband connections for the schools, but everything else comes from a change in our behavior, from a new sense of the full value of our activities.  We need to look at ourselves not merely as the dispensers of education to receptive students, but as engaged participant-creators working to build a lasting body of knowledge.

In so doing we tie everything together, from library science to digital citizenship, within an approach that builds shared value.  It allows a student in Bairnsdale to collaborate with another in Lorne, both working through a lesson plan developed by an educator in Katherine.  Or a teacher in Lakes Entrance to offer her expertise to a classroom in Maffra.  These kinds of things have been possible before, but the National Curriculum gives us the reason to do it.  iPad gives us the infrastructure to dream wild, and imagine how to practice some ‘creative destruction’ in the classroom – tearing down its walls in order to make the classroom a persistent, ubiquitous feature of the environment, to bring education everywhere it’s needed, to everyone who needs it, whenever they need it.

This means that all of the preceding is really part of a larger transformation, from education as this singular event that happens between ages six and twenty-two, to something that is persistent and ubiquitous; where ‘lifelong learning’ isn’t a catchphrase, but rather, a set of skills students begin to acquire as soon as they land in pre-kindy.  The wealth of materials which we will create as we learn how to share the burden of the National Curriculum across the nation has value far beyond the schoolhouse.  In a nation of immigrants, it makes sense to have these materials available, because someone is always arriving in the middle of their lives and struggling to catch up to and integrate themselves within the fabric of the nation.  Education is one way that this happens.  People also need to have increasing flexibility in their career choices, to suit a much more fluid labor market.  This means that we continuously need to learn something new, or something, perhaps, that we didn’t pay much attention to when we should have.  If we can share our learning, we can close this gap.  We can bring the best of what we teach to everyone who has the need to know.

And there we are.  But before I conclude, I should bring up the most obvious point – one so obvious that we might forget it.  The iPad is an excellent toy.  Please play with it.  I don’t mean use it.  I mean explore it.  Punch all the buttons.  Do things you shouldn’t do.  Press the big red button that says, “Don’t press me!”  Just make sure you have a backup first.

We know that children learn by exploration – that’s the foundation of Constructivism – but we forget that we ourselves also learn by exploration.  The joy we feel when we play with our new toy is the feeling a child has when he confronts a box of LEGOs, or a new video game – it’s the joy of exploration, the joy of learning.  That joy is foundational to us.  If we didn’t love learning, we wouldn’t be running things around here.  We’d still be in the trees.

My favorite toys on my iPad are Pocket Universe – which creates a 360-degree real-time observatory on your iPad; Pulse News – which brings some beauty to my RSS feeds; Observatory – which turns my iPad into a bit of an orrery; Air Video – which allows me to watch videos streamed from my laptop to my iPad; and GoodReader – the one app you simply must spend $1.19 on, because it is the most useful app you’ll ever own.  These are my favorites, but I own many others, and enjoy all of them.  There are literally tens of thousands to choose from, some of them educational, some, just for fun.  That’s the point: all work and no play makes iPad a dull toy.

So please, go and play.  As you do, you’ll come to recognize the hidden depths within your new toy, and you’ll probably feel that penny drop, as you come to realize that this changes everything.  Or can, if we can change ourselves.

Dense and Thick

I: The Golden Age

In October of 1993 I bought myself a used SPARCstation.  I’d just come off of a consulting gig at Apple, and, flush with cash, wanted to learn UNIX systems administration.  I also had some ideas about coding networking protocols for shared virtual worlds.  Soon after I got the SPARCstation installed in my lounge room – complete with its thirty-kilo monster of a monitor – I grabbed a modem, connected it to the RS-232 port, configured SLIP, and dialed out onto the Internet.  Once online I used FTP, logged into SUNSITE and downloaded the newly released NCSA Mosaic, a graphical browser for the World Wide Web.

I’d first seen Mosaic running on an SGI workstation at the 1993 SIGGRAPH conference.  I knew what hypertext was – I’d built a MacOS-based hypertext system back in 1986 – so I could see what Mosaic was doing, but there wasn’t much there.  Not enough content to make it really interesting.  It was the same problem that had bedeviled all hypertext systems since Douglas Engelbart’s first demo, back in 1968.  Without sufficient content, hypertext systems are fundamentally uninteresting.  Even HyperCard, Apple’s early experiment in hypertext, never really moved beyond the toy stage.  To make hypertext interesting, it must be broadly connected – beyond a document, beyond a hard drive.  Either everything is connected, or everything is useless.

In the three months between my first click on NCSA Mosaic and when I fired it up in my lounge room, a lot of people had come to the Web party.  The master list of Websites – maintained by CERN, the birthplace of the Web – kept growing.  Over the course of the last week of October 1993, I visited every single one of those Websites.  Then I was done.  I had surfed the entire World Wide Web.  I was even able to keep up, as new sites were added.

This gives you a sense of the size of the Web universe in those very early days.  Before the explosive ‘inflation’ of 1994 and 1995, the Web was a tiny, tidy place filled mostly with academic websites.  Yet even so, the Web had the capacity to suck you in.  I’d find something that interested me – astronomy, perhaps, or philosophy – and with a click-click-click find myself deep within something that spoke to me directly.  This, I believe, is the core of the Web experience, an experience we’re now so many years away from that we tend to overlook it.  At its essence, the Web is personally seductive.

I realized the universal truth of this statement on a cold night in early 1994, when I dragged my SPARCstation and boat-anchor monitor across town to a house party.  This party, a monthly event known as Anon Salon, was notorious for attracting the more intellectual and artistic crowd in San Francisco.  People would come to perform, create, demonstrate, and spectate.  I decided I would show these people this new-fangled thing I’d become obsessed with.  So, that evening, as the front door opened and another person entered, I’d sidle up alongside them and ask, “So, what are you interested in?”  They’d mention their current hobby – gardening or vaudeville or whatever it might be – and I’d use the brand-new Yahoo! category index to look up a web page on the subject.  They’d be delighted, and begin to explore.  At no point did I say, “This is the World Wide Web.”  Nor did I use the word ‘hypertext’.  I let the intrinsic seductiveness of the Web snare them, one by one.

Of course, a few years later, San Francisco became the epicenter of the Web revolution.  Was I responsible for that?  I’d like to think so, but I reckon San Francisco was a bit of a nexus.  I wasn’t the only one exploring the Web.  That night at Anon Salon I met Jonathan Steuer, who walked on up and said, “Mosaic, hmm?  How about you type in ‘www.hotwired.com’?”  Steuer was part of the crew at work, just a few blocks away, bringing WIRED magazine online.  Everyone working on the Web shared the same fervor – an almost evangelical belief that the Web changes everything.  I didn’t have to tell Steuer, and he didn’t have to tell me.  We knew.  And we knew that if we simply shared the Web – not the technology, not its potential, but its real, seductive human face – we’d be done.

That’s pretty much how it worked out: the Web exploded from the second half of 1994, because it appeared to every single person who encountered it as the object of their desire.  It was, and is, all things to all people.  This makes it the perfect love machine – nothing can confirm your prejudices better than the Web.  It also makes the Web a very pretty hate machine.  It is the reflector and amplifier of all things human.  We were completely unprepared, and for that reason the Web has utterly overwhelmed us.  There is no going back.  If every website suddenly crashed, we would find another way to recreate the universal infinite hypertextual connection.

In the process of overwhelming us – in fact, part of the process itself – the Web has hoovered up the entire space of human culture; anything that can be digitized has been sucked into the Web.  Of course, this presents all sorts of thorny problems for individuals who claim copyright over cultural products, but they are, in essence, swimming against the tide.  The rest, everything that marks us as definably human, everything that is artifice, has, over the last fifteen years, been neatly and completely sucked into the space of infinite connection.  The project is not complete – it will never be complete – but it is substantially underway, and more will simply be more: it will not represent a qualitative difference.  We have already arrived at a new space, where human culture is now instantaneously and pervasively accessible to any of the four and a half billion network-connected individuals on the planet.

This, then, is the Golden Age, a time of rosy dawns and bright beginnings, when everything seems possible.  But this age is drawing to a close.  Two recent developments will, in retrospect, be seen as the beginning of the end.  The first of these is the transformation of the oldest medium into the newest.  The book is coextensive with history, with the largest part of what we regard as human culture.  Until five hundred and fifty years ago, books were handwritten, rare and precious.  Moveable type made books a mass medium, and lit the spark of modernity.  But the book, unlike nearly every other medium, has resisted its own digitization.  This year the defenses of the book have been breached, and ones and zeroes are rushing in.  Over the next decade perhaps half or more of all books will ephemeralize,  disappearing into the ether, never to return to physical form.  That will seal the transformation of the human cultural project.

The second development is the arrival of the Web-as-appliance: the Web is now leaving the rarefied space of computers and mobiles-as-computers, and will come to be seen as something as mundane as a book or a dinner plate.  Apple’s iPad is the first device of an entirely new class which treats the Web as an appliance, as something that is pervasively just there when needed, and put down when not.  The genius of Apple’s design is its extreme simplicity – too simple, I might add, for most of us.  It presents the Web as a surface, nothing more.  iPad is a portal into the human universe, stripped of everything that is a computer.  It is emphatically not a computer.  Now, we can discuss the relative merits of Apple’s design decisions – and we will, for some years to come.  But the basic strength of the iPad’s simplistic design will influence what the Web is about to become.

eBooks and the iPad bookend the Golden Age; together they represent the complete translation of the human universe into a universally and ubiquitously accessible form.  But the human universe is not the whole universe.  We tend to forget this as we stare into the alluring and seductive navel of our ever-more-present culture.  But the real world remains, and loses none of its importance even as the flashing lights of culture grow brighter and more hypnotic.

II: The Silver Age

Human beings have the peculiar capability to endow material objects with inner meaning.  We know this as one of the basic characteristics of humanness.  From the time a child anthropomorphizes a favorite doll or wooden train, we imbue the material world with the attributes of our own consciousness.  Soon enough we learn to discriminate between the animate and the inanimate, but we never surrender our continual attribution of meaning to the material world.  Things are never purely what they appear to be, instead we overlay our own meanings and associations onto every object in the world.  This process actually provides the mechanism by which the world comes to make sense to us.  If we could not overload the material world with meaning, we could not come to know it or manipulate it.

This layer of meaning is most often implicit; only in works of ‘art’ does the meaning crowd into the definition of the material itself.  But none of us can look at a thing and be completely innocent about its hidden meanings.  They constantly nip at the edges of our consciousness, unless, Zen-like, we practice an ‘emptiness of mind’, and attempt to encounter the material in an immediate, moment-to-moment awareness.  For those of us not in such a blessed state, the material world has a subconscious component.  Everything means something.  Everything is surrounded by a penumbra of meaning, associations that may be universal (an apple can invoke the Fall of Man, or Newton’s Laws of Gravity), or something entirely specific.  Through all of human history the interiority of the material world has remained hidden except in such moments as when we choose to allude to it.  It is always there, but rarely spoken of.  That is about to change.

One of the most significant, yet least understood implications of a planet where everyone is ubiquitously connected to the network via the mobile is that it brings the depth of the network ubiquitously to the individual.  You are – amazingly – connected to the other five billion individuals who carry mobiles, and you are also connected to everything that’s been hoovered into cyberspace over the past fifteen years.  That connection did not become entirely apparent until last year, as the first mobiles appeared with both GPS and compass capabilities.  Suddenly, it became possible to point through the camera on a mobile, and – using the location and orientation of the device – search through the network.

This technique has become known as ‘Augmented Reality’, or AR, and it promises to be one of the great growth areas in technology over the next decade – but perhaps not for the reasons the leaders of the field currently envision.  The strength of AR is not what it brings to the big things – the buildings and monuments – but what it brings to the smallest and most common objects in the material world.  At present, AR is flashy, but not at all useful.  It’s about to make a transition.  It will no longer be spectacular, but we’ll wonder how we lived without it.

Let me illustrate the nature of this transition, with examples drawn from my own experience.  These three ‘thought experiments’ represent different axes of the shift from a world of implicit meaning to a world where the implicit has become explicit.  Once meaning is exposed, it can be manipulated: this is something unexpected, and unexpectedly powerful.

Example One:  The Book

Last year I read a wonderful book.  The Rest is Noise: Listening to the Twentieth Century, by Alex Ross, is a thorough and thoroughly enjoyable history of music in the 20th century.  By music, Ross means what we would commonly call ‘classical’ music, even though the Classical period ended some two hundred years ago.  That’s not as stuffy as it sounds: George Gershwin and Aaron Copland are both major figures in 20th century music, though their works have always been classed as ‘popular’.

Ross’ book has a companion website, therestisnoise.com, which offers up chapter-by-chapter samples of the composers whose lives and exploits he explores in the text.  When I wrote The Playful World, back in 2000, and built a companion website to augment the text, it was considered quite revolutionary, but this is all pretty much standard for better books these days.

As I said earlier, the book is on the edge of ephemeralization.  It wants to be digitized, because it has always been a message, encoded.  When I dreamed up this example, I thought it would be very straightforward: you’d walk into your bookstore, point your smartphone at a book that caught your fancy, and instantly you’d find out what your friends thought of it, what their friends thought of it, what the reviewers thought of it, and so on.  You’d be able to make a well-briefed decision on whether this book is the right book for you.  Simple.  In fact, Google Labs has already shown a basic example of this kind of technology in a demo running on Android.
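Just as a sketch of that simple version – every name and number here is invented, standing in for an image-recognition service on one side and a social layer of reviews on the other – it might look something like this:

from dataclasses import dataclass

# A hypothetical sketch of 'point your phone at a book, see what your
# friends thought of it'.  The two lookup tables below are invented
# stand-ins for an image-recognition service and a social review service.

@dataclass
class Opinion:
    reviewer: str
    rating: float      # 0.0 to 5.0
    closeness: float   # 1.0 = close friend, 0.1 = distant stranger

RECOGNISED = {"photo_of_cover.jpg": "isbn-placeholder"}

OPINIONS = {
    "isbn-placeholder": [
        Opinion("close friend", 4.5, 1.0),
        Opinion("reviewer I trust", 4.0, 0.5),
        Opinion("stranger", 2.0, 0.1),
    ]
}

def should_i_buy(photo):
    isbn = RECOGNISED[photo]      # identify the book
    opinions = OPINIONS[isbn]     # gather what my network thought of it
    # Weight the people closest to me most heavily.
    score = sum(o.rating * o.closeness for o in opinions) / sum(o.closeness for o in opinions)
    return round(score, 1), opinions

print(should_i_buy("photo_of_cover.jpg"))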

But that’s not what a book is anymore.  Yes, it’s good to know whether you should buy this or that book, but a book represents an investment of time, and an opportunity to open a window into an experience of knowledge in depth.  It’s this intention that the device has to support.  As the book slowly dissolves into the sea of fragmentary but infinitely threaded nodes of hypertext which are the human database, the device becomes the focal point, the lens through which the whole book appears, and appears to assemble itself.

This means that the book will vary, person to person.  My fragments will be sewn together with my threads, yours with your threads.  The idea of unitary authorship – persistent over the last five hundred years – won’t be overwhelmed by the collective efforts of crowdsourcing, but rather by the corrosive effects of hyperconnection.  The more connected everything becomes, the less prone we are to linearity.  We already see this in the ‘tl;dr’ phenomenon, where any text over 300 words becomes too onerous to read.

Somehow, whatever the book is becoming must balance the need for clarity and linearity against the centrifugal and connective forces of hypertext.  The book is about to be subsumed within the network; the device is the place where it will reassemble into meaning.  The implicit meaning of the book – that it has a linear story to tell, from first page to last – must be made explicit if the idea and function of the book is to survive.

The book stands on the threshold, between the worlds of the physical and the immaterial.  As such it is pulled in both directions at once.  It wants to be liberated, but will be utterly destroyed in that liberation.  The next example is something far more physical, and, consequently, far more important.

Example Two: Beef Mince

I go into the supermarket to buy myself the makings for a nice Spaghetti Bolognese.  Among the ingredients, I’ll need some beef mince (ground beef for those of you in the United States) to put into the sauce.  Today I’d walk up to the meat case and throw a random package into my shopping trolley.  If I were being thoughtful, I’d probably read the label carefully, to make sure the expiration date wasn’t too close.  I might also check to see how much fat is in the mince.  Or perhaps it’s grass-fed beef.  Or organically grown.  All of this information is offered up on the label placed on the package.  And all of it is so carefully filtered that it means nearly nothing at all.

What I want to do is hold my device up to the package, and have it do the hard work.  Go through the supermarket to the distributor, through the distributor to the abattoir, through the abattoir to the farmer, through the farmer to the animal itself.  Was it healthy?  Where was it slaughtered?  Is that abattoir healthy?  (This isn’t much of an issue in Australia or New Zealand, but in America things are quite a bit different.)  Was it fed lots of antibiotics in a feedlot?  Which ones?

And – perhaps most importantly – what about the carbon footprint of this little package of mince?  How much CO2 was created?  How much methane?  How much water was consumed?  These questions, at the very core of 21st century life, need to be answered on demand if we can be expected to adjust our lifestyles so as to minimize our footprint on the planet.  Without a system like this, it is essentially impossible.  With such a system it can potentially become easy.  As I walk through the market, popping items into my trolley, my device can record and keep me informed of a careful balance between my carbon budget and my financial budget, helping me to optimize both – all while referencing my purchases against sales on offer in other supermarkets.

Finally, what about the caloric count of that packet of mince?  And its nutritional value?  I should be tracking those as well – or rather, my device should – so that I can maintain optimal health.  I should know whether I’m getting too much fat, or insufficient fiber, or – as I’ll discuss in a moment – too much sodium.  Something should be keeping track of this.  Something that can watch and record and use that recording to build a model.  Something that can connect the real world of objects with the intangible set of goals that I have for myself.  Something that could do that would be exceptionally desirable.  It would be as seductive as the Web.
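Here is a minimal sketch of the running tally I have in mind.  The product figures and the budgets are invented for illustration; in the world I’m describing, the lookup would go out over the network, keyed by whatever the camera recognises in the trolley:

# A minimal, hypothetical shopping assistant that tallies money and carbon
# as items go into the trolley.  The 'database' below is invented; it stands
# in for a network lookup against real provenance and footprint data.

PRODUCTS = {
    "beef mince 500g": {"price": 6.50, "kg_co2": 13.0},
    "spaghetti 500g":  {"price": 1.20, "kg_co2": 0.8},
    "passata 700g":    {"price": 2.00, "kg_co2": 1.1},
}

class Trolley:
    def __init__(self, money_budget, carbon_budget_kg):
        self.money_budget = money_budget
        self.carbon_budget_kg = carbon_budget_kg
        self.items = []

    def add(self, name):
        self.items.append(PRODUCTS[name])

    def report(self):
        spent = sum(item["price"] for item in self.items)
        carbon = sum(item["kg_co2"] for item in self.items)
        return (f"${spent:.2f} of ${self.money_budget:.2f} spent, "
                f"{carbon:.1f} of {self.carbon_budget_kg:.1f} kg CO2 used")

trolley = Trolley(money_budget=50.00, carbon_budget_kg=20.0)
for item in ("beef mince 500g", "spaghetti 500g", "passata 700g"):
    trolley.add(item)
print(trolley.report())   # $9.70 of $50.00 spent, 14.9 of 20.0 kg CO2 used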

The more information we have at hand, the better the decisions we can make for ourselves.  It’s an idea so simple it is completely self-evident.  We won’t need to convince anyone of this, to sell them on the truth of it.  They will simply ask, ‘When can I have it?’  But there’s more.  My final example touches on something so personal and so vital that it may become the center of the drive to make the implicit explicit.

Example Three:  Medicine

Four months ago, I contracted adult-onset chickenpox.  Which was just about as much fun as that sounds.  (And yes, since you’ve asked, I did have it as a child.  Go figure.)  Every few days I had doctors come by to make sure that I was surviving the viral infection.  While the first doctor didn’t touch me at all – understandably – the second doctor took my blood pressure, and showed me the reading – 160/120, uncomfortably high.  He suggested that I go on Micardis, a common medication for hypertension.  I was too sick to argue, so I dutifully filled the prescription and began taking it that evening.

Whenever I begin taking a new medication – and I’m getting to an age where that happens with annoying regularity – I am always somewhat worried.  Medicines are never perfect; they work for a certain large cohort of people.  For others they do nothing at all.  For a far smaller number, they might be toxic.  So, when I popped that pill in my mouth I did wonder whether that medicine might turn out to be poison.

The doctor who came to see me was not my regular GP.  He did not know my medical history.  He did not know the history of the other medications I had been taking.  All he knew was what he saw when he walked into my flat.  That could be a recipe for disaster.  Not in this situation – I was fine, and have continued to take Micardis – but there are numerous other situations where medications can interact within the patient to cause all sorts of problems.  This is well known.  It is one of the drawbacks of modern pharmaceutical medicine.

This situation is only going to grow more intense as the population ages and pharmaceutical management of the chronic diseases of aging becomes ever-more-pervasive.  Right now we rely on doctors and pharmacists to keep their own models of our pharmaceutical consumption.  But that’s a model which is precisely backward.  While it is very important for them to know what drugs we’re on, it is even more important for us to be able to manage that knowledge for ourselves.  I need to be able to point my device at any medicine, and know, more or less immediately, whether that medicine will cure me or kill me.
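A minimal sketch of that check might look like the following.  The medication list and the interaction table are invented placeholders; a real system would draw on my own records and on a clinically validated knowledge base, neither of which exists in this form today:

# A hypothetical 'will this cure me or kill me?' check.  Both the medication
# list and the interaction table are invented for illustration; nothing here
# is clinical advice or a real drug database.

MY_MEDICATIONS = {"drug A", "drug B"}

# Invented example data: pairs of medicines that should not be mixed.
KNOWN_INTERACTIONS = {
    frozenset({"drug B", "drug C"}): "known adverse interaction",
}

def check_new_medicine(new_drug, current=MY_MEDICATIONS):
    warnings = []
    for existing in current:
        note = KNOWN_INTERACTIONS.get(frozenset({existing, new_drug}))
        if note:
            warnings.append(f"{new_drug} + {existing}: {note}")
    return warnings or [f"No known interactions between {new_drug} and my current medicines."]

# The doctor suggests a new prescription; the device checks it on the spot.
for line in check_new_medicine("drug C"):
    print(line)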

Over the next decade the cost of sequencing an entire human genome will fall from the roughly $5000 it costs today to less than $500 – well within the range of your typical medical test.  Once that happens, it will be possible to compile epidemiological data which compares various genomes to the effectiveness of drugs.  Initial research in this area has already shown that some drugs are more effective among certain ethnic groups than others.  Our genome holds the clue to why drugs work, why they occasionally don’t, and why they sometimes kill.

The device is the connection point between our genome – which lives, most likely, somewhere out on a medical cloud – and the medicines we take, and the diagnoses we receive.  It is our interface to ourselves, and in that becomes an object of almost unimaginable importance.  In twenty years’ time, when I am ‘officially’ a senior, I will have a handheld device – an augmented reality – whose sole intent is to keep me as healthy as possible for as long as possible.  It will encompass everything known about me medically, and will integrate with everything I capture about my own life – my activities, my diet, my relationships.  It will work with me to optimize everything we know about health (which is bound to be quite a bit by 2030) so that I can live a long, rich, healthy life.

These three examples represent the promise bound up in the collision between the handheld device and the ubiquitous, knowledge-filled network.  There are already bits and pieces of much of this in place.  It is a revolution waiting to happen.  That revolution will change everything about the Web, and why we use it, how, and who profits from it.

III:  The Bronze Age

By now, some of you sitting here listening to me this afternoon are probably thinking, “That’s the Semantic Web.  He’s talking about the Semantic Web.”  And you’re right, I am talking about the Semantic Web.  But the Semantic Web as proposed and endlessly promoted by Sir Tim Berners-Lee was always about pushing, pushing, pushing to get the machines talking to one another.  What I have demonstrated in these three thought experiments is a world that is intrinsically so alluring and so seductive that it will pull us all into it.  That’s the vital difference which made the Web such a success in 1994 and 1995.  And it’s about to happen once again.

But we are starting from near zero.  Right now, I should be able to hold up my device, wave it around my flat, and have an interaction with it about what’s in my flat.  I cannot.  I cannot Google for the contents of my home.  There is no place to put that information, even if I had it, nor systems to put that information to work.  It is exactly like the Web in 1993: the lights are on, but nobody’s home.  We have the capability to conceive of the world-as-a-database.  We have the capability to create that database.  We have systems which can put that database to work.  And we have the need to overlay the real world with that rich set of data.

We have the capability, we have the systems, we have the need.  But we have precious little connecting these three.  These are not businesses that exist yet.  We have not brought the real world into our conception of the Web.  That will have to change.  As it changes, the door opens to a crescendo of innovations that will make the Web revolution look puny in comparison.  There is an opportunity here to create industries bigger than Google, bigger than Microsoft, bigger than Apple.  As individuals and organizations figure out how to inject data into the real world, entirely new industry segments will be born.

I cannot tell you exactly what will fire off this next revolution.  I doubt it will be the integration of Wikipedia with a mobile camera.  It will be something much more immediate.  Much more concrete.  Much more useful.  Perhaps something concerned with health.  Or with managing your carbon footprint.  Those two seem the most obvious to me.  But the real revolution will probably come from a direction no one expects.  It’s nearly always that way.

There’s no reason to think that Wellington couldn’t be the epicenter of that revolution.  There was nothing special about San Francisco back in 1993 and 1994.  But, once things got started, they created a ‘virtuous cycle’ of feedbacks that brought the best-and-brightest to San Francisco to build out the Web.  Wellington is doing that for the film industry; why shouldn’t it stretch out a bit, and invent this next-generation ‘web of things’?

This is where the future is entirely in your hands.  You can leave here today promising yourself to invent the future, to write meaning explicitly onto the real world, to transform our relationship to the universe of objects.  Or, you can wait for someone else to come along and do it.  Because someone inevitably will.  Every day, the pressure grows.  The real world is clamoring to crawl into cyberspace.  You can open the door.

Sharing Power (Global Edition)

My keynote for the Personal Democracy Forum, in New York.

Introduction: War is Over (if you want it)

Over the last year we have lived through a profound and perhaps epochal shift in the distribution of power. A year ago all the talk was about how to mobilize Facebook users to turn out on election day. Today we bear witness to a ‘green’ revolution, coordinated via Twitter, and participate as the Guardian UK crowdsources the engines of investigative journalism and democratic oversight to uncover the unpleasant little secrets buried in the MPs’ expenses scandal – secrets which the British government has done everything in its power to withhold.

We’ve turned a corner. We’re on the downward slope. It was a long, hard slog to the top – a point we obviously reached on 4 November 2008 – but now the journey is all about acceleration into a future that looks almost nothing like the past. The configuration of power has changed: its distribution, its creation, its application. The trouble with circumstances of acceleration is that they go hand-in-hand with a loss of control. At a certain point our entire global culture is liable to start hydroplaning, or worse, will go airborne. As the well-oiled wheels of culture leave the roadbed of civilization behind, we can spin the steering wheel all we want. Nothing will happen. Acceleration has its own rationale, and responds neither to reason nor desire. Force will meet force. Force is already meeting force.

What happens now, as things speed up, is a bit like what happens in the guts of CERN’s Large Hadron Collider. Different polities and institutions will smash and reveal their inner workings, like parts sprung from crashed cars. We can learn a lot – if we’re clever enough to watch these collisions as they happen. Some of these particles-in-collision will recognizably be governments or quasi-governmental organizations. Some will look nothing like them. But before we glory, Ballard-like, in the terrible beauty of the crash, we should remember that these institutions are, first and foremost, the domain of people, individuals ill-prepared for whiplash or a sudden impact with the windshield. No one is wearing a safety belt, even as things slip noticeably beyond control. Someone’s going to get hurt. That much is already clear.

What we urgently need, and do not yet have, is a political science for the 21st century. We need to understand the autopoietic formation of polities, which has been so accelerated and amplified in this era of hyperconnectivity. We need to understand the mechanisms of knowledge sharing among these polities, and how they lead to hyperintelligence. We need to understand how hyperintelligence transforms into action, and how this action spreads and replicates itself through hypermimesis. We have the words – or some of them – but we lack even an informal understanding of the ways and means. As long as this remains the case, we are subject to terrible accidents we can neither predict nor control. We can end the war between ourselves and our times. But first we must watch carefully. The collisions are mounting, and they have already revealed much. We have enough data to begin to draw a map of this wholly new territory.

I: The First Casualty of War

Last month saw an interesting and unexpected collision. Wikipedia, the encyclopedia created by and for the people, decreed that certain individuals and a certain range of IP addresses belonging to the Church of Scientology would hereafter be banned from editing Wikipedia. This directive came from the Arbitration Committee of Wikipedia, which sounds innocuous, but is in actuality the equivalent of the Supreme Court in the Wikipediaverse.

It seems that for some period of time – probably stretching into years – there have been any number of ‘edit wars’ (where edits are made and reverted, then un-reverted and re-reverted, ad infinitum) around articles concerning the Church of Scientology and certain of the personages in the Church. These pages have been subject to fierce edit wars between Church of Scientology members on one side, critics of the Church on the other, and, in the middle, Wikipedians, who attempted to referee the dispute, seeking, above all, to preserve the Neutral Point-of-View (NPOV) that the encyclopedia aspires to in every article. When this became impossible – when the Church of Scientology and its members refused to leave things alone – a consensus gradually formed within the tangled adhocracy of Wikipedia, finalized in last month’s ruling from the Arbitration Committee. For at least six months, several Church of Scientology members are banned by name, and all Church computers are banned from making edits to Wikipedia.

That would seem to be that. But it’s not. The Church of Scientology has been diligent in ensuring that the mainstream media (make no mistake, Wikipedia is now a mainstream medium) do not portray characterizations of Scientology which are unflattering to the Church. There’s no reason to believe that things will simply rest as they are now, that everyone will go off and skulk in their respective corners for six months, like children given a time-out. Indeed, the Chairman of Scientology, David Miscavige, quickly issued a press release comparing the Wikipedians to Nazis, asking, “What’s next, will Scientologists have to wear yellow, six-pointed stars on our clothing?”

How this skirmish plays out in the months and years to come will be driven by the structure and nature of these two wildly different organizations. The Church of Scientology is the very model of a modern religious hierarchy; all power and control flows down from Chairman David Miscavige through to the various levels of Scientology. With Wikipedia, no one can be said to be in charge. (Jimmy Wales is not in charge of Wikipedia.) The whole thing chugs along as an agreement, a social contract between the parties participating in the creation and maintenance of Wikipedia. Power flows in Wikipedia are driven by participation: the more you participate, the more power you’ll have. Power is distributed laterally: every individual who edits Wikipedia has some ultimate authority.

What happens when these two organizations, so fundamentally mismatched in their structures and power flows, attempt to interact? The Church of Scientology uses lawsuits and the threat of lawsuits as a coercive technique. But Wikipedia has thus far proven immune to lawsuits. Although there is a non-profit entity behind Wikipedia, running its servers and paying for its bandwidth, that is not Wikipedia. Wikipedia is not the machines, it is not the bandwidth, it is not even the full database of articles. Wikipedia is a social agreement. It is an agreement to share what we know, for the greater good of all. How does the Church of Scientology control that? This is the question that confronts every hierarchical organization when it collides with an adhocracy. Adhocracies present no control surfaces; they are at once both entirely transparent and completely smooth.

This could all get much worse. The Church of Scientology could ‘declare war’ on Wikipedia. A general in such a conflict might work to poison the social contract which powers Wikipedia, sowing mistrust, discontent and the presumption of malice within a community that thrives on trust, consensus-building and adherence to a common vision. Striking at the root of the social contract which is the whole of Wikipedia could possibly disrupt its internal networks and dissipate the human energy which drives the project.

Were we on the other side of the conflict, running a defensive strategy, we would seek to reinforce Wikipedia’s natural strength – the social agreement. The stronger the social agreement, the less effective any organized attack will be. A strong social agreement implies a depth of social resources which can be deployed to prevent or rapidly ameliorate damage.

Although this skirmish between the Church of Scientology and Wikipedia may never explode into a full-blown conflict, at some point in the future some other organization or institution will collide with Wikipedia, and battle lines will be drawn. The whole of this quarter of the 21st century looks like an accelerating series of run-ins between hierarchical organizations and adhocracies. What happens when the hierarchies find that their usual tools of war are entirely mismatched to their opponent?

II: War is Hell

Even the collision between friendly parties, when thus mismatched, can be devastating. Rasmus Kleis Nielsen, a PhD student in Columbia’s Communications program, wrote an interesting study a few months ago in which he looked at “communication overload”, which he identifies as a persistent feature of online activism. Nielsen specifically studied the 2008 Democratic Primary campaign in New York, and learned that some of the best practices of the Obama campaign failed utterly when they encountered an energized and empowered public.

The Obama campaign encouraged voters to communicate through its website, both with one another and with the campaign’s New York staff. Although New York had been written off by the campaign (Hillary Clinton was sure to win her home state), the state still housed many very strong and vocal Obama supporters (apocryphally, all from Manhattan’s Upper West Side). These supporters flooded into the Obama campaign website for New York, drowning out the campaign itself. As election day loomed, campaign staffers retreated to “older” communication techniques – that is, mobile phones – while Obama’s supporters continued the conversation through the website. A complete disconnection between campaign and supporters occurred, even though the parties had the same goals.

Political campaigns may be chaotic, but they are also very hierarchically structured. There is an orderly flow of power from top (candidate) to bottom (voter). Each has an assigned role. When that structure is short-circuited and replaced by an adhocracy, the instrumentality of the hierarchy overloads. We haven’t yet seen the hybrid beast which can function hierarchically yet interact with an adhocracy. At this point, when the two touch, the hierarchy simply shorts out.

Another example from the Obama general election campaign illustrates this tendency for hierarchies to short out when interacting with friendly adhocracies. Project Houdini was touted as a vast, distributed get-out-the-vote (GOTV) program which would allow tens of thousands of field workers to keep track of who had voted and who hadn’t. Project Houdini was among the most ambitious of the online efforts of the Obama campaign, and was thoroughly tested in the days leading up to the general election. But, once election day came, Project Houdini went down almost immediately under the volley of information coming in from every quadrant of the nation, from fieldworkers thoroughly empowered to gather and report GOTV data to the campaign. A patchwork backup plan allowed the campaign to tame the torrent of data, channeling it through field offices. But the great vision of the Obama campaign, to empower individuals with the capability to gather and report GOTV data, came crashing down, because the system simply couldn’t handle the crush of the empowered field workers.

Both of these collisions happened in ‘friendly fire’ situations, where everyone’s eyes were set on achieving the same goal. But these two systems of organization are so foreign to one another that we still haven’t seen any successful attempt to span the chasm that separates them. Instead, we see collisions and failures. The political campaigns of the future must learn how to cross that gulf. While some may wish to turn the clock back to an earlier time when campaigns respected carefully-wrought hierarchies, the electorates of the 21st century, empowered in their own right, have already come to expect that their candidates’ campaigns will meet them in that empowerment. The next decade is going to be completely hellish for politicians and campaign workers of every party as new rules and systems are worked out. There are no successful examples – yet. But circumstances are about to force a search for solutions.

III: War is Peace

As governments release the vast amounts of data held and generated by them, communities of interest are rising up to work with that data. As these communities become more knowledgeable, more intelligent – hyperintelligent – via this exposure, this hyperintelligence will translate into action: hyperempowerment. This is all well and good so long as the aims of the state are the same as the aims of the community. A community of hyperempowered citizens can achieve lofty goals in partnership with the state. But even here, the hyperempowered community faces a mismatch with the mechanisms of the state. The adhocracy by which the community thrives has no easy way to match its own mechanisms with those of the state. Even with the best intentions, every time the two touch there is the risk of catastrophic collapse. The failures of Project Houdini will be repeated, and this might lead some to argue that the opening up itself was a mistake. In fact, these catastrophes are the first sign of success. Connection is being made.

In order to avoid catastrophe, the state – and any institution which attempts to treat with a hyperintelligence – must radically reform its own mechanisms of communication. Top-down hierarchies which order power precisely cannot share power with a hyperintelligence. The hierarchy must open itself to a more chaotic and fundamentally less structured relationship with the hyperintelligence it has helped to foster. This is the crux of the problem: asking the leopard to change its spots. Only in transformation can hierarchy find its way into a successful relationship with hyperintelligence. But can any hierarchy change without losing its essence? Can the state – or any institution – become more flexible, fluid and dynamic while maintaining its essential qualities?

And this is the good case, the happy outcome, where everyone is pulling in the same direction. What happens when aims differ, when some hyperintelligence for some reason decides that it is antithetical to the interests of an institution or a state? We’ve seen the beginnings of this in the weird, slow war between the Church of Scientology and ANONYMOUS, a shadowy organization which coordinates its operations through a wiki. In recent weeks ANONYMOUS has also taken on the Basij paramilitaries in Iran, and China’s internet censors. ANONYMOUS pools its information, builds hyperintelligence, and translates that hyperintelligence into hyperempowerment. Of course, they don’t use these words. ANONYMOUS is simply a creature of its times, born in an era of hyperconnectivity.

It might be more profitable to ask what happens when some group, working the data supplied at Recovery.gov or Data.gov or you-name-it.gov, learns of something it opposes, then goes to work blocking the government’s activities. In some sense, this is good old-fashioned activism, but it is amplified by the technologies now at hand. That amplification could be seen as a threat by the state; such activism could even be labeled terrorism. Even when this activism is well-intentioned, the mismatch and collision between the power of the state and the power of hyperempowered polities mean that such mistakes will be very easy to make.

We will need to engage in a close examination of the intersection between the state and the various hyperempowered actors rising up over the next few years. Fortunately, the Obama administration, in its drive to make government data more transparent and more accessible (and thereby more likely to generate hyperintelligence around it), has provided the perfect laboratory in which to watch these hyperintelligences as they emerge and spread their wings. Although communications PhD candidates will undoubtedly be watching and taking notes, public policy-makers should also closely observe everything that happens. Since the rules of the game are changing, observation is the first and most necessary step toward a rational future. Examining the pushback caused by these newly emerging communities will give us our first workable snapshot of a political science for the 21st century.

The 21st century will continue to see the emergence of powerful, hyperempowered communities. Sometimes these will challenge hierarchical organizations, as with ANONYMOUS and the Church of Scientology; sometimes they will work with hierarchical organizations, as with Project Houdini; and sometimes it will be very hard to tell what the intended outcomes are. In each case the hierarchy – be it a state or an institution – will have to adapt itself to a new power role, a new sharing of power. In the past, like paired with like: states shared power with states, institutions with institutions, hierarchies with hierarchies. We are leaving this comfortable and familiar time behind, headed into a world where actors of every shape and description find themselves sufficiently hyperempowered to challenge any hierarchy. Even when they seek to work with a state or institution, they present challenges. Peace is war. In either direction, the same paradox confronts us: power must surrender power, or be overwhelmed by it. Sharing power is not an ideal of some utopian future; it’s the ground truth of our hyperconnected world.

Sharing Power (Aussie Rules)

I: Family Affairs

In the US state of North Carolina, the New York Times reports, an interesting experiment has been in progress since the first of February. The “Birds and Bees Text Line” invites teenagers with any questions relating to sex or the mysteries of dating to SMS their question to a phone number. That number connects these teenagers to an on-duty adult at the Adolescent Pregnancy Prevention Campaign. Within 24 hours, the teenager gets a reply to their text. The questions range from the run-of-the-mill – “When is a person not a virgin anymore?” – through the unusual – “If you have sex underwater do u need a condom?” – to the utterly heart-rending – “Hey, I’m preg and don’t know how 2 tell my parents. Can you help?”

The Birds and Bees Text Line is a response to the slow rise in teenage pregnancies in North Carolina since their numbers reached their lowest ebb in 2003. Teenagers – who are given state-mandated abstinence-only sex education in school – now have access to another resource, unmediated by teachers or parents, to help prevent another generation of teenage pregnancies. Although it’s early days yet, the response to the program has been positive. Teenagers are using the Birds and Bees Text Line.

It is precisely because the Birds and Bees Text Line is unmediated by parental control that it has earned the ire of the more conservative elements in North Carolina. Bill Brooks, president of the North Carolina Family Policy Council, a conservative group, complained to the Times about the lack of oversight. “If I couldn’t control access to this service, I’d turn off the texting service. When it comes to the Internet, parents are advised to put blockers on their computer and keep it in a central place in the home. But kids can have access to this on their cell phones when they’re away from parental influence – and it can’t be controlled.”

If I’d stuffed words into a straw man’s mouth, I couldn’t have come up with a better summation of the situation we’re all in right now: young and old, rich and poor, liberal and conservative. There are certain points where it becomes particularly obvious, such as with the Birds and Bees Text Line, but this example simply amplifies our sense of the present as a very strange place, an undiscovered country that we’ve all suddenly been thrust into. Conservatives naturally react conservatively, seeking to preserve what has worked in the past; Bill Brooks speaks for a large cohort of people who feel increasingly lost in this bewildering present.

Let us assume, for a moment, that conservatism were in the ascendant (this is clearly not the case in the United States, though one could make a good argument that the Rudd Government is, in many ways, more conservative than its predecessor). Let us presume that Bill Brooks and the people for whom he speaks could have the Birds and Bees Text Line shut down. Would that, then, be the end of it? Would we have stuffed the genie back into the bottle? The answer, unquestionably, is no.

Everyone who has used or even heard of the Birds and Bees Text Line would be familiar with what it does and how it works. Once demonstrated, it becomes much easier to reproduce. It would be relatively straightforward to take the same functions performed by the Birds and Bees Text Line and “crowdsource” them, sharing the load across any number of dedicated volunteers who might, through some clever software, automate most of the tasks needed to distribute messages throughout the “cloud” of volunteers. Even if it took a small amount of money to set up and get going, that kind of money would be available from donors who feel that teenage sexual education is a worthwhile thing.
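To make that “clever software” a little more concrete, here is a minimal sketch, in Python, of how incoming questions might be fanned out across a pool of on-duty volunteers. Everything in it (the Volunteer class, the dispatch function, the sample names) is hypothetical and purely illustrative; this is not how the Birds and Bees Text Line actually works, and a real crowdsourced version would also need an SMS gateway, moderation, and strong privacy safeguards.

```python
# Hypothetical sketch only: a tiny dispatcher that hands each incoming
# question to the least-loaded on-duty volunteer. A real system would sit
# behind an SMS gateway and add moderation, escalation and privacy controls.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Volunteer:
    name: str
    on_duty: bool = True
    assigned: list[str] = field(default_factory=list)

def dispatch(question: str, volunteers: list[Volunteer]) -> Volunteer | None:
    """Assign a question to the on-duty volunteer with the fewest open questions."""
    available = [v for v in volunteers if v.on_duty]
    if not available:
        return None  # nobody on duty right now: the question would be queued
    chosen = min(available, key=lambda v: len(v.assigned))
    chosen.assigned.append(question)
    return chosen

if __name__ == "__main__":
    pool = [Volunteer("Alice"), Volunteer("Bob"), Volunteer("Carol", on_duty=False)]
    for q in ["When is a person not a virgin anymore?",
              "If you have sex underwater do u need a condom?"]:
        v = dispatch(q, pool)
        print(f"{q!r} -> {v.name if v else 'queued'}")
```

The point of the sketch is simply that the coordination logic is trivial; the hard parts of such a service are the volunteers themselves and the trust placed in them, not the software.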

In other words, the same sort of engine which powers Wikipedia can be put to work across a number of different “platforms”. The power of sharing allows individuals to come together in great “clouds” of activity, and allows them to focus their activity around a single task. It could be an encyclopedia, or it could be providing reliable and judgment-free information about sexuality to teenagers. The form matters not at all: what matters is that it’s happening, all around us, everywhere throughout the world.

The cloud, this new thing, this is really what has Bill Brooks scared, because it is, quite literally, ‘out of control’. It arises naturally out of the human condition of ‘hyperconnection’. We are so much better connected than we were even a decade ago, and this connectivity breeds new capabilities. The first of these capabilities is the pooling and sharing of knowledge – or ‘hyperintelligence’. Consider: everyone who reads Wikipedia is potentially as smart as the smartest person who’s written an article in Wikipedia. Wikipedia has effectively banished ignorance born of want of knowledge. The Birds and Bees Text Line is another form of hyperintelligence, connecting adults with knowledge to teenagers in desperate need of that knowledge.

Hyperconnectivity also means that we can carefully watch one another, and learn from one another’s behaviors at the speed of light. This new capability – ‘hypermimesis’ – means that new behaviors, such as the Birds and Bees Text Line, can be seen and copied very quickly. Finally, hypermimesis means that communities of interest can form around particular behaviors, ‘clouds’ of potential. These communities range from the mundane to the arcane, and they are everywhere online. But only recently have they discovered that they can translate their community into doing, putting hyperintelligence to work for the benefit of the community. This is the methodology of the Adolescent Pregnancy Prevention Campaign. This is the methodology of Wikipedia. This is the methodology of Wikileaks, which seeks to provide a safe place for whistle-blowers who want to share the goods on those who attempt to defraud or censor or suppress. This is the methodology of ANONYMOUS, which seeks to expose Scientology as a ridiculous cult. How many more examples need to be listed before we admit that the rules have changed, that the smooth functioning of power has been terrifically interrupted by these other forces, now powers in their own right?

II: Affairs of State

Don’t expect a revolution. We will not see masses of hyperconnected individuals storming the Winter Palaces of power. This is not a proletarian revolt. It is, instead, rather more subtle and complex. The entire nature of power has changed, as have the burdens of power. Power has always carried with it the ‘burden of omniscience’ – that is, those at the top of the hierarchy have to possess a complete knowledge of everything of importance happening everywhere under their control. Where they lose their grasp of that knowledge, that’s the space where coups, palace revolutions and popular revolts take place.

This new power that flows from the cloud of hyperconnectivity carries a different burden, the ‘burden of connection’. In order to maintain the cloud, and our presence within it, we are beholden to it. We must maintain each of the social relationships, each of the informational relationships, each of the knowledge relationships and each of the mimetic relationships within the cloud. Without that constant activity, the cloud dissipates, evaporating into nothing at all.

This is not a particularly new phenomenon; Dunbar’s Number demonstrates that we are beholden to the ‘tribe’ of our peers, the roughly 150 individuals who can find a place in our heads. In pre-civilization, the cloud was the tribe. Should the members of the tribe interrupt the constant reinforcement of their social, informational, knowledge-based and mimetic relationships, the tribe would dissolve and disperse – as happens when a tribe grows beyond the confines of Dunbar’s Number.

In this hyperconnected era, we can pick and choose which of our human connections deserves reinforcement; the lines of that reinforcement shape the scope of our power. Studies of Japanese teenagers using mobiles and twenty-somethings on Facebook have shown that, most of the time, activity is directed toward a small circle of peers, perhaps six or seven others. This ‘co-presence’ is probably a modern echo of an ancient behavior, presumably related to the familial unit.

While we might desire to extend our power and capabilities through our networks of hyperconnections, the cost associated with such investments is very high. Time invested in a far-flung cloud is time lost on networks closer to home. Yet individuals will often dedicate themselves to some cause greater than themselves, despite the high price paid, drawn to some higher ideal.

The Obama campaign proved an interesting example of the price of connectivity. During the Democratic primary for the state of New York (which Hillary Clinton was expected to win easily), so many individuals contacted the campaign through its website that the campaign itself quickly became overloaded with the number of connections it was expected to maintain. By election day, the campaign staff in New York had retreated from the web, back to using mobiles. They had detached from the ‘cloud’ connectivity they had used the web to foster, instead focusing their connectivity on the older model of the six or seven individuals in co-present connection. The enormous cloud of power which could have been put to work in New York lay dormant, unorganized, talking to itself through the Obama website, but effectively disconnected from the Obama campaign.

For each of us, connectivity carries a high price. For every organization which attempts to harness hyperconnectivity, the price is even higher. With very few exceptions, organizations are structured along hierarchical lines. Power flows from the bottom to the top. Not only does this create the ‘burden of omniscience’ at the highest levels of the organization, it also fundamentally mismatches the flows of power in the cloud. When the hierarchy comes into contact with an energized cloud, the ‘discharge’ from the cloud to the hierarchy can completely overload the hierarchy. That’s the power of hyperconnectivity.

Another example from the Obama campaign demonstrates this power. Project Houdini was touted by the Obama campaign as a system which would get the grassroots of the campaign to funnel their GOTV results into a centralized database, which could then be used to track down individuals who hadn’t voted, in order to offer them assistance in getting to their local polling station. The campaign grassroots received training in Project Houdini, went through a field test of the software and procedures, then waited for election day. On election day, Project Houdini lasted no more than 15 minutes before it crashed under the incredible number of empowered individuals who attempted to plug data into it. Although months in the making, Project Houdini proved that a centralized and hierarchical system for campaign management couldn’t actually cope with the ‘cloud’ of grassroots organizers.

In the 21st century we now have two oppositional methods of organization: the hierarchy and the cloud. Each carries its own costs and its own strengths. Neither has yet proven to be wholly better than the other. One could make an argument that both have their own roles in the future, and that we’ll be spending a lot of time learning which works best in a given situation. What we have already learned is that these organizational types are mostly incompatible: unless very specific steps are taken, the cloud overpowers the hierarchy, or the hierarchy dissipates the cloud. We need to think about the interfaces that can connect one to the other. That’s the area that all organizations – and very specifically, non-profit organizations – will be working through in the coming years. Learning how to harness the power of the cloud will mark the difference between a modest success and an overwhelming one. Yet working with the cloud will present organizational challenges of an unprecedented order. There is no way that any hierarchy can work with a cloud without being fundamentally changed by the experience.

III: Affair de Coeur

All organizations are now confronted with two utterly divergent methodologies for organizing their activities: the tower and the cloud. The tower seeks to organize everything in hierarchies, control information flows, and keep the power heading from bottom to top. The cloud isn’t formally organized, pools its information resources, and has no center of power. Despite all of its obvious weaknesses, the cloud can still transform itself into a formidable power, capable of overwhelming the tower. To push the metaphor a little further, the cloud can become a storm.

How does this happen? What is it that turns a cloud into a storm? Jimmy Wales has said that the success of any language-variant version of Wikipedia comes down to the dedicated efforts of five individuals. Once he spies those five individuals hard at work in Pashto or Kazakh or Xhosa, he knows that edition of Wikipedia will become a success. In other words, five people have to take the lead, leading everyone else in the cloud with their dedication, their selflessness, and their openness. This number probably holds true in a cloud of any sort – find five like-minded individuals, and the transformation from cloud to storm will begin.

At the end of that transformation there is still no hierarchy. There are, instead, concentric circles of involvement. At the innermost, those five or more incredibly dedicated individuals; then a larger circle of a greater number, who work with that inner five as time and opportunity allow; and so on, outward, at decreasing levels of involvement, until we reach those who simply contribute a word or a grammatical change, and have no real connection with the inner circle, except in commonality of purpose. This is the model for Wikipedia, for Wikileaks, and for ANONYMOUS. This is the cloud model, fully actualized as a storm. At this point the storm can challenge any tower.

But the storm doesn’t have things all its own way; to present a challenge to a tower is to invite the full presentation of its own power, which is very rude, very physical, and potentially very deadly. Wikipedians at work on the Farsi version of the encyclopedia face arrest and persecution by Iran’s Revolutionary Guards and religious police. Just a few weeks ago, after the contents of the Australian government’s internet blacklist were posted to Wikileaks, the German government raided the home of the man who owns the domain name for Wikileaks in Germany. The tower still controls most of the power apparatus in the world, and that power can be used to squeeze any potential competitors.

But what happens when you try to squeeze a cloud? Effectively, nothing at all. Wikipedia has no head to decapitate. Jimmy Wales is an effective cheerleader and face for the press, but his presence isn’t strictly necessary. There are over 2000 Wikipedians who handle the day-to-day work. Locking all of them away, while possible, would only encourage further development in the cloud, as other individuals moved to fill their places. Moreover, any attempt to disrupt the cloud only makes the cloud more resilient. This has been demonstrated conclusively by the evolution of ‘darknets’, private file-sharing networks, which grew up as the open and widely available file-sharing networks, such as Napster, were shut down by copyright owners. Attacks on the cloud only improve the networks within the cloud, only make the leaders more dedicated, only increase the information and knowledge sharing within the cloud. Trying to disperse a storm only intensifies it.

These are not idle speculations; the tower will seek to contain the storm by any means necessary. The 21st century will increasingly look like a series of collisions between towers and storms. Each time the storm emerges triumphant, the tower will become more radical and determined in its efforts to disperse the storm, which will only result in a more energized and intensified storm. This is not a game that the tower can win by fighting. Only by opening up and adjusting itself to the structure of the cloud can the tower find any way forward.

What, then, is leadership in the cloud? It is not like leadership in the tower. It is not a position wrought from power, but authority in its other, more primary meaning: ‘to be the master of’. Authority in the cloud is drawn from dedication, or, to use rather more precise language, love. Love is what holds the cloud together. People are attracted to the cloud because they are in love with the aim of the cloud. The cloud truly is an affair of the heart, and these affairs of the heart will be the engines that drive 21st century business, politics and community.

Author and pundit Clay Shirky has stated, “The internet is better at stopping things than starting them.” I reckon he’s wrong there: the internet is very good at starting things that stop things; above all, it is very good at starting things. Making the jump from an amorphous cloud of potentiality to a forceful storm requires the love of just five people. That’s not much to ask. If you can’t get that many people in love with your cause, it may not be worth pursuing.

Conclusion: Managing Your Affairs

All 21st century organizations need to recognize and adapt to the power of the cloud. It’s either that or face a death of a thousand cuts, the slow ebbing of power away from hierarchically-structured organizations as newer forms of organization supplant them. But it need not be this way. It need not be an either/or choice. It could be a future of and-and-and, where both forms continue to co-exist peacefully. But that will only come to pass if hierarchies recognize the power of the cloud.

This means you.

All of you have your own hierarchical organizations – because that’s how organizations have always been run. Yet each of you is surrounded by your own clouds: community organizations (both in the real world and online), bulletin boards, blogs, and all of the other Web2.0 supports for the sharing of connectivity, information, knowledge and power. You are already halfway invested in the cloud, whether or not you realize it. And that’s also true for the people you serve, your customers and clients and interest groups. You can’t simply ignore the cloud.

How then should organizations proceed?

First recommendation: do not be scared of the cloud. It might be some time before you can come to love the cloud, or even trust it, but you must at least move to a place where you are not frightened by a constituency which uses the cloud to assert its own empowerment. Reacting out of fright will only lead to an arms race, a series of escalations where your hierarchy attempts to contain the cloud, and the cloud – which is faster, smarter and more agile than you can ever hope to be – outwits you, again and again.

Second: like likes like. If you can permute your organization so that it looks more like the cloud, you’ll have an easier time working with the cloud. Case in point: because of ‘message discipline’, only a very few people are allowed to speak for an organization. Yet, because of the exponential growth in connectivity and Web2.0 technologies, everyone in your organization has more opportunities to speak for your organization than ever before. Can you release control over message discipline, and empower your organization to speak for itself, from any point of contact? Yes, this sounds dangerous, and yes, there are some dangers involved, but the cloud wants to be spoken to authentically, and authenticity has many competing voices, not a single monolithic tone.

Third, and finally, remember that we are all involved in a growth process. The cloud of last year is not the cloud of next year. The answers that satisfied a year ago are not the same answers that will satisfy a year from now. We are all booting up very quickly into an alternative form of social organization which is only just now spreading its wings and testing its worth. Beginnings are delicate times. The future will be shaped by actions in the present. This means there are enormous opportunities to extend the capabilities of existing organizations, simply by harnessing them to the changes underway. It also means that tragedies await those who fight the tide of the times too single-mindedly. Our culture has already rounded the corner, and made the transition to the cloud. It remains to be seen which of our institutions and organizations can adapt themselves, and find their way forward into sharing power.

Digital Citizenship LIVE

Keynote for the Digital Fair of the Australian College of Educators, Geelong Grammar School, 16 April 2009.