The Social Sense

I: On Top of the World

WebEarth.org image

I’ve always wanted to save the world.  When I was younger, and more messianic, I thought I might have to do it all myself.  As the world knocked sense into me, I began to see salvation as a shared project, a communal task.  I have always had a special vision for that project, one that came to me when I first started working in virtual reality, twenty years ago.  I knew that it would someday be possible for us to ‘see’ the entire world, to apprehend it as a whole.

Virtual reality, and computer visualization in general, is very good at revealing things that we can’t normally see, either because they’re too big, or we’re too large, or they’re too fast, or we’re too quick.  The problem of scale is one at the center of human being: man is the measure of all things.  But where that measuring rod falls short, leaving us unable to apprehend the totality of experience, we live in shadow, part of the truth forever beyond our grasp.

The computer has become microscope, telescope, ultra-high-speed and time-lapse camera.  Using little more than a sharpened needle, we can build atomic-force microscopes, feeling our way across the edges of individual atoms.  Using banks of supercomputers, we crunch through microwave data, painting a picture of the universe in its first microseconds.  We can simulate chemical reactions so fast we had always assumed them to be instantaneous.  And we can speed the ever-so-gradual movement of the continents, making them seem like a dance.

Twenty years ago, when this was more theoretical than commonplace, I realized that we would someday have systems to show us the Earth, just as it is, right in this moment.  I did what I could with the tools I had at my disposal to create something that pointed toward what I imagined, but I have this persistent habit of being ahead of the curve.  What I created – WebEarth – was a dim reflection of what I knew would one day be possible.

In the middle of 1995 I was invited to be a guest of honor at the Interactive Media Festival in Los Angeles.  The festival showcased a number of very high-end interactive projects, including experiments in digital evolution, artificial life, and one project that stopped me in my tracks, a work that changed everything for me.

On a 140cm television screen, I saw a visualization of Earth from space.  Next to the screen, I saw a trackball – inflated to the size of a beachball.  I put my hand on the trackball and spun it around; the Earth visualization followed it, move for move.  That’s nice, I thought, but not really terrifically interesting.  There was a little console with a few buttons arrayed off to one side of the trackball.  When you pressed one of those buttons, you began to zoom in.  Nothing special there, but as you zoomed in, the image began to resolve itself, growing progressively more detailed as you dived down from outside the orbit of the Moon, landing at street level in Berlin, or Tokyo, or Los Angeles.

This was T_Vision, and if it all sounds somewhat unexceptional today, sixteen years ago it took a half-million-dollar graphics supercomputer to create the imagery drawn across that gigantic display, and a high-speed network link to keep it fed with all the real-time data integrated into its visualizations.  T_Vision could show you weather information from anywhere it had been installed, because each installation spoke to the others across the still-new-and-shiny Internet, sharing local data.  The goal was to have T_Vision installations in all of the major cities around the world, so that any T_Vision would be able to render a complete picture of the entire Earth, as it is, in the moment.

That never happened; half a million dollars per city was too big an ask.  But I knew that I’d seen my vision realized in T_Vision, and I expected that it would become the prototype for systems to follow.  I wrote about T_Vision in my book The Playful World, because I knew that these simulations of Earth would be profoundly important in the 21st century: they provide an ideal tool for understanding the impacts of our behavior.

Our biggest problems arise when we fail to foresee the long-term consequences of our actions.  Native Americans once considered ‘the seventh generation’ when meditating on their actions, but long-term planning is difficult in a world of ever-increasing human complexity.  So much depends on so much, everything interwoven into everything else, that it almost seems as though we have only two options: to stand frozen in a static moment which admits no growth, or to charge blithely ahead, and devil take the hindmost.

Two options, until today.  Because today we can pop Google Earth onto our computers or our mobiles and zoom down from space to the waters of Lake Crackenback.  We can integrate cloud cover and radar and rainfall.  And we can do this all on computers that cost just a few hundred dollars, connected to a global Internet with sensors near and far, bringing us every bit of data we might desire.

We have this today, but we live in the brief moment between the lightning and the thunder.  The tool has been given to us, but we have not yet learned how to use it, or what its use will mean.  This is where I want to begin today, because this is a truly new thing: we can see ourselves and our place within the world.  We were blind, but now can see.  In this light we can put to rights the mistakes we made while we lived in darkness.

 

II: All Together Now

A lot has transpired in the past sixteen years.  Computers double in speed or halve in cost every twenty-four months, so the computers of 2011 are fifty times faster, and cost, in relative terms, a quarter of the price.  Nearly everyone uses them in the office, and most homes have at least one, more often than not connected to high-speed broadband Internet, something that didn’t exist sixteen years ago.  Although this is all wonderful and has made modern life a lot more interesting, it’s nothing next to the real revolution that’s taken place.

In 1995, perhaps fifteen or twenty percent of Australians owned mobiles.  They were bulky, expensive to own, expensive to use, yet we couldn’t get enough of them.  By the time of my first visit to Australia, in 1997, just over half of all Australians owned mobiles.  A culture undergoes a bit of a sea-change when mobiles pass this tipping point.  This was proven to me during an evening I’d organized with friends at Sydney’s Darling Harbour.  Half of us met at the appointed place and time, the rest were nowhere to be found.  We could have waited for them to arrive, or we could have gone off on our own, fragmenting the party.  Instead we called, and told them to meet us at a pub on Oxford Street.  Problem solved.  It’s this simple social lubrication (no one is late anymore, just delayed) which makes mobiles intensely desirable.

In 2011, the mobile subscription rate in Australia is greater than 115%.  This figure seems ridiculous until you account for the number of individuals who have more than one mobile (one for work and one for personal use), or some other device – such as an iPad – that connects to wireless 3G broadband.  Children don’t get their first mobile until around grade 3 (or later), and a lot of seniors have skipped the mobile entirely.  But the broad swath of the population between 8 and 80 all have a mobile or two – or more.

Life in Australia is better for the mobile, but the change here doesn’t hold a candle to its impact in the developing world.  From fishermen on the Kerala coast of India, to vegetable farmers in Kenya, to barbers in Pakistan, the mobile creates opportunities for every individual connected through it, opportunities which quickly translate into economic advantage.  Economists have established a strong correlation between the aggregate connectivity of a nation and its economic growth.  Connected individuals earn more; so do connected nations.

Because the mobile means money, people have eagerly adopted it.  This is the real transformation over the last sixteen years.  Over that time we went from less than a hundred million mobile subscribers to somewhere in the range of six billion.  There are just under seven billion people on Earth, and even accounting for those of us who have more than one subscription, this means three-quarters of all humanity now uses a mobile.  As in Australia, the youngest and the very oldest are exempt, but as we become a more urban civilization – over half of us now live in cities – the pace and coordination of urban life is set by the mobile.

 

III:  I, Spy

The lost iPad, found

We live in a world of mobile devices.  They’re in hand, tucked in a pocket, or tossed into a handbag, but sometimes we leave them behind.  At the end of a long business trip, on a late-night flight back to Sydney, I left my iPad in the seatback pocket of an aircraft.  I didn’t discover this for eighteen hours, until I unpacked my bags and noted it had gone missing.  “Well, that’s it,” I thought.  “It’s gone for good.”  Then I remembered that Apple offers a feature on their iPhones and iPads, through their Me.com website, that lets you locate lost devices.  I figured I had nothing to lose, so I launched the site, waited a few moments, then found my iPad.  Not just the city, or the suburb, but down to the neighborhood and street and house – even the part of the house!  There it was, on Google’s high-resolution satellite imagery, phoning home.

What to do?  The neighborhood wasn’t all that good – next to Mount Druitt in Sydney’s ‘Wild West’ – so I didn’t fancy ringing the bell and asking politely.  Instead I phoned the police, who came by to take a report.  When they asked how I knew where my iPad was, I showed them the website.  They were gobsmacked.  In their perfect world, no thief could ever make off with anything, because the stolen goods would report their every movement to the owner and the police.

I used another feature of ‘Find my iPad’ to send a message to its display: “Hello, I’m lost!  Please return me for a reward.”  About 36 hours later I received an email from the fellow who had ended up with my iPad (his mother cleans aircraft), offering to return it.  The next day, in a scene straight from a Cold War-era spy movie, we met on a street corner in Ultimo.  He handed me my iPad, I thanked him and handed him a reward, then we each went our separate ways.

Somewhere in the middle of this drama, I realized that I possessed the first of many intelligent, trackable devices to come.  In the beginning they’ll look like mobiles, tablets and computers, but eventually they’ll look like absolutely anything you like.  This is the kind of high technology favored by ‘Q’ in James Bond movies and by the CIA in covert operations, but it has always been expensive.  Now it’s cheap, easy to use, and tiny.

I tend to invent things after I have that kind of brainwave, so I immediately dreamed up a ‘smart’ luggage tag that you’d clip onto your baggage when you check in at the terminal.  If your baggage gets lost, it can ‘phone home’ to let you know just where it’s ended up – information you can give to your airline.  Or you could put one in your car, so you can figure out just where you left it in that vast parking lot.  Or hang one on your child as you go out into a crowded public place.  A group of very smart Sydney engineers had already shown me something similar – Tingo Family – which uses the tracking capabilities of smartphones to create that sort of service.  But smartphones are expensive, and overkill; couldn’t this cost a lot less?

I did some research on my favorite geek websites, and found that I could build something similar from off-the-shelf parts for about $150.  That sounds expensive, but that’s because I’m purchasing in single-unit quantities.  When you purchase 10,000 of something electronic, they cost far less per unit.  I’m sure something could be put together for less than fifty dollars that would have the two necessary components: a GPS receiver, and a 3G mobile broadband connection.  With those two pieces, it becomes possible to track anything, anywhere you can get a signal – which, in 2011, is most of the planet.

To track something – and talk to it – costs fifty dollars today, but, like clockwork, every twenty-four months that cost falls by fifty percent.  In 2013, it’s $25.00, in 2015 it’s $12.50, and so on, so that ten years from now it’s only a bit more than a dollar.  Eventually it becomes almost free.
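That compounding is simple enough to check.  Here’s a minimal sketch – in Python, purely for illustration, assuming exactly the fifty-dollar 2011 starting price and the halving-every-twenty-four-months rule described above:

    # Illustrative only: a $50 tracker in 2011 whose cost halves
    # every twenty-four months, as described above.
    def projected_cost(start_cost, start_year, target_year, halving_period=2):
        """Halve start_cost once for each halving period elapsed."""
        halvings = (target_year - start_year) / halving_period
        return start_cost / (2 ** halvings)

    for year in (2011, 2013, 2015, 2021):
        print(year, round(projected_cost(50.0, 2011, year), 2))
    # 2011 50.0, 2013 25.0, 2015 12.5, 2021 1.56 - "a bit more than a dollar"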

This is the world we will be living in.  Anything of any importance to us – whether expensive or cheap as chips – will be sensing, listening, and responding.  Everything will be aware of where it is, and where it should be.  Everything will be aware of the temperature, the humidity, the light level, the altitude, its energy consumption, and the other things around it which are also aware of the temperature, humidity, light level, altitude, energy consumption, and other things around them.

This is the sensor revolution, which is sometimes called ‘the Web of things’ or ‘Web 3.0’.  We can see it coming, even if we can’t quite see what happens once it comes.  We didn’t understand that mobiles would help poor people earn more money until everyone, everywhere got a mobile.  These things aren’t easy to predict in advance, because they are the product of complex interactions between people and circumstances.  Even so, we can start to see how all of this information provided by our things feeds into our most innate human characteristic – the need to share.

 

IV: Overshare

Last Thursday I was invited to the launch of the ‘Imagine Cup’, a Microsoft-sponsored contest where students around the world use technology to develop solutions for the big problems facing us.  At the event I met the winners of the 2008 Imagine Cup, two Australians – Ed Hooper and Long Zheng.  They told me about their winning entry, Project SOAK.  That stands for Smart Operational Agriculture Kit.  It’s essentially a package of networked sensors and software that a farmer can use to know precisely when land needs water, and where.  Developed in the heart of the drought, Project SOAK is an innovative answer to the permanent Australian problem of water conservation.

I asked them how much these sensors cost, back in 2008.  To measure temperature, rainfall, dam depth, humidity, salinity and moisture would have cost around fifty dollars.  Fifty dollars in 2008 is about one dollar in 2020.  At that price point, a large farm, with thousands of hectares, could be covered with SOAK sensors for just a few tens of thousands of dollars, but would save the farmer water, time, and money for many years to come.  The farmer would be able to spread eyes over all of their land, and the computer, eternally vigilant, would help the farmer grind through the mostly-boring data spat out by these thousands of eyes.
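That extrapolation follows the same halving rule sketched earlier: twelve years is six halvings, so fifty 2008 dollars come to roughly seventy-eight cents by 2020.  A back-of-the-envelope check, reusing the hypothetical projected_cost function from the sketch above – not a figure from Project SOAK itself:

    # $50 in 2008, halving every two years, is about $0.78 in 2020.
    print(round(projected_cost(50.0, 2008, 2020), 2))  # -> 0.78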

That’s a snapshot of the world of 2020, a snapshot that will be repeated countless times, as sensors proliferate throughout every part of our planet touched by human beings: our land and our cities and our vehicles and our bodies.  Everything will have something listening, watching, reporting and responding.

We can already do this, even without all of this cheap sensing, because our connectivity creates a platform where we as ‘human sensors’ can share the results of our observations.  Just a few weeks ago, a web-based project known as ‘Safecast’ launched.  Dedicated to observing and recording radiation levels around the Fukushima nuclear reactor – which melted down following the March 11, 2011 earthquake and tsunami – Safecast invites individuals throughout Japan to take regular readings of the ‘background’ radiation, then post them to the Safecast website.  These results are ‘mashed up’ with Google Maps, and presented for anyone to explore, both as current readings and as a historical record of radiation levels over time in a particular area.
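To make that pattern concrete, here is a minimal sketch of what a volunteer’s submission might look like: a timestamped, geotagged reading posted to a shared web service, which then plots it on a map.  The endpoint URL and field names below are invented for illustration – this is not Safecast’s actual API:

    # Hypothetical sketch of posting one crowdsourced radiation reading.
    # The endpoint and field names are invented, not Safecast's real API.
    import json
    from datetime import datetime, timezone
    from urllib.request import Request, urlopen

    reading = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "latitude": 37.4211,    # a point in Fukushima prefecture
        "longitude": 141.0328,
        "value": 0.12,          # microsieverts per hour, from a handheld counter
        "unit": "uSv/h",
    }

    request = Request(
        "https://example.org/api/measurements",  # hypothetical endpoint
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(request) as response:
        print(response.status)  # the service maps each reading it receives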

Safecast exists because the Japanese government has failed to provide this information to its own people (perhaps to avoid unduly alarming them); it fills that gap in public knowledge by ‘crowdsourcing’ the sensing task across thousands of willing participants.  People, armed with radiation dosimeters and Geiger counters, are the sensors.  People, typing their observations into computers, are the network.  Everything that we will soon be able to do automatically we can already do by hand, if there is sufficient need.

Necessity is the mother of invention; need is the driver for innovation.  In Japan they collect data about soil and water radiation, to save themselves from cancer.  In the United States, human sensors collect data about RBT checkpoints, to save themselves from arrest.  You can purchase a smartphone app that allows anyone to post the location of an RBT checkpoint to a crowdsourced database.  Anyone else with the app can launch it and see how to avoid being caught drink driving.  Although we may find the morality disagreeable, the need is there, and an army of human sensors sets to work to meet that need.

Now that we’re all connected, we’ve found that connectivity is more than just keeping in touch with family, friends and co-workers.  It brings an expanded awareness, as each of us shares the points of interest peculiar to our tastes.  In the beginning, we shared bad jokes, cute pictures of kittens, and chain letters.  But we’ve grown up, and as we’ve matured, our sharing has taken on a focus and depth that gives it real power: people share what they know to fill the articles of Wikipedia, read their counters and plug results into Safecast, spot the coppers and share that around too – as they did in the central London riots in February.

It’s uncontrollable, it’s ungovernable, but all this sharing serves a need.  This is all human potential that’s been bottled up, constrained by the lack of connectivity across the planet.  Now that this barrier is well and truly down, we have unprecedented capability to pool our eyes, ears and hands, putting ourselves to work toward whatever ends we might consider appropriate.

Let’s give that some thought.

 

V:  Mother Birth

To recap: six billion of us now have mobiles, keeping us in close connection with one another.  This connectivity creates a platform for whatever endeavors we might choose to pursue, from the meaningless, to the momentary, to the significant and permanent.  We are human sensors, ready to observe and report upon anything we find important; chances are that if we find something important, others will as well.

All of that human activity is colliding head-on with the sensor revolution, as electronics become smaller and smarter, leading eventually to a predicted ‘smart dust’ where sensors become a ubiquitous feature of the environment.  We are about to gain a certain quality of omnipresence; where our sensors are, our minds will follow.   We are everywhere connected, and soon will be everywhere aware.

This awareness grants us the ability to see the consequences of our activities.  We can understand why burning or digging or watering here has an effect there, because, even in a complex ecosystem, we can trace the delicate connections that outline our actions.  The computer, with its infinitely patient and infinitely deep memory, is an important partner in this task, because it helps us to detect and illustrate the correlations that become a new and broader understanding of ourselves.

This is not something restricted to the biggest and grandest challenges facing us.  It begins more humbly and approachably with the minutiae of everyday life: driving the car, using the dishwasher, or organizing a ski trip.  These activities no longer exist in isolation, but are recorded and measured and compared: could that drive be shorter, that wash cooler, that ski trip more sustainable?  This transition is being driven less by altruism than by economics.  Global sustainability means preserving the planet, but individual sustainability means a higher quality of life with lower resource utilization.  As that point becomes clear – and once there is sufficient awareness infrastructure to support it – sustainability becomes another ‘on tap’ feature of the environment, much as electricity and connectivity are today.

This will not be driven by top-down mandates.  Although our government is making moves toward sustainability, market forces will drive us to sustainability as the elements of the environment become continually more precious.  Intelligence is a fair substitute for almost any other resource – up to a point.  A car won’t run on IQ alone, but it will go a lot further on a tank of petrol if intelligently designed.

We can do more than act as sensors and share data: we can share our ideas, our frameworks and solutions for sustainability.  We have the connectivity – any innovation can spread across the entire planet in a matter of seconds.  This means that six billion minds could be sharing – should be sharing – every tip, every insight, every brainwave and invention – so that the rest of us can have a go, see if it works, then share the results, so others can learn from our experiences.  We have a platform for incredibly rapid learning, something that can springboard us into new ways of working.  It works for fishermen in India and farmers in Africa, so why not for us?

Australia is among the least sustainable nations on the planet.  Our vast per-person carbon footprint, our continual overuse of our limited water supplies, and our refusal to employ the bounty of renewable resources which nature has provided us make our country a bit of an embarrassment.  We have created a nation that is, in most respects, the envy of the world.  But we have built that nation on unsustainable practices – a house built on sand – and within a generation or two, it will stand no longer.

Australia is a smart nation, intelligent and well-connected.  There’s no problem here we cannot solve, no reach toward sustainability which is beyond our grasp.  We now have the tools; all we need is the compelling reason to think anew, revisiting everything we know with fresh eyes, eyes aided by many others, everywhere, and many sensors, everywhere, all helping us to understand, and from that understanding, to act, and from those actions, to learn, and from that learning, to share.

We are the sharing species; the reason we can even worry about a sustainable environment is because our sharing made us so successful that seven billion of us have begun to overwhelm the natural world.  This sharing is now opening an entirely new and unexpected realm, where we put our mobiles to our ears and put our heads together to have a good think, to share a thought, or tell a yarn.  Same as it ever was, but completely different, because this is no tribe, or small town, or neighborhood, but everybody, everywhere, all together now.  Where we go from here is entirely in our own hands.

People Power

Introduction: Magic Pudding

To effect change within governmental institutions, you need to be conscious of two important limits.  First, resources are always at a premium; you need to work within the means provided.  Second, regulatory change is difficult and takes time.  When these limitations are put together, you realize that you’ve been asked to cook up a ‘magic pudding’.  How do you work this magic?  How do you deliver more for less without sacrificing quality?

In any situation where you are being asked to economize, the first and most necessary step is to conduct an inventory of existing assets.  Once you know what you’ve got, you gain an insight into how these resources could be redeployed.  On some occasions, that inventory returns surprising results.

There’s a famous example, from thirty years ago, involving Disney.  At that time, Disney was a nearly-bankrupt family entertainment company.  Few went to see its films; the firm’s only substantial income came from its theme parks and character licensing.  In desperation, Disney’s directors brought on Michael Eisner as CEO.  Would Eisner need to sell Disney at a rock-bottom price to another entertainment company, or could it survive as an independent firm?  First things first: Eisner sent his right-hand man, Frank Wells, off to do an inventory of the company’s assets.  There’s a vault at Disney, where they keep the master prints of all of the studio’s landmark films: Snow White and the Seven Dwarfs, Pinocchio, Peter Pan, Bambi, One Hundred and One Dalmatians, The Jungle Book, and so on.  When Wells walked into the vault, he couldn’t believe his eyes.  Every few minutes he called Eisner at his desk to report, “I’ve just found another hundred million dollars.”

Disney had the best library of family films created by any studio – but kept them locked away, releasing them theatrically at multi-year intervals designed to keep them fresh for another generation of children.  That worked for forty years, but by the mid-1980s, with the VCR moving into American homes, Eisner knew more money could be made by taking these prize assets and selling them to every family in the nation – then the world.  That rediscovery of locked-away assets was the beginning of the modern Disney, today the most powerful entertainment brand on the planet.

When I began to draft this essay, I felt as constrained as Disney, pre-Eisner.  How do you bake a magic pudding?  Eventually, I realized that we actually have incredible assets at our disposal, ones which didn’t exist just a few years ago. Let’s go on a tour of this hidden vault.  What we now have available to us, once we learn how to use it, will change everything about the way we work, and the effectiveness of our work.

 

I: What’s Your Number?

The latest surveys put the mobile subscription rate in Australia between 110% and 115%.  Clearly, this figure is a bit misleading: we don’t give children mobiles until they’re around eight years old, nor do the most senior of seniors own them in great numbers.  The vast middle, from eight to eighty, do have mobiles.  Many of us have more than one mobile – or some other device, like an iPad, which uses a mobile connection for wireless data.  This all adds up.  Perhaps one adult in fifty refuses to carry a mobile around with them most of the time, so out of a population of nearly 23 million, we have about 24 million mobile subscribers.

This all happened in an instant; mobile ownership was below 10% in 1993, but by 1997 Australia had passed 50% saturation.  We never looked back.  Today, everyone has a number – at least one number – where they can be reached, all the time.  Although Australia has had telephones for well over a hundred years, a mobile is a completely different sort of device.

A landline connects you to a place: you ring a number attached to a specific telephone in a specific location.  A mobile connects you to a person.  On those rare occasions when someone other than a mobile’s owner answers it, we experience a moment of great confusion.  Something is deeply disturbing about this, a bit like body-snatching.  The mobile is the person; the person is the mobile.  When we leave the mobile at home – because we were rushed or tired, or it was temporarily misplaced – we feel considerably more vulnerable.

The mobile is the lifeline which connects us into our community: our family, our friends, our co-workers.  This lifeline is pervasive and continuous.  All of us are ‘on call’ these days, although nearly all of the time this feels more like a relief than a burden.  When the phone rings at odd hours, it’s not the boss, but a friend or family member who needs some help.  Because we’re continuously connected, that help is always there, just ten digits away. We’ve become very attached to our mobiles, not in themselves, but because they represent assistance in its purest form.

As a consequence, we are away from our mobiles less and less; they spend the night charging on our bedside tables, and the days in our pockets or purses.

Last year, a young woman approached me after a talk, and said that she couldn’t wait until she could have her mobile implanted beneath her skin, becoming a part of her.  I asked her how that would be any different than the world we live in today.

This is life in modern Australia, and we’re not given to think about it much, except when we ponder whether we should be texting while we drive, or feel guilty about checking emails when we should really be listening to our partner.  This constant connectivity forms a huge feature of the landscape, a gravitational body which gently lures us toward it.

This connectivity creates a platform – just like a computer’s operating system – for running applications.  These applications aren’t software, they’re ‘peopleware’.  For example, fishermen off India’s Kerala coast call around before they head into port, looking for the markets most in need of their catch.  Farmers in Kenya make inquiries to their local markets, looking for the best price for their vegetables.  Barbers in Pakistan post a sign with their mobile number, buy a bicycle, and travel to cut their clients’ hair in their homes.  The developing world has latched onto the mobile because it makes commerce fluid, efficient, and much more profitable.

If the mobile does that in India and Kenya and Pakistan, why wouldn’t it do the same thing for us, here in Australia?  It does lubricate our social interactions: no one is late anymore, just delayed.  But we haven’t used the platform to build any applications to leverage the brand-new fact of our constant connectivity.  We can give ourselves a pass, because we’ve only just gotten here.  But now that we are here, we need to think hard about how to use what we’ve got.  This is our hundred-million-dollar moment.

 

II: Sharing is Daring

A few years ago, while I waited at the gate for a delayed flight out of San Francisco International Airport, I grew captivated by the information screens mounted above the check-in desks.  They provided a wealth of information that wasn’t available from airline personnel; as my flight changed gates and aircraft, I learned of this by watching the screen.  At one point, I took my mobile out of my pocket and snapped a photo of the screen, sharing the photo with my friends, so they could know all about my flying troubles.  After I’d shot a second photo, a woman approached me, and carefully explained that she had been talking to another passenger on our delayed flight, a woman who worked for the US Government, and that this government employee thought my actions looked very suspicious.

Taking photos in an airport is cause for alarm in some quarters.

After I got over my consternation and surprise, I realized that this paranoid bureaucrat had a point. With my mobile, I was breaching the security cordon carefully strung around America’s airports.  It pierced the veil of security which hid the airport from the view of all except those who had been carefully screened.  We see this same sensitivity at the Immigration and Customs facilities at any Australian airport – numerous signs inform you that you’re not allowed to use your mobile.  Communication is dangerous.  Connecting is forbidden.

We tend to forget that sharing information is a powerful act, because it’s so much a part of our essential nature as human beings.

In November, Wikileaks shared a massive store of information previously held by the US State Department; just one among a quarter million cables touched off a revolt in Tunisia, leading to revolutions in Egypt, Bahrain, Yemen, Libya, Syria and Jordan.  Sharing changes the world.  Actually, sharing is the foundation of the human world.  From the moment we are born, we learn about the world because everyone around us shares with us what they know.

Suddenly, there are no boundaries on our sharing.  All of us, everywhere – nearly six billion of us – are only a string of numbers away.  Type them in, wait for an answer, then share anything at all.  And we do this.  We call our family to tell them we’re ok, our friends to share a joke, and our co-workers to keep coordinated.  We’ve achieved a tremendously expanded awareness and flexibility that’s almost entirely independent of distance.  That’s the truth at the core of this hundred-million dollar moment.

All of your clients, all of your patients, all of your stakeholders – and all of you – are all unbelievably well connected.  By the standards of just a generation ago, we are all continuously available.  Yet we still organize our departments and deliver our services as if everyone were impossibly far-flung, hardly ever in contact.

Still, the world is already busy reorganizing itself to take advantage of all this hyperconnectivity.

I’ve already mentioned the fishermen and the farmers, but as I write this, I’ve just read an article titled “US Senators call for takedown of iPhone apps that locate DUI (RBT) checkpoints.”  You can buy a smartphone app which allows you to report on a checkpoint, posting that report to a map which others can access through the app.  You could conceivably evade the long arm of the law with such an app, drink driving around every checkpoint with ease.

Banning an app like this simply won’t work.  There are too many ways to do this, from text messages to voice mail to Google Maps to smartphone apps.  There’s no way to shut them all down.  If the Senate passes a law to prevent this sort of thing – and they certainly will try – they’ll find that they’ve simply moved all of this connectivity underground, into ‘darknets’ which evade detection.

This is how potent sharing can be.  We all want to share.  We have a universal platform for sharing.  We must decide what we will share.  When people get onto email for the first time, they tend to bombard their friends and family with an endless stream of bad jokes and cute photographs of kittens and horribly dramatic chain letters.  Eventually they’ll back off a bit – either because they’ve learned some etiquette, or because a loved one has told them to buzz off.

You also witness that exuberant sharing in teenagers, who send and receive five hundred text messages a day.  When this phenomenon was spotted, in Tokyo, a decade ago, many thought it was simply a feature peculiar to the Japanese.  Today, everywhere in the developed world, young people send a constant stream of messages which generally say very little at all.  For them, it’s not important what you share; what is important is that you share it.  You are the connections, you are the sharing.

That’s great for the young – some have suggested that it’s an analogue to the ‘grooming’ behavior we see in chimpanzees – but we can wish for more than a steady stream of ‘hey’ and ‘where r u?’  We can share something substantial and meaningful, something salient.

That salience could be news of the nearest RBT checkpoint, or, rather more helpfully, it might be a daily audio recording of the breathing of someone suffering from Chronic Obstructive Pulmonary Disease.  It turns out that just a few minutes spent listening to the sufferer – at home, in front of a computer, or, presumably, their smartphone – will cut their hospitalizations in half, because smaller problems can be diagnosed and treated before they become life-threatening.  A trial in Tasmania demonstrated this conclusively; it’s clear that using this connection to listen to the patient can save lives, dollars, and precious time.

This is the magic pudding, the endless something from nothing.  But nothing is ever truly free.  There is a price to be paid to realize the bounty of connectivity.  Our organizations and relations are not structured to advantage themselves in this new environment, and although it costs no money and requires no changes to the law, transforming our expectations of our institutions – and of one another – will not be easy.

 

III:  Practice Makes Perfect

To recap: Everyone is connected, everyone has a mobile, everyone uses them to maintain continuous connections with the people in their lives.  This brand-new hyperconnectivity provides a platform for applications.

The first and most natural application of connectivity is sharing, an activity that begins with the broad and unfocused but moves to the specific and salient as we mature in our use of the medium.  This maturation is both individual and institutional, though at the present time individuals greatly outpace any institution in both their agility with and understanding of these new tools.

Our lives online are divided into two separate but unequal spheres; this is a fundamental dissonance of our era.  Teenagers send hundreds of text messages a day, aping their parents, who furiously respond to emails sent to their mobiles while posting Twitter updates.  But all of this is happening outside the institution,  or, in a best practice scenario, serves to reinforce the existing functionality of the institution.  We have not rethought the institution – how it works, how it faces its stakeholders and serves its clients – in the light of hyperconnectivity.

This seems too alien to contemplate – even though we are now the aliens.  We live in a world of continuous connection; it’s only when we enter the office that we temper this connection, constraining it to meet the needs of organizational process.

If we can develop techniques to bring hyperconnectivity into the organization, to harness it institutionally, we can bake that magic pudding.  Hyperconnectivity provides vastly greater capability at no additional cost.  It’s an answer to the problem.  It requires no deployment, no hardware, no budgeting or legislative mandates.  It only requires that we more fully utilize everything we’ve already got.

To do that, we must rethink everything we do.

Service delivery in health is something that is notoriously not scalable.  You must throw more people at a service to get more results.  All the technology and process management in the world won’t get you very far.  You can make systems more efficient, but you can’t make them radically more effective.  This has become such a truism in the health care sector that technology has become almost an ironic punchline within the field.  So much was promised, and so much of it consistently under-delivered, that most have become somewhat cynical.

There are no magic wands to wave around, to make your technology investments more effective.  This isn’t a technology-led revolution, although it does require some technology.  This is a revolution in relationship, a transformation from clients and customers into partners and participants. It’s a revolution in empowerment, led by highly connected people sharing information of vital importance to them.

How does this work in practice?  The COPD ‘Pathways’ project in Tasmania points the way toward one set of services, which aim to use connectivity to monitor progress and wellness.  Could this be extended to individuals with chronic asthma, diabetes, high blood pressure, or severe arthritis?  If one is connected, rather than separate, if one is in constant communication, rather than touching base for widely-spaced check-ins, then there will be a broad awareness of patient health within a community of carers.

The relationship is no longer one-way, pointing the patient only toward the health services provider.  It becomes multilateral, multifocal, and multiparticipatory.  This relationship becomes the meeting of two networks: the patient’s network of family, friends and co-afflicted, meeting the health network of doctors and nurses, generalists and specialists, clinicians and therapists.  The meeting of these two continuous always-on networks forms another continuity, another always-on network, focused around the continuity of care.

If we tried to do something like this today, with our present organizational techniques, the health service providers would quickly collapse under the burden of the additional demands on their time and connectivity required to offer such continuity in patient care.  Everything currently points toward the doctor, who is already overworked and impossibly time-poor.  Amplifying the connection burden for the doctor is a recipe for disaster.

We must build upon what works, while restructuring these relationships to reflect the enhanced connectivity of all the parties within the healthcare system.  Instead of amplifying the burden, we must use the platform of connectivity to share the load, to spread it out across many shoulders.

For example, consider the hundreds of thousands of carers looking after Australians with chronic illnesses and disabilities.  These carers are the front line.  They understand the people in their care better than anyone else – better even than the clinicians who treat them.  They know when something isn’t quite right, even though they may not have the language for it.

At the moment Australia’s carers live in a world apart from the various state health care systems, and this means that an important connection between the patient and that system is lacking.  If the carer were connected to the health care system – via a service that might be called CarerConnection – there would be better systemic awareness of the patient, and a much greater chance to catch emerging problems before they require drastic interventions or hospitalizations.

These carers, like the rest of Australia, already have mobiles.  Within a few years, all those mobiles will be ‘smart’, capable of snapping a picture of a growing rash, or a video of someone’s unsteady gait, ready to upload it to anyone prepared to listen.  That’s the difficult part of this equation, because at present the health care system can’t handle inquiries from hundreds of thousands of carers, even if it frees up doctors’ surgeries and hospital beds.

Perhaps we can employ nurses on their way to a gradual retirement – in the years beyond age 65 – to connect with the carers, using them to triage, escalating or reassuring as necessary.  In this way Australia empowers its population of carers, creates a better quality of life for those they care for, and moves some of the burden for chronic care out of the health care system.

That kind of innovative thinking – which came from workshops in Bendigo and Ballarat – shows the real value of connectivity in practice.  But that’s just the beginning.  This type of innovation would apply equally well to substance abuse recovery programs, mesothelioma, or cystic fibrosis.  And beyond health care, it applies just as readily to education and city management.

This is good old-fashioned ‘people power’ as practiced in every small town in Australia, where everyone knows everyone else, looks out for everyone else, and is generally aware of everyone else.  What’s new is that the small town is now everywhere, whether in Camperdown or Bendigo or Brunswick, because the close connectivity of the small town has come to us all.

The aging of the Australian population will soon force changes in service delivery.  Some will see this as a clarion call for cutbacks, a ‘shock doctrine‘, rather than an opportunity to re-invent the relationships between service providers and the community.   This slowly unfolding crisis provides our generation’s best chance to transform practices to reflect the new connectivity.

It’s not necessary to go the whole distance overnight.  This is all very new, and examples of how to make connectivity work within healthcare are still thin on the ground.  Experimentation and sharing are the orders of the day.  If each regional area in Victoria started up one experiment – a project like CarerConnection – then shared the results of that experiment with the other regions, there’d soon be a virtual laboratory of different sorts of approaches, with the possibility of some big successes, and, equally, the chance of some embarrassing failures.  Yet the rewards greatly outweigh any risks.

If this is all done openly, with patients and their community fully involved and fully informed, even the embarrassments will not sting – very much.

In order to achieve more with less, we must ask more of ourselves, approaching our careers with the knowledge that our roles will be rewritten.  We must also ask more of those who come forward for care.  They grew up in the expectation of one sort of relationship with their health services providers, but they’re going to live their lives in another sort of arrangement, one which blurs boundaries and which will feel very different – sometimes, more invasive.  Privacy is important, but to be cared for means to surrender some of it, so we must come to expect that we will negotiate our need for privacy against the help we seek.

The magic pudding isn’t really that magic. The recipe calls for a lot of hard work, a healthy dash of risk taking, a sprinkle of experiments, and even a few mistakes.  What comes out of the oven of innovation (to stretch a metaphor beyond its breaking point) will be something that can be served up across Victoria, and perhaps across the nation.  The solution lies in people connected, transformed into people power.

The Soul of Web 2.0

Introduction: In The Beginning

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus Pagemaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful.  But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’.  Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century.  Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web.  Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real.  I was one of them.  From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu.  This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web.  It wasn’t as functional as Xanadu – copyright management was a solved problem in Xanadu, whereas on the Web it continues to bedevil us – and Xanadu’s links were two-way affairs: you could follow a link from its destination back to its source.  But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died.  The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released.  ‘The Perfect is the Enemy of the Good’, and nowhere is that clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s.  When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits to 56 kilobits per second.  That was the line for all of the traffic heading from one coast to the other.  I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, a more than million-fold increase.  And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand.  And I wasn’t impressed.  In July 1993 very little content existed for the Web – just a handful of sites, mostly academic.  Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense.  I walked away from the computer that July afternoon wanting more.  Hypertext systems I’d seen before.  What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy.  Instead of a handful of sites, there were now hundreds.  There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site in the list.  By Friday evening I was finished.  I had surfed the entire Web.  It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993.  Then things began to explode.

From October on I became a Web evangelist.  My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic.  That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities.  As people I knew walked in the door at the Salon, I’d walk up to them and take them over to my computer.  “What’s something you’re interested in?” I’d ask.  They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica”, and I’d go to the newly-created category index of the Web – known as Yahoo!, and still running out of a small lab on the Stanford University campus – type in their interest, and up would come at least a few hits.  I’d click on one, watch the page load, and let them read.  “Wow!” they’d say.  “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos.  All I did was hook people by their own interests.  What happened in January 1994 in San Francisco happened throughout the world in January 1995 and January 1996, and it is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented.  The individual, with their needs, their passions, their opinions, their desires and their goals, is always paramount.  We tend to forget this, or overlook it, or just plain ignore it.  We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate.  It’s not that we should ignore these considerations, but they are always secondary.  The Web is a ground for being.  Individuals do not present themselves as receptacles to be filled.  They are souls looking to be fulfilled.  This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web.  I’ll now break some of these lessons down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication.  I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late-night missives from insomniac friends, that sort of thing.  As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared.  The sharing instinct is innate and immediate.  We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it.  We’ve always known this; it’s part of being a human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another.  It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby.  Everyone carries that hundred and fifty around inside of them.  Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with.  It’s automatic, requires no thought.  We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing.  Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten.  We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us.  It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does that work with creations that may be Web-based, or similarly constrained?  We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter.  You have to do more than request sharing.  You have to think through the entire goal of sharing, from the user’s perspective.  Are they sharing this because it’s interesting?  Are they sharing this because they want company?  Are they sharing this because it’s a competition or a contest or collaborative?  Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project.  What is it about the design of your work that excites them to share it with others?  Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential?  In other words, is there space only for one, or is there room to spread the word?  Why would anyone want to share your work?  You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question.  How will your work be shared?

Your works do not exist in isolation.  They are part of a continuum of other works.  Where does your work fit into that continuum?  How do the instructor and student approach that work?  Is it a top-down mandate?  Or is it something that filters up from below as word-of-mouth spreads?  How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected.  Is it simply via email – do all the students have email addresses?  Do they know the email addresses of their friends?  Or do you want your work shared via SMS?  A QRCode, perhaps?  Or Facebook or Twitter or, well, who knows?  And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today.  It becomes painfully obvious when it’s been overlooked.  For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend.  There was simply no way to do that.  (I don’t know if this has changed recently.)  That made the iPad version of The Australian significantly less functional than its website version – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes.  The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts.  Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd.  Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this.  Everyone’s there, but no one is wholly aware of anyone else’s presence.  You might see an edit to a page, or a new post on the discussion page for a particular topic, but that’s as close as people come to one another.  Most of the connecting for the Wikipedians – the folks who, behind the scenes, make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose.  These are the social networks: Facebook, MySpace, LinkedIn, and so on.  In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication.  But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless.  Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting.  There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths.  Where you can poll your friends on Facebook, on Twitter you can poll a planet.  How do I solve this problem?  Where should I eat dinner tonight?  What’s going on over there?  These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles.  It is not Facebook-versus-Twitter; it is not tight connections versus loose connections.  It’s a bit of both.  Where does your work benefit from a tight collective of connected individuals?  Is it some sort of group problem-solving?  A creative activity that really comes into its own when a whole band of people play together?  Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms?  When a task keeps you constantly thinking of your friends, that’s the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog, playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’ – that’s the kind of task which benefits from loose connectivity.  Not every project will need both kinds of connecting, but almost every one will benefit from one or the other.  We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed.  (We are also more likely to reinforce each other’s prejudices and preconceptions, but that’s a longstanding matter which technology cannot help but amplify.)  Life is meaningful because we, together, give it meaning.  Life is bearable because we, together, bear the load for one another.  Human life is human connection.

The Web today is all about connecting.  That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it.  So how do your projects allow your users to connect?  Does your work leave them alone, helpless, friendless, and lonely?  Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic?  Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica.  That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun.  It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit in the design of Wikipedia is the idea that individuals should contribute.  There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy.  (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.)  By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of what its contributors have to offer.  For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook.  This is less because of Wikipedia-as-a-resource than it is because we all share a sense of pride-in-ownership of Wikipedia.  Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles.  Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us.  This is a powerful logic, an attraction which transcends the rational.  People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine that a time will come when Wikipedia is complete.  If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia.  Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than it did in 2005.  With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights.  Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth.  It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought.  It will mean that we have captured the better part of human knowledge in a form accessible to all.  That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’.  It is a work-in-progress.  Google understands this and releases interminable beta versions of every product.  More than this, it means that nothing needs to offer all the answers.  I would suggest that nothing should offer all the answers.  Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you.  It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility.  There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done.  This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

It needs to be noted that in each of these websites there is a possibility for ‘bad data’ to work its way into the system.  The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up.  TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments.  RateMyProfessors.com is the holy terror of the academy in the United States.  Each of these websites has had to design systems which allow users to self-regulate peer contributions.  In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags it for later moderation.  Wikipedia promulgated a directive that strongly encouraged contributors to provide a footnote linking to supporting material.  TripAdvisor gives anonymous reviewers a lower ranking.  eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade.  Each of these is a social solution to a social problem.
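
These are social mechanisms, but each needs a small technical core.  As a minimal sketch – the threshold and names here are invented for illustration, not any particular site’s system – a ‘report this post’ button can be as little as a counter that queues a post for human review:

    from collections import defaultdict

    REPORT_THRESHOLD = 3          # assumed: escalate after three reports
    reports = defaultdict(set)    # post_id -> set of users who reported it

    def report_post(post_id, user_id, moderation_queue):
        """Record a report; escalate once enough distinct users agree."""
        reports[post_id].add(user_id)             # one vote per user
        if len(reports[post_id]) == REPORT_THRESHOLD:
            moderation_queue.append(post_id)      # flag for a moderator

Everything interesting – what the threshold should be, who moderates, what happens on appeal – is a social question, not a technical one.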

Web2.0 is not a technology.  It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, one that considers how the user will behave: neither wholly good nor entirely evil.  It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention.  Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a Web2.0 concept that should guide you in your design work is iteration.  Nothing is ever complete, nor ever perfect.  The perfect is the enemy of the good, so if you wait for perfection, you will never release.  Instead, watch your users: see whether they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work.  In their more uncharitable moments, do they abuse the freedoms you have given them?  If so, how can you redesign your work to ‘nudge’ them into better behavior?  It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem.  And although Moses came down from Mount Sinai with all ten commandments, you cannot and should not expect to get it right on a first pass.  Instead, release, observe, adapt, and re-release.  All releases are soft releases, everything is provisional, and nothing is quite perfect.  That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter.  Although they seem to be similar, they couldn’t be more different.  Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself.  If you want access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page.  Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut down your application if they don’t like it, or perceive it as somehow competitive with Facebook.  Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach.  From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users.  Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data.  Twitter provided very clear (and remarkably straightforward) instructions on how to access their data, and threw the gates open wide.
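
Just how wide?  Here is a minimal sketch of what access looked like circa 2010: a single unauthenticated HTTP request against the v1 REST API returned any public user’s recent Tweets as JSON.  (The endpoint shown is the historical one, long since retired, so treat this as illustration rather than a working recipe; the screen name is a placeholder.)

    import json
    import urllib.request

    def fetch_public_tweets(screen_name, count=10):
        """Fetch a user's recent public Tweets via the 2010-era v1 API."""
        url = ("http://api.twitter.com/1/statuses/user_timeline.json"
               "?screen_name=%s&count=%d" % (screen_name, count))
        with urllib.request.urlopen(url) as response:
            return json.load(response)

    # Print the most recent public Tweets for any public account.
    for tweet in fetch_public_tweets("anyuser"):
        print(tweet["created_at"], tweet["text"])

A dozen lines, no API key, no contract with Twitter – that is what ‘throwing the gates open’ meant in practice.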

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks.  People have developed computer programs that send Tweets when they’re about to crash, created vast art projects which allow the public to participate from anywhere around the world, and even built a little belt, worn by a pregnant woman, which sends out a Tweet every time the baby kicks!  It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself.  Twitter has become a building block: when you write a program which needs to send a message, you use Twitter.  Facebook isn’t a building block.  It’s a monolith.

How do you build for openness?  Consider: one position the user might occupy is that of someone trying to use your work as a building block within their own project.  Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again?  Or is it opaque, seamless, and closed?  What about the data you collect, data the user has generated?  Where does that live?  Can it be exported and put to work in another application, or on another website?  Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (diminished usefulness).  The highest praise you can receive for your work is when someone wants to use it in their own.  For this to happen, you have to leave the door open for them.  If you publish the APIs to access the data you collect, if you build your work modularly, with clearly defined interfaces, and if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
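
To make that last point concrete, here is a minimal sketch of the least glamorous kind of openness: rendering the data your users generate as a standard RSS 2.0 feed, which any other site, reader or application can consume.  (The feed contents below are invented placeholders.)

    from xml.sax.saxutils import escape

    def render_rss(title, link, items):
        """Render (title, url, description) tuples as an RSS 2.0 feed."""
        entries = "\n".join(
            "    <item><title>%s</title><link>%s</link>"
            "<description>%s</description></item>"
            % (escape(t), escape(u), escape(d))
            for t, u, d in items
        )
        return ('<?xml version="1.0"?>\n'
                '<rss version="2.0"><channel>\n'
                "    <title>%s</title><link>%s</link>\n%s\n"
                "</channel></rss>" % (escape(title), escape(link), entries))

    print(render_rss("Student gallery", "http://example.edu/gallery",
                     [("Project one", "http://example.edu/gallery/1",
                       "A first contribution")]))

Publish a feed like this and you have become a brick rather than a brick wall, at the cost of a few lines of code.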

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’  You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user.  These are not precisely the same Web2.0 domains others might identify.  That’s because Web2.0 has become a very ill-defined term.  It can mean whatever we want it to mean.  But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another.  In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create.  We need to make room for them.  If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

Mothers of Innovation

Introduction:  Olden Days

In February 1984, seeking a reprieve from the very cold and windy streets of Boston, Massachusetts, I ducked inside a computer store.  I spied the normal array of IBM PCs and peripherals, the Apple ][, probably even an Atari system.  Prominently displayed at the front of the store was my first Macintosh.  It wasn’t known as a Mac 128K or anything like that.  It was simply Macintosh.  I walked up to it, intrigued – already, the Reality Distortion Field was capable of luring geeks like me to their doom – and took in the unfamiliar graphical desktop and the cute little mouse.  Sitting down in the chair before the machine, I grasped the mouse, and moved the cursor across the screen.  But how do I get it to do anything? I wondered.  Click.  Nothing.  Click, drag – oh, look, some of these things changed color!  But now what?  Gah.  This is too hard.

That’s when I gave up, pushed myself away from that first Macintosh, and pronounced this experiment in ‘intuitive’ computing a failure.  Graphical computing isn’t intuitive; that’s a bit of a marketing fib.  It’s a metaphor, and you need to grasp the metaphor – need to be taught what it means – to work fluidly within the environment.  The metaphor is easy to apprehend once it has become the dominant technique for working with computers – as it has by 2010.  Twenty-six years ago, it was a different story.  You can’t assume that people will intuit what to do with your abstract representations of data or your arcane interface methods.  Intuition isn’t always intuitively obvious.

A few months later I had a job at a firm which designed bar code readers.  (That, by the way, was the most boring job I’ve ever had, and the only one I was fired from for insubordination.)  We were designing a bar code reader for Macintosh, so we had one in-house, a unit with a nice carrying case, so that I could ‘borrow’ it on weekends.  Which I did.  Every weekend.  The first weekend I got it home, I unpacked it, plugged it in, popped in the system disk, booted it, ejected the system disk, popped in the applications disk, and worked my way through MacPaint and MacWrite and on to my favorite application of all – Hendrix.

Hendrix took advantage of the advanced sound synthesis capabilities of Macintosh.  Presented with a perfectly white screen, you dragged the mouse along the display.  The position, velocity, and acceleration of the pointer determined what kind of heavily altered but unmistakably guitar-like sounds came out of the speaker.  For someone who had lived with the bleeps and blurps of the 8-bit world, it was a revelation.  It was, in the vernacular of Boston, ‘wicked’.  I couldn’t stop playing with Hendrix.  I invited friends over, showed them, and they couldn’t stop playing with Hendrix.  Hendrix was the first interactive computer program that I gave a damn about, the first one that really showed me what a computer could be used for.  Not just pushing paper or pixels around, but an instrument, and an essential tool for human creativity.

Everything that’s followed in all the years since has been interesting to me only when it pushes the boundaries of our creativity.  I grew entranced by virtual reality in the early 1990s, because of the possibilities it offered up for an entirely new playing field for creativity.  When I first saw the Web, in the middle of 1993, I quickly realized that it, too, would become a cornerstone of creativity.  That roughly brings us forward from the ‘olden days’, to today.

This morning I want to explore creativity along the axis of three classes of devices, as represented by the three Apple devices that I own: the desktop (my 17” MacBook Pro Core i7), the mobile (my iPhone 3GS 32GB), and the tablet (my iPad 16GB 3G).  I will draw from my own experience as both a user and a developer for these devices, using that experience to illuminate a path before us.  So much is in play right now, so much is possible; all we need do is shine a light to see the incredible opportunities all around.

I:  The Power of Babel

I love OSX, and have used it more or less exclusively since 2003, when it truly became a usable operating system.  I’m running Snow Leopard on my MacBook Pro, and so far have suffered only one Grey Screen Of Death.  (And, if I know how to read a stack trace, that was probably caused by Flash.  Go figure.)  OSX is solid, it’s modestly secure, and it has plenty of eye candy.  My favorite bit of that is Spaces, which allows me to segregate my workspace into separate virtual screens.

The upper-left space has Mail.app, the upper-right has Safari, the lower-right has TweetDeck and Skype, while the lower-left is reserved for the task at hand – in this case, writing these words.  Each of these apps, except Microsoft Word, is inherently Internet-oriented, an application designed to facilitate human communication.  This is the logical and inexorable outcome of a process that began back in 1969, when the first nodes began exchanging packets on the ARPANET.  Phase one: build the network.  Phase two: connect everything to the network.  Phase three: PROFIT!

That seems to have worked out pretty much according to plan.  Our computers have morphed from document processors – that’s what most computers of any stripe were used for until about 1995 – into communication machines, handling the hard work of managing a world that grows increasingly connected.  All of this communication is amazing and wonderful and has provided the fertile ground for innovations like Wikipedia and Twitter and Skype, but it also feels like too much of a good thing.  Connection has its own gravitational quality – the more connected we become, the more we feel the demand to remain connected continuously.

We salivate like Pavlov’s dogs every time our email application rewards us with the ‘bing’ of an incoming message, and we keep one eye on Twitter all day long, just in case something interesting – or at least diverting – crosses the transom.  Blame our brains.  They’re primed to release the pleasure neurotransmitter dopamine at the slightest hint of a reward; connecting with another person is (under most circumstances) a guaranteed hit of pleasure.

That’s turned us into connection junkies.  We pile connection upon connection upon connection until we numb ourselves into a zombie-like overconnectivity, then collapse and withdraw, feeling the spiral of depression as we realize we can’t handle the weight of all the connections that we want so desperately to maintain.

Not a pretty picture, is it?   Yet the computer is doing an incredible job, acting as a shield between what our brains are prepared to handle and the immensity of information and connectivity out there.  Just as consciousness is primarily the filtering of signal from the noise of the universe, our computers are the filters between the roaring insanity of the Internet and the tidy little gardens of our thoughts.  They take chaos and organize it.  Email clients are excellent illustrations of this; the best of them allow us to sort and order our correspondence based on need, desire, and goals.  They prevent us from seeing the deluge of spam which makes up more than 90% of all SMTP traffic, and help us to stay focused on the task at hand.

Electronic mail was just the beginning of the revolution in social messaging; today we have Tweets and instant messages and Foursquare checkins and Flickr photos and YouTube videos and Delicious links and Tumblr blogs and endless, almost countless feeds.  All of it recommended by someone, somewhere, and all of it worthy of at least some of our attention.  We’re burdened by the sheer number of websites and apps needed to manage all of this opportunity for connectivity.  The problem has become most acute on our mobiles, where we need a separate app for every social messaging service.

This is fine in 2010, but what happens in 2012, when there are ten times as many services on offer, all of them delivering interesting and useful things?  All these services, all these websites, and all these little apps threaten to drown us with their own popularity.

Does this mean that our computers are destined to become like our television tuners, which may have hundreds of channels on offer, but never see us watch more than a handful of them?  Do we have some sort of upper boundary on the amount of connectivity we can handle before we overload?  Clay Shirky has rightly pointed out that there is no such thing as information overload, only filter failure.  If we find ourselves overwhelmed by our social messaging, we’ve got to build some better filters.

This is the great growth opportunity for the desktop, the place where the action will be happening – when it isn’t happening in the browser.  Since the desktop is the nexus of the full power of the Internet and the full set of your own data (even the data stored in the cloud is accessed primarily from your desktop), it is the logical place to create some insanely great next-generation filtering software.

That’s precisely what I’ve been working on.  This past May I got hit by a massive brainwave – one so big I couldn’t ignore it, couldn’t put it down, couldn’t do anything but think about it obsessively.

I wanted to create a tool that could aggregate all of my social messaging – email, Twitter, RSS and Atom feeds, Delicious, Flickr, Foursquare, and on and on and on.  I also wanted the tool to be able to distribute my own social messages, in whatever format I wanted to transmit, through whatever social message channel I cared to use.

Then I wouldn’t need to go hither and yon, using Foursquare for this, and Flickr for that and Twitter for something else.  I also wouldn’t have to worry about which friends used which services; I’d be able to maintain that list digitally, and this tool would adjust my transmissions appropriately, sending messages to each as they want to receive them, allowing me to receive messages from each as they care to send them.

That’s not a complicated idea.  Individuals and companies have been nibbling around the edges of it for a while.

I am going the rest of the way, creating a tool that functions as the last ‘social message manager’ that anyone will need.  It’s called Plexus, and it functions as middleware – sitting between the Internet and whatever interface you might want to cook up to view and compose all of your social messaging.
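
To make ‘middleware’ concrete, here is a hypothetical sketch of the idea – the names and interfaces below are mine, invented for illustration, not Plexus’s actual API.  Each service gets an adapter that normalizes its traffic into one common message shape, so that aggregation and outbound routing never need to care which network a message belongs to:

    from dataclasses import dataclass

    @dataclass
    class SocialMessage:
        author: str
        body: str
        channel: str        # 'twitter', 'email', 'flickr', ...
        timestamp: float    # seconds since the epoch

    class ChannelAdapter:
        """One adapter per service; each speaks its service's protocol."""
        name = "generic"
        def fetch(self):                  # returns a list of SocialMessage
            raise NotImplementedError
        def send(self, body, recipient):
            raise NotImplementedError

    class MessageManager:
        def __init__(self, adapters, preferences):
            self.adapters = {a.name: a for a in adapters}
            self.preferences = preferences    # friend -> preferred channel

        def aggregate(self):
            """Pull every channel into a single, time-ordered stream."""
            stream = [m for a in self.adapters.values() for m in a.fetch()]
            return sorted(stream, key=lambda m: m.timestamp)

        def broadcast(self, body, friends):
            """Send one message via each friend's preferred channel."""
            for friend in friends:
                channel = self.preferences.get(friend, "email")
                self.adapters[channel].send(body, friend)

The interface on top – desktop client, web page, something stranger – never touches a service directly; it only ever sees the normalized messages, which is exactly what makes the front-end an open playing field.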

Now were I devious, I’d coyly suggest that a lot of opportunity lies in building front-end tools for Plexus, ways to bring some order to the increasing flow of social messaging.  But I’m not coy.  I’ll come right out and say it: Plexus is an open-source project, and I need some help here.  That’s a reflection of the fact that we all need some help here.  We’re being clubbed into submission by our connectivity.  I’m trying to develop a tool which will allow us to create better filters, flexible filters, social filters, all sorts of ways of slicing and dicing our digital social selves.  That’s got to happen as we invent ever more ways to connect, and as we do all of this inventing, the need for such a tool becomes more and more clear.

We see people throwing their hands up, declaring ‘email bankruptcy’, quitting Twitter, or committing ‘Facebookicide’, because they can’t handle the consequences of connectivity.

We secretly yearn for that moment after the door to the aircraft closes, and we’re forced to turn our devices off for an hour or two or twelve.  Finally, some time to think.  Some time to be.  Science backs this up; the measurable consequence of over-connectivity is that we don’t have the mental room to roam with our thoughts, to ruminate, to explore and play within our own minds.  We’re too busy attending to the next message.  We need to disconnect periodically, and focus on the real.  We desperately need tools which allow us to manage our social connectivity better than we can today.

Once we can do that, we can filter the noise and listen to the music of others.  We will be able to move so much more quickly – together – that it will be another electronic renaissance: just like 1994, with Web 1.0, and 2004, with Web2.0.

That’s my hope, that’s my vision, and it’s what I’m directing my energies toward.  It’s not the only direction for the desktop, but it does represent the natural evolution of what the desktop has become.  The desktop has been shaped not just by technology, but by the social forces stirred up by our technology.

It is not an accident that our desktops act as social filters; they are the right tool at the right time for the most important job before us – how we communicate with one another.  We need to bring all of our creativity to bear on this task, or we’ll find ourselves speechless, shouted down, lost at another Tower of Babel.

II: The Axis of Me-ville

Three and a half weeks ago, I received a call from my rental agent.  My unit was going on the auction block – would I mind moving out?  Immediately?  I’ve lived in the same flat since I first moved to Sydney, seven years ago, so this news came as quite a shock.

I spent a week going through the five stages of grief: denial, anger, bargaining, depression, and acceptance.  The day I reached acceptance, I took matters in hand, the old-fashioned way: I went online, to domain.com.au, and looked for rental units in my neighborhood.

Within two minutes I learned that there were two units for rent within my own building!

When you stop to think about it, that’s a bit weird.  There were no signs posted in my building, no indication that either of the units was for rent.  I’d heard nothing from the few neighbors I know well enough to chat with.  They didn’t know either.  Something was happening right underneath our noses – something of immediate relevance to me – and none of us knew about it.  Why?  Because we don’t know our neighbors.

For city dwellers this is not an unusual state of affairs.  One of the pleasures of the city is its anonymity.  That’s also one of its great dangers.  The two go hand-in-hand.  Yet the world of 2010 does not offer up this kind of anonymity easily.  Consider: we can re-establish a connection with someone we went to high school with thirty years ago – and really never thought about in all the years that followed – but still not know the names of the people in the unit next door, names we might utter with bitter anger after they’ve turned up the music again.  How can we claim that there’s any social revolution if we can’t be connected to the people we’re physically close to?  Emotional closeness is important, and financial closeness (your coworkers) is also salient, but both should be trumped by the people who breathe the same air as you.

It is almost impossible to bridge the barriers that separate us from one another, even when we’re living on top of each other.

This is where the mobile becomes important, because the mobile is the singular social device.  It is the place where all of our human relationships reside.  (Plexus is eventually bound for the mobile, but in a few years’ time, when the devices are nimble enough to support it.)  Yet the mobile is more than just the social crossroads.  It is the landing point for all of the real-time information you need to manage your life.

On the home page of my iPhone, two apps stand out as the aids to the real-time management of my life: RainRadar AU and TripView.  I am a pedestrian in Sydney, so it’s always good to know when it’s about to rain, how hard, and how long.  As a pedestrian, I make frequent use of public transport, so I need to know when the next train, bus or ferry is due, wherever I happen to be.  The mobile is my networked, location-aware sensor.  It gathers up all of the information I need to ease my path through life.  This demonstrates one of the unstated truisms of the 21st century: the better my access to data, the more effective I will be, moment to moment.  The mobile has become that instantaneous access point, simply because it’s always at hand, or in the pocket or pocketbook or backpack.  It’s always with us.

In February I gave a keynote at a small Melbourne science fiction convention.  After I finished speaking a young woman approached me and told me she couldn’t wait until she could have some implants, so her mobile would be with her all the time.  I asked her, “When is your mobile ever more than a few meters away from you?  How much difference would it make?  What do you gain by sticking it underneath your skin?”  I didn’t even bother to mention the danger from all that subcutaneous microwave radiation.  It’s silly, and although our children or grandchildren might have some interesting implants, we need to accept the fact that the mobile is already a part of us.

We’re as Borg-ed up as we need to be.  Probably we’re more Borg-ed up than we can handle.

It’s not just that our mobiles have become essential.  It’s getting so that we can’t put them down, even in situations when we need to focus on the task at hand – driving, or having dinner with our partners, or trying to push a stroller across an intersection.  We’re addicted, and the first step to treating that addiction is to admit we have a problem.  But here’s the dilemma: we’re working hard to invent new ways to make our mobiles even more useful, indispensable and alluring.

We are the crack dealers.  And I’m encouraging you to make better crack.  Truth be told, I don’t see this ‘addiction’ as a bad thing, though goodness knows the tabloid newspapers and cultural moralists will make whatever they can of it.  It’s an accommodation we will need to make, a give-and-take.  We gain an instantaneous connection to one another, a kind of cultural ‘telepathy’ that would have made Alexander Graham Bell weep for joy.

But there’s more: we also gain a window into the hitherto hidden world of data that is all around us, a shadow and double of the real world.

For example, I can now build an app that allows me to wander the aisles of my local supermarket, bringing all of the intelligence of the network with me as I shop.  I hold the mobile out in front of me, its camera capturing everything it sees, which it passes along to the cloud, so that Google Goggles can do some image processing on it, and pick out the identifiable products on the shelves.

This information can then be fed back into a shopping list – created by me, by my doctor, or by my bank account, because I might be trying to optimize for my own palate, my blood pressure, or my budget – and as I come across the items I should purchase, my mobile might give a small vibration.  When I look at the screen, I see the shelves, but the items I should purchase are glowing and blinking.
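
Strip away the hardware and the heart of that loop is a few lines of matching logic.  Here is a hypothetical sketch – the recognize(), vibrate() and highlight() calls are assumed stand-ins for the cloud recognition service and the handset’s feedback, since Google Goggles exposed no public API:

    def scan_shelves(camera_frames, shopping_list,
                     recognize, vibrate, highlight):
        """Match products recognized on the shelf against a shopping list."""
        wanted = {item.lower() for item in shopping_list}
        for frame in camera_frames:
            seen = {p.lower() for p in recognize(frame)}   # cloud round-trip
            found = wanted & seen
            if found:
                vibrate()                  # a small buzz as I pass the item...
                for product in found:
                    highlight(product)     # ...which glows on the screen
                wanted -= found            # stop nagging once it's in the cart
            if not wanted:
                break                      # list complete - done shopping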

The technology to realize this – augmented reality with a few extra bells and whistles – is already in place.  This is the sort of thing that could be done today, by someone enterprising enough to knit all these separate threads into a seamless whole.  There’s clearly a need for it, but that’s just the beginning.  This is automated, computational decision making.  It gets more interesting when you throw people into the mix.

Consider: in December I was on a road trip to Canberra.  When I arrived there, at 6 pm, I wondered where to have dinner.  Canberra is not known for its scintillating nightlife – I had no idea where to dine.  I threw the question out to my 7000 Twitter followers, and in the space of time that it took to shower, I had enough responses that I could pick and choose among them, and ended up having the best bowl of seafood laksa that I’d had since I moved to Australia!

That’s the kind of power that we have in our hands, but don’t yet know how to use.

We are all well connected, instantaneously and pervasively, but how do we connect without confusing ourselves and one another with constant requests?  Can we manage that kind of connectivity as a background task, with our mobiles acting as the arbiters?  The mobile is the crossroads, between our social lives, our real-time lives, and our data-driven selves.  All of it comes together in our hands.  The device is nearly full to exploding with the potentials unleashed as we bring these separate streams together.  It becomes hypnotizing and formidable, though it rings less and less.  Voice traffic is falling nearly everywhere in the developed world, but mobile usage continues to skyrocket.  Our mobiles are too important to use for talking.

Let’s tie all of this together: I get evicted, and immediately tell my mobile, which alerts my neighbors and friends, and everyone sets to work finding me a new place to live.  When I check out their recommendations, I get an in-depth view of my new potential neighborhoods, delivered through a marriage of augmented reality and the cloud computing power located throughout the network.  Finally, when I’m about to make a decision, I throw it open for the people who care enough about me to ring in with their own opinions, experiences, and observations.  I make an informed decision, quickly, and am happier as a result, for all the years I live in my new home.

That’s what’s coming.  That’s the potential that we hold in the palms of our hands.  That’s the world you can bring to life.

III:  Through the Looking Glass

Finally, we turn to the newest and most exciting of Apple’s inventions.  There seemed to be nothing new to say about the tablet – after all, Bill Gates declared ‘The Year of the Tablet’ way back in 2001.  But it never happened.  Tablets were too weird, too constrained by battery life and weight and, most significantly, the user experience.  It’s not as though you can take a laptop computer, rip away the keyboard and slap on a touchscreen to create a tablet computer, though this is what many people tried for many years.  It never really worked out for them.

Instead, Apple leveraged what they learned from the iPhone’s touch interface.  Yet that alone was not enough.  I was told by sources well-placed in Apple that the hardware for a tablet was ready a few years ago; designing a user experience appropriate to the form factor took a lot longer than anyone had anticipated.  But the proof of the pudding is in the eating: iPad is the most successful new product in Apple’s history, with Apple set to manufacture around thirty million of them over the next twelve months.  That success is due to the hard work and extensive testing performed upon the iPad’s particular version of iOS.

It feels wonderfully fluid, well adapted to the device, although quite different from the iOS running on iPhone.  iPad is not simply a gargantuan iPod Touch.  The devices are used very differently, because the form-factor of the device frames our expectations and experience of the device.

Let me illustrate with an example from my own experience.  I had a consulting job drop on me at the start of June, one which required me to go through and assess eighty-eight separate project proposals, each running to fifteen pages.  I had about 48 hours to do the work.  I was a thousand kilometers from these proposals, so they had to be sent to me electronically; I then had to print them before reading through them.  Doing all of that took 24 of the 48 hours I had for review, and left me with a ten-kilo box of papers that I’d have to carry, a thousand kilometers, to the assessment meeting.  Ugh.

Immediately before I left for the airport with this paper ball-and-chain, I realized I could simply drag the electronic versions of these files into my Dropbox account.  Once uploaded, I could access those files from my iPad – all thousand or so pages.  Working on iPad made the process much faster than having to fiddle through all of those papers; I finished my work on the flight to my meeting, and was the envy of all attending – they wrestled with multiple fat paper binders, while I simply swiped my way to the next proposal.

This was when I realized that iPad is becoming the indispensable appliance for the information worker.

You can now hold something in your hand that has every document you’ve written; via the cloud, it can hold every document anyone has ever written.  This has been true for desktops since the advent of the Internet, but it hasn’t been as immediate.  iPad is the page, reinvented, not just because it has roughly the same dimensions as a page, but because you interact with it as if it were a piece of paper.  That’s something no desktop has ever been able to provide.

We don’t really have a sense yet for all the things we can do with this ‘magical’ (to steal a word from Steve Jobs) device.

Paper transformed the world two thousand years ago. Moveable type transformed the world five hundred years ago.  The tablet, whatever it is becoming – whatever you make of it – will similarly reshape the world.  It’s not just printed materials; the tablet is the lightbox for every photograph ever taken anywhere by anyone.  The tablet is the screen for every video created, a theatre for every film produced, a tuner to every radio station that offers up a digital stream, and a player for every sound recording that can be downloaded.

All of this is here, all of this is simultaneously present in a device with so much capability that it very nearly pulses with power.

iPad is like a Formula One Ferrari, and we haven’t even gotten it out of first gear.  So stretch your mind further than the idea of the app.  Apps are good and important, but to unlock the potential of iPad it needs lots of interesting data pouring into it and through it.  That data might be provided via an application, but it probably doesn’t live within the application – there’s not enough room in there.  Any way you look at it, iPad is a creature of the network; it is a surface, a looking glass, which presents you a view from within the network.

What happens when the network looks back at you?

At the moment iPad has no camera, though everyone expects a forward-facing camera to be in next year’s model.  That will come so that Apple can enable FaceTime.  (With luck, we’ll also see a Retina Display, so that documents can be seen in their natural resolution.)  Once the iPad can see you, it can respond to you.  It can acknowledge your presence in an authentic manner.  We’re starting to see just what this looks like with the recently announced Xbox Kinect.

This is the sort of technology which points all the way back to the infamous ‘Knowledge Navigator’ video that John Sculley used to create his own Reality Distortion Field around the disaster that was the Newton. Decades ahead of its time, the Knowledge Navigator pointed toward Google and Wikipedia and Milo, with just a touch of Facebook thrown in.  We’re only just getting there, to the place where this becomes possible.

These are no longer dreams, these are now quantifiable engineering problems.

This sort of thing won’t happen on Xbox, though Microsoft or a partner developer could easily write an app for it.  But that’s not where they’re looking; they’re focused on keeping you entertained.  The iPad can entertain you, but that’s not its main design focus.  It is designed to engage you, today with your fingers, and soon with your voice and your face and your gestures.  At that point it is no longer a mirror; it is an entity in its own right.  It might not pass the Turing Test, but we’ll anthropomorphize it nonetheless, just as we did with Tamagotchi and Furby.  It will become our constant companion, helping us through every situation.  And it will move seamlessly between our devices, from iPad to iPhone to desktop.  But it will begin on iPad.

Because we are just starting out with tablets, anything is possible.  We haven’t established expectations which guide us into a particular way of thinking about the device.  We’ve had mobiles for nearly twenty years, and desktops for thirty.  We understand both well, and with that understanding comes a narrowing of possibilities.  The tablet is the undiscovered country, virgin, green, waiting to be explored.  This is the desktop revolution, all over again.  This is the mobile revolution, all over again.  We’re in the right place at the right time to give birth to the applications that will seem commonplace in ten or fifteen years.

I remember VisiCalc, the first spreadsheet.  I remember how revolutionary it seemed, how it changed everyone’s expectations for the personal computer.  I also remember that it was written for an Apple ][.

You have the chance to do it all again, to become the ‘mothers of innovation’, and reinvent computing.  So think big.  This is the time for it.  In another few years it will be difficult to aim for the stars.  The platform will be carrying too much baggage.  Right now we all get to be rocket scientists.  Right now we get to play, and dream, and make it all real.