Hyperdemocracy

For the past three hundred years, the relationship between the press and the state has been straightforward: the press tries to publish, the state uses its various mechanisms to thwart those efforts.  This has produced a cat-and-mouse steady-state, a balance where selection pressures kept the press tamed and the state – in many circumstances – somewhat accountable to the governed.  There are, as always, exceptions.

In the last few months, the press has become hyperconnected, using that hyperconnectivity to pierce the veil of secrecy which surrounds the state; using the means available to it to hyperdistribute those secrets.  The press has become hyperempowered, an actor unlike anything ever experienced before.

Wikileaks is the press, but not the press as we have known it.  This is the press of the 21st century, the press that comes after we’re all connected.  Suddenly, all of the friendliest computers have become the deadliest weapons, and we are fenced in, encircled by threats – which are also opportunities.

This threat is two-sided, Janus-faced.  The state finds its ability to maintain the smooth functioning of power short-circuited by the exposure of its secrets.  That is a fundamental, existential threat.  In the same moment, the press recognizes that its ability to act has been constrained at every point: servers get shut down, domain names fail to resolve, bank accounts freeze.  These are the new selection pressures on both sides, a sudden quickening of culture’s two-step.  And, of course, it does not end there.

The state has now realized the full cost of digitization, the price of bits.  Just as the recording industry learned a decade ago, it will now have to function within an ecology which – like it or not – has an absolutely fluid quality.  Information flow is corrosive to institutions, whether that’s a record label or a state ministry.  To function in a hyperconnected world, states must hyperconnect, but every point of connection becomes a gap through which the state’s power leaks away.

Meanwhile, the press has come up against the ugly reality of its own vulnerability.  It finds itself situated within an entirely commercial ecology, all the way down to the wires used to carry its signals.  If there’s anything the last week has taught us, it’s that the ability of the press to act must never be contingent upon the power of the state, or any organization dependent upon the good graces of the state.

Both sides are trapped, each with a knife to the other’s throat.  Is there a way to back down from this DEFCON 1-like threat level?  The new press can not be wished out of existence.  Even if the Internet disappeared tomorrow, what we have already learned about how to communicate with one another will never be forgotten.  It’s that shared social learning – hypermimesis – which presents the continued existential threat to the state.  The state is now furiously trying to develop a response in kind, with a growing awareness that any response which extends its own connectivity must necessarily drain it of power.

There is already a movement underway within the state to shut down the holes, close the gaps, and carry on as before.  But to the degree the state disconnects, it drifts away from synchronization with the real.  The only tenable possibility is a ‘forward escape’, an embrace of that which seems destined to destroy it.  This new form of state power – ‘hyperdemocracy’ – will be diffuse, decentralized, and ubiquitous: darknet as a model for governance.

In the interregnum, the press must reinvent its technological base as comprehensively as Gutenberg or Berners-Lee did.  Just as the legal strangulation of Napster laid the groundwork for Gnutella, every point of failure revealed in the state attack against Wikileaks creates a blueprint for the press which can succeed where it failed.  We need networks that lie outside of and perhaps even in opposition to commercial interest, beyond the reach of the state.  We need resilient Internet services which can not be arbitrarily revoked.  We need a transaction system that is invisible, instantaneous and convertible upon demand.  Our freedom mandates it.

Some will argue that these represent the perfect toolkit for terrorism, for lawlessness and anarchy.  Some are willing to sacrifice liberty for security, ending with neither.  Although nostalgic and tempting, this argument will not hold against the tenor of these times.  These systems will be invented and hyperdistributed even if the state attempts to enforce a tighter grip over its networks.  Julian Assange, the most famous man in the world, has become the poster boy, the Che for a networked generation. Script kiddies everywhere now have a role model.  Like it or not, they will create these systems, they will share what they’ve learned, they will build the apparatus that makes the state as we have known it increasingly ineffectual and irrelevant. Nothing can be done about that.  This has already happened.

We face a choice.  This is the fork, in both the old and new senses of the word.  The culture we grew up with has suddenly shown its age, its incapacity, its inflexibility.  That’s scary, because there is nothing yet to replace it.  That job is left to us.  We can see what has broken, and how it should be fixed.  We can build new systems of human relations which depend not on secrecy but on connectivity.  We can share knowledge to develop the blueprint for our hyperconnected, hyperempowered future.  A week ago such an act would have been bootless utopianism.  Now it’s just facing facts.

The Blueprint

With every day, with every passing hour, the power of the state mobilizes against Wikileaks and Julian Assange, its titular leader.  The inner processes of statecraft have never been so completely exposed as they have been in the last week.  The nation state has been revealed as some sort of long-running and unintentionally comic soap opera.  She doesn’t like him; he doesn’t like them; they don’t like any of us!  Oh, and she’s been scouting around for DNA samples and your credit card number.  You know, just in case.

None of it is very pretty, all of it is embarrassing, and the embarrassment extends well beyond the state actors – who are, after all, paid to lie and dissemble, this being one of the primary functions of any government – to the complicit and compliant news media, think tanks and all the other camp followers deeply invested in the preservation of the status quo.  Formerly quiet seas are now roiling, while everyone with any authority everywhere is doing everything they can to close the gaps in the smooth functioning of power.  They want all of this to disappear and be forgotten.  For things to be as if Wikileaks never was.

Meanwhile, the diplomatic cables slowly dribble out, a feed that makes last year’s MP expenses scandal in the UK seem like amateur theatre, an unpracticed warm-up before the main event.  Even the Afghan and Iraq war logs, released by Wikileaks earlier this year, didn’t hold this kind of fascination.  Nor did they attract this kind of upset.  Every politician everywhere, from Barack Obama to Hillary Clinton to Vladimir Putin to Julia Gillard, has felt compelled to express their strong and almost visceral anger.  But to what?  Only some diplomatic gossip.

Has Earth become a sort of amplified Facebook, where an in-crowd of Heathers, horrified, suddenly finds its bitchy secrets posted on a public forum?  Is that what we’ve been reduced to?  Or is that what we’ve been like all along?  That could be the source of the anger.  We now know that power politics and statecraft reduce to a few pithy lines referring to how much Berlusconi sleeps in the company of nubile young women and speculations about whether Medvedev really enjoys wearing the Robin costume.

It’s this triviality which has angered those in power.  The mythology of power – that leaders are somehow more substantial, their concerns more elevated and lofty than us mere mortals, who must not question their motives – that mythology has been definitively busted.  This is the final terminus of aristocracy; a process that began on 14 July 1789 came to a conclusive end on 28 November 2010.  The new aristocracies of democracy have been smashed, trundled off to the guillotine of the Internet, and beheaded.

Of course, the state isn’t going to take its own destruction lying down.  Nothing is ever that simple.  And so, over the last week we’ve been able to watch the systematic dismantling of Wikileaks.  First came the condemnation, then, hot on the heels of the shouts of ‘off with his head!’ for ‘traitor’ Julian Assange, came the technical attacks, each one designed to amputate one part of the body of the organization.

First up, that old favorite, the distributed denial of service (DDoS) attack, which involves harnessing tens of thousands of hacked PCs (perhaps yours, or your mom’s, or your daughter’s) to broadcast tens of millions of faux requests for information to Wikileaks’ computers.  This did manage to bring Wikileaks to its knees (surprising for an organization believed to be rather paranoid about security), so Wikileaks moved to a backup server, purchasing computing resources from Amazon, which runs a ‘cloud’ of hundreds of thousands of computers available for rent.  Amazon, paranoid about customer reliability, easily fended off the DDoS attacks, but came under another kind of pressure.  US Senator Joe Lieberman told Amazon to cut Wikileaks off, and within a few hours Amazon had suddenly realized that Wikileaks violated their Terms of Service, kicking them off Amazon’s systems.

You know what Terms of Service are?  They are the too-long agreements you always accept and click through on a Website, or when you install some software, etc.  In the fine print of that agreement any service provider will always be able to find some reason, somewhere, for terminating the service, charging you a fee, or – well, pretty much whatever they like.  It’s the legal cudgel that companies use to have their way with you.  Do you reckon that every other Amazon customer complies with its Terms of Service?  If you do, I have a bridge you might be interested in.

At that point, Assange & Co. could have moved the server anywhere willing to host them – and Switzerland had offered.  But the company that hosts Wikileaks’ DNS record – EveryDNS.net – suddenly realized that Wikileaks was in violation of its terms of service, and it, too, cut Wikileaks off.  This was a more serious blow.  DNS, or Domain Name Service, is the magic that translates a domain name like markpesce.com or nytimes.com into a number that represents a particular computer on the Internet.  Without someone handling that translation, no one could find wikileaks.org.  You would be able to type the name into your web browser, but that’s as far as you’d get.
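
To see just how mechanical that translation is, here is a minimal sketch in Python, purely illustrative, using the standard library’s resolver and one of the example domains above:

    import socket

    # Ask the resolver (and, behind it, the DNS system) to translate a
    # human-readable name into the numeric address of a machine on the
    # Internet.  This is the translation EveryDNS stopped performing for
    # wikileaks.org.
    address = socket.gethostbyname("nytimes.com")
    print(address)  # the number a web browser actually connects to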

So Wikileaks.org went down, but Wikileaks.ch (the Swiss version) came online moments later, and now there are hundreds of other sites which are all mirroring the content on the original Wikileaks site.  It’s a little bit harder to find Wikileaks now – but not terrifically difficult.  Score one for Assange, who – if the news media are to be believed – is just about to be taken into custody by the UK police, serving a Swedish arrest warrant.

Finally, just a few hours ago, the masterstroke.  Wikileaks is financed by contributions made by individuals and organizations.  (Disclosure: I’m almost certain I donated $50 to Wikileaks in 2008.)  These contributions have been handled (principally) by the now-ubiquitous PayPal, the financial services arm of Internet auction giant eBay.  Once again, the fine folks at PayPal had a look at their Terms of Service (stop me if you’ve heard this one before) and – oh, look! those bad awful folks at Wikileaks are in violation of our terms! Let’s cut them off from their money!

Wikileaks has undoubtedly received a lot of contributions over the last few days.  As PayPal never turns funds over immediately, there’s an implication that PayPal is holding onto a considerable sum of Wikileaks’ donations, while that shutdown makes it much more difficult to ‘pass the hat’ and collect additional funds to keep the operation running.  Checkmate.

A few months ago I wrote about how confused I was by Julian Assange’s actions.  Why would anyone taking on the state so directly become such a public figure?  It made no sense to me.  Now I see the plan.  And it’s awesome.

You see, this is the first time anything like Wikileaks has been attempted.  Yes, there have been leaks prior to this, but never before have hyperdistribution and cryptoanarchism come to the service of the whistleblower.  This is a new thing, and as well thought out as Wikileaks might be, it isn’t perfect.  How could it be?  It’s untried, and untested.  Or was.  Now that contact with the enemy has been made – the state with all its powers – it has become clear where Wikileaks has been found wanting.  Wikileaks needs a distributed network of servers that are too broad and too diffuse to be attacked.  Wikileaks needs an alternative to the Domain Name Service.  And Wikileaks needs a funding mechanism which can not be choked off by the actions of any other actor.

We’ve been here before.  This is 1999, the company is Napster, and the angry party is the recording industry.  It took them a while to strangle the beast, but they did finally manage to choke all the life out of it – for all the good it did them.  Even before Napster finally died, Gnutella had come around, and righted all the wrongs of Napster: decentralized where Napster was centralized; pervasive and increasingly invisible.  Gnutella created the ‘darknet’ for filesharing which has permanently crippled the recording and film industries.  The failure of Napster was the blueprint for Gnutella.

In exactly the same way – note for note – the failures of Wikileaks provide the blueprint for the systems which will follow it, and which will permanently leave the state and its actors neutered.  Assange must know this – a teenage hacker would understand the lesson of Napster.  Assange knows that someone had to get out in front and fail, before others could come along and succeed.  We’re learning now, and to learn means to try and fail and try again.

This failure comes with a high cost.  It’s likely that the Americans will eventually get their hands on Assange – a compliant Australian government has already made it clear that it will do nothing to thwart or even slow that request – and he’ll be charged with espionage, likely convicted, and sent to a US Federal Prison for many, many years.  Assange gets to be the scapegoat, the pinup boy for a new kind of anarchism.  But what he’s done can not be undone; this tear in the body politic will never truly heal.

Everything is different now.  Everything feels more authentic.  We can choose to embrace this authenticity, and use it to construct a new system of relations, one which does not rely on secrets and lies.  A week ago that would have sounded utopian, now it’s just facing facts. I’m hopeful.  For the first time in my life I see the possibility for change on a scale beyond the personal.  Assange has brought out the radical hiding inside me, the one always afraid to show his face.  I think I’m not alone.

The Soul of Web 2.0

Introduction: In The Beginning

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus PageMaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful.  But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’.  Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century.  Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web.  Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real.  I was one of them.  From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu.  This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web.  It wasn’t as functional as Xanadu – copyright management was a solved problem with Xanadu, whereas on the Web it continues to bedevil us – and Xanadu’s links were two-way affairs; you could follow a link from its destination back to its source.  But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died.  The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released.  ‘The Perfect is the Enemy of the Good’, and nowhere is it clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s.  When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits per second to 56 kilobits per second.  That’s the line for all of the traffic heading from one coast to the other.  I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, an increase of tens of millions of times.  And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand.  And I wasn’t impressed.  In July 1993 very little content existed for the Web – just a handful of sites, mostly academic.  Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense.  I walked away from the computer that July afternoon wanting more.  Hypertext systems I’d seen before.  What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy.  Instead of a handful of sites, there were now hundreds.  There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site in the list.  By Friday evening I was finished.  I had surfed the entire Web.  It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993.  Then things began to explode.

From October on I became a Web evangelist.  My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic.  That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities.  As someone familiar walked in the door at the Salon, I walked up to them and took them over to my computer.  “What’s something you’re interested in?” I’d ask.  They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica” and I’d go to the newly-created category index of the Web, known as Yahoo!, and still running out of a small lab on the Stanford University campus, type in their interest, and up would come at least a few hits.  I’d click on one, watch the page load, and let them read.  “Wow!” they’d say.  “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos.  All I did was hook people by their own interests.  This, in January 1994 in San Francisco, is what would happen throughout the world in January 1995 and January 1996, and is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented.  The individual, with their needs, their passions, their opinions, their desires and their goals is always paramount.  We tend to forget this, or overlook it, or just plain ignore it.  We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate.  It’s not that we should ignore these considerations, but they are always secondary.  The Web is a ground for being.  Individuals do not present themselves as receptacles to be filled.  They are souls looking to be fulfilled.  This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web.  I’ll now break some of these down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication.  I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late night missives from insomniac friends, that sort of thing.  As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared.  The sharing instinct is innate and immediate.  We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it.  We’ve always known this; it’s part of being a human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another.  It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby.  Everyone carries that hundred and fifty around inside of them.  Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with.  It’s automatic, requires no thought.  We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing.  Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten.  We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us.  It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does that work with creations that may be Web-based, or similarly constrained?  We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter.  You have to do more than request sharing.  You have to think through the entire goal of sharing, from the user’s perspective.  Are they sharing this because it’s interesting?  Are they sharing this because they want company?  Are they sharing this because it’s a competition or a contest or collaborative?  Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project.  What is it about the design of your work that excites them to share it with others?  Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential?  In other words, is there space only for one, or is there room to spread the word?  Why would anyone want to share your work?  You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question.  How will your work be shared?

Your works do not exist in isolation.  They are part of a continuum of other works.  Where does your work fit into that continuum?  How do the instructor and student approach that work?  Is it a top-down mandate?  Or is it something that filters up from below as word-of-mouth spreads?  How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected.  Is it simply via email – do all the students have email addresses?  Do they know the email addresses of their friends?  Or do you want your work shared via SMS?  A QRCode, perhaps?  Or Facebook or Twitter or, well, who knows?  And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today.  It becomes painfully obvious when it’s been overlooked.  For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend.  There was simply no way to do that.  (I don’t know if this has changed recently.)  That made the iPad version of The Australian significantly less functional than its website version – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes.  The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts.  Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd.  Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this.  Everyone’s there, but no one is wholly aware of anyone else’s presence.  You might see an edit to a page, or a new post on the discussion for a particular topic, but that’s as close as people come to one another.  Most of the connecting for the Wikipedians – the folks who behind-the-scenes make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose.  These are the social networks: Facebook, MySpace, LinkedIn, and so on.  In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication.  But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless.  Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting.  There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths.  Where you can poll your friends on Facebook, on Twitter you can poll a planet.  How do I solve this problem?  Where should I eat dinner tonight?  What’s going on over there?  These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles.  It is not Facebook-versus-Twitter; it is not tight connections versus loose connections.  It’s a bit of both.  Where does your work benefit from a tight collective of connected individuals?  Is it some sort of group problem-solving?  A creative activity that really comes into its own when a whole band of people play together?  Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms?  When a task constantly makes you think of your friends, that’s the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog, or playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’ – that’s the kind of task which benefits from loose connectivity.  Not every project will need both kinds of connecting, but almost every one will benefit from one or the other.  We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed.  (We are also more likely to reinforce each others’ prejudices and preconceptions, but that’s another matter of longstanding which technology can not help but amplify.)  Life is meaningful because we, together, give it meaning.  Life is bearable because we, together, bear the load for one another.  Human life is human connection.

The Web today is all about connecting.  That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it.  So how do your projects allow your users to connect?  Does your work leave them alone, helpless, friendless, and lonely?  Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic?  Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica.  That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun.  It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute.  There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy.  (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.)  By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of all of what its contributors have to offer.  For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook.  This is less because of Wikipedia-as-a-resource than it is because we all share a sense of pride-in-ownership of Wikipedia.  Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles.  Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us.  This is a powerful logic, an attraction which transcends the rational.  People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine a time will come when Wikipedia will be complete.  If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia.  Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than in 2005.  With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights.  Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth.  It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought.  It will mean that we have captured the better part of human knowledge in a form accessible to all.  That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’.  It is a work-in-progress.  Google understands this and releases interminable beta versions of every product.  More than this, it means that nothing needs to offer all the answers.  I would suggest that nothing should offer all the answers.  Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you.  It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility.  There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done.  This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system.  The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up.  TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments.  RateMyProfessors.com is the holy terror of the academy in the United States.  Each of these websites has had to design systems which allow for users to self-regulate peer contributions.  In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags it for later moderation.  Wikipedia promulgated a directive that strongly encouraged contributors to provide a footnote linking to supporting material.  TripAdvisor gives anonymous reviewers a lower ranking.  eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade.  Each of these is a social solution to a social problem.
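
To see how little machinery the simplest of these mechanisms requires, here is a hypothetical sketch in Python of a ‘report this post’ button; the names and structure are illustrative, not any particular site’s code:

    # A minimal moderation queue: readers flag a contribution,
    # and a human moderator reviews the queue later.
    flagged = []

    def report(post_id, reason):
        """Record a reader's complaint for later moderation."""
        flagged.append({"post": post_id, "reason": reason})

    report(42, "looks factual but appears to be wholly made up")
    print(flagged)  # the list a moderator works through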

Web2.0 is not a technology.  It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil.  It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention.  Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a Web2.0 concept that should guide you in your design work is iteration.  Nothing is ever complete, nor ever perfect.  The perfect is the enemy of the good, so if you wait for perfection, you will never release.  Instead, watch your users, see if they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work.  In their more uncharitable moments, do they abuse the freedoms you have given them?  If so, how can you redesign your work, and ‘nudge’ them into better behavior?  It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem.  And although Moses came down from Mount Sinai with all ten commandments, you can not and should not expect to get it right on a first pass.  Instead, release, observe, adapt, and re-release.  All releases are soft releases, everything is provisional, and nothing is quite perfect.  That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter.  Although they seem to be similar, they couldn’t be more different.  Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself.  If you want to have access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page.  Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut your application down if they don’t like it, or perceive it as somehow competitive with Facebook.  Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach.  From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users.  Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data.  Twitter provided very clear (and remarkably straightforward) instruction on how to access their data, and threw the gates open wide.

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks.  People have developed computer programs that send Tweets when the program is about to crash, created vast art projects which allow the public to participate from anywhere around the world, and even built a little belt worn by a pregnant woman which sends out a Tweet every time the baby kicks!  It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself.  Twitter has become a building block: when you write a program which needs to send a message, you use Twitter.  Facebook isn’t a building block.  It’s a monolith.

How do you build for openness?  Consider: another position the user might occupy is someone trying to use your work as a building block within their own project.  Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again?  Or is it opaque, seamless, and closed?  What about the data you collect, data the user has generated?  Where does that live?  Can it be exported and put to work in another application, or on another website?  Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful).  The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them.  If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
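
As a sketch of what leaving that door open can look like in practice, here is a tiny hypothetical web service written in Python with the Flask library, publishing user-contributed data as JSON so that someone else can build upon it; the route name and the data are invented for illustration:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # The data your users have contributed; a real system would keep this in a database.
    CONTRIBUTIONS = [
        {"id": 1, "author": "anon", "text": "a first contribution"},
        {"id": 2, "author": "anon", "text": "a second contribution"},
    ]

    @app.route("/api/contributions")
    def list_contributions():
        """Expose the collected data in a machine-readable form others can re-use."""
        return jsonify({"contributions": CONTRIBUTIONS})

    if __name__ == "__main__":
        app.run()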

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’  You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user.  These are not precisely the same Web2.0 domains others might identify.  That’s because Web2.0 has become a very ill-defined term.  It can mean whatever we want it to mean.  But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another.  In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create.  We need to make room for them.  If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

When I’m Sixty-Four

I: No Fate

I started using the World Wide Web in October of 1993.  To say that the Web was primitive and ugly at that early date is to miss the point completely, making fun of a baby just emerged from the womb.  It was as beautiful and full of potential as a new-born child for those who could see past what it was and look toward what it might become.  I’d been an apostle of hypertext for well over a decade before the Web came around, so I was ready.  I knew what it portended.  Even so, the past seventeen years have surpassed my wildest expectations.

I am forty-seven years old; in seventeen years I will be sixty-four.  It is as difficult to predict the Web of 2027 from 2010 as it was to predict the Web of 2010 from 1993.  Too much relies upon the ‘sensitive dependence on initial conditions’.  A teenager programming in a bedroom in Melbourne or Chongqing or Moscow could do something that changes everything.  For example, back in 1993 my friend Kevin Hughes decided to see if he could put an HTML anchor tag – which creates a hyperlink – around an image tag – which displays an image within the web page.  Voila – the Web button was born!  Most of the links we click on today are buttons, and most of us have no idea that the Web button isn’t even mentioned in the HTML specification.  Someone had to be inventive, to try an experiment, and – once it succeeded – share the results with the world.  That invention sent the Web into certain directions which led to the Web we have today.  If Kevin hadn’t developed the button, the Web might have remained text-based for much longer, which would have altered our experience and expectations.

Even more interesting are the technologies that get invented, then lie dormant for years before suddenly springing into life.  The wiki – essentially a web page which can be edited in place – was invented by Ward Cunningham in 1995.  Though it found a few modest uses, people did not comprehend the power of a Web page with an ‘edit’ button attached to it until the early years of this decade, when Wikipedia burst upon the scene.  Today a web page is considered somewhat dysfunctional unless editable by its users.

This happened to my own work.  In 1994 Tony Parisi and I blended my own work in virtual reality with the very first Web technologies to create the Virtual Reality Modeling Language, a 3D companion for HTML.  We had some ideas of what it could be used for, and when we offered it up to the world as an early open source software project, others came along with their own, amazing ideas:  3D encyclopedias whose representation reflected the tree of knowledge; animated tools for teaching American Sign Language; a visualization for the New York Stock Exchange which enabled a broker to absorb five thousand times as much information as was possible from a simple text display.  The future seemed rosy; Newsweek magazine dedicated a colourful two-page spread to the wonders of VRML.

But the future rarely arrives when planned, or in the form we expect.  Most PCs of the early Internet era didn’t have the speed required to display 3D computer graphics; they could barely keep up with a mixture of images and text on a web page.  And in the days before broadband, downloading a 3D model could take minutes.  Far longer than anyone cared to wait.  For users who had barely gotten their minds around the 2D web of HTML, asking them to grasp the 3D worlds of VRML was a bridge too far.  Until Toy Story and the Playstation came out in the mid-1990s, most people had no exposure to 3D computer graphics; today they’re commonplace, both in the cinema and in our living rooms.  We know how to use 3D to entertain us.  But 3D is still not a common part of our Web experience.  That is now changing with the advent of WebGL, a new technology which makes it easy to create 3D computer graphics within the Web browser.  It took sixteen years, but finally we’re seeing some great 3D on the Web.

The seeds of the future are with us in the present; it is up to us to water them, tend them, and watch them grow.  One of those seeds, with us in just this moment, is the ability to slap a GPS tracker on anything – a package or a person or a truck – hook it up to some sort of mobile, and instantly be aware of its every movement.  Parents give their children mobiles which relay a constant stream of location data to a website that those parents can use to monitor the child’s current whereabouts.  If this seems a trifle Orwellian, consider the tale told by Intel anthropologist Dr. Genevieve Bell, who interviewed a classroom of South Korean children, all of whom had these special tracking mobiles.  Did these devices make them feel too closely watched, too hemmed in?  Surprisingly, the children pointed to another member of the class, saying, “See that poor kid?  Her parents don’t love her enough to get her a tracking mobile.”  These kids love Big Brother.

Conversely, every attempt to place GPS trackers on the buses of Sydney – so that the public could have a good idea when the next bus will actually arrive at the stop – has been subject to furious rejection by the bus drivers’ union.  This, they claim, will allow the bosses to monitor their every movement.  It is Orwellian.  A bridge too far.  As a pedestrian in Sydney, constantly at the whim of public transport, I have to suffer because Sydney’s bus drivers believe their right to privacy on-the-job extends to the tool of their trade, that is, the bus itself.  I could argue that as a fare-payer, I have every right to know just where that bus is, and how long it will take to arrive.  There is the dilemma: How do we protect people, and make them feel secure in a world where ever more is being tracked?  In Melbourne all the trams are GPS tagged, and anyone can look up the precise location of any tram at any time.  It’s the future, the coming thing, and Melbourne has simply gotten there first.

In 2010 it is relatively expensive and somewhat bulky and power-hungry to track something, but in seventeen years it will be easy and tiny and cheap and probably solar-powered.  We will track everything of importance to us, all the time, and that leaves us with some big questions:  How will we deal with all of this locative data pouring upon us constantly?  Who needs access to this data?  And how can I provide access without compromising myself and my privacy?  As more things are tracked more comprehensively, it becomes possible to track something by absence as well as by presence.  The missing item sticks out.  That’s the kind of tool that governments, police and gangsters will all find useful.  Which means we’ll learn the art of hiding in plain sight, of disguising our comings and goings as something else altogether, a kind of magician’s redirection of the audience’s gaze.
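
Tracking by absence is easy to picture as code.  A hypothetical sketch in Python, with invented item names: compare the set of things you expect to report in against the set that actually did, and the missing item sticks out:

    # Everything carrying a tracker that should have reported in this hour.
    expected = {"package-17", "truck-4", "container-9"}

    # Everything that actually sent a location update.
    reported = {"package-17", "container-9"}

    missing = expected - reported
    print(missing)  # {'truck-4'} -- the absence is itself a signal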

There’s no escaping a future which is continuously recorded, tracked, and monitored.  This is the ‘Database Nation’ presented so chillingly in its more paranoid renderings.  But it’s only frightening if we deny ourselves agency, if we act as though we are merely prey to forces of capital and power far outside of our own control, if we simply surrender ourselves to the death of a thousand transactions.  And if we stumble into this future unconsciously, that’s pretty much what we’ll get.  Others will make the decisions we refused to make for ourselves.  But that’s not the way adults behave.  Adults are characterized by agency: they shape events where possible, and where that isn’t possible, they do their best to maintain some awareness of the events shaping them.

It is possible to resist, to push back against the forces which seek to measure us, monitor us, and ask us to comply.  Sydney’s bus drivers have done this, and as much as I might rue the shadow their decision casts over my own ability to plan my travel, I do not fault their reasons.  We should all consider how we need to push back against forces which seek to intrude and thereby control us.  We could benefit from a bit more disorder, disobedience, and resistance.  The future is unwritten.  Its seeds need not grow.  We can ignore them, or even cut them down when they spring up.  And we can plant other seeds, which enhance our agency, making us more powerful.

II: Close To You

It seems as though humans are solitary, almost lonely creatures.  If lucky, we manage to find a partner to pass through life with, to share and shoulder the burden of being.  We may have a few children, who we care for until they enter university or head off to work, and whom we see sporadically thereafter, just as we only occasionally visit our own parents.  This life pattern, known as the ‘nuclear family’, first identified in the middle of the 20th century, feels as though it’s the way things have always been – because it’s the way things have always been for us.  There may be subtle differences here and there – grandparents caring for grandchildren while parents work, or children who refuse to leave the nest even into their 30s – but these exceptions prove the existence of a rule which binds us all to certain socially acceptable behaviors, and sets our range of expectations.

This is not the way things have always been.  This is not the way things were at any point before the beginning of the 20th century.  Prior to that, entire families lived together, grandparents and grandchildren, aunts and uncles and cousins, everyone under one roof, pooling resources and pulling together to keep the family alive.  That life pattern goes well back into history – at least a few thousand years, probably all the way to the beginning of agriculture.  Before that, we lived within the close bonds of the tribal unit, foraging and hunting and moving continuously through the landscape.  That life pattern goes back countless millions of years, well before the emergence of a recognizable human species.

The tribe has always been large, much larger than the family unit, so it’s not surprising that the Nuclear Era leaves us feeling somewhat lonely, with the suspicion that something’s gone missing, that we’re not quite fulfilled.  We evolved in the close presence of others – and not just a few others.  We need that community in order to know who we are.  We were divorced from that complementary part of ourselves in the race into modernity.  We got lots of kit, but we lost a part of our soul.

This goes a long way to explain why the essential social technology – the mobile – has become such a roaring, overwhelming success.  The mobile reconnects us to the community our ancestors knew intimately and constantly.  Our family, friends and coworkers are no more than a few seconds away, always at hand.  We can look at our call logs and SMS message trails and get a good sense of who we really feel connected to.  If you want to know where someone’s heart is, follow their messages. That sinking feeling you get when you realize you’ve left your mobile at home – or, heavens forbid, misplaced it – is a sensation of amputation.  You feel cut off from the community that could help when you need it, or simply be there to listen.

Just a few years after we all acquired our mobiles, this social technology gained a double in the online world with the emergence of social networks such as Friendster and MySpace.  These social networks provide a digital scaffolding for the relationships we once enjoyed in our tribes.  They are a technology of retribalization, a chance to recover something lost.  We seem to instinctively recognize this, else why would Facebook have grown from 20 million to over 500 million members in just three years?  This is what it looks like when people suddenly find themselves with the ability to fulfill a long-term need.  This is not a new thing.  This is a very old thing, a core part of humanity coming to the fore.

At first all this connecting seems innocuous, little more than old friends becoming reacquainted after a long separation.  Nothing could be further from the truth, because these connections are established for a reason.  Some connections are drawn from the bonds of blood, others from friendship, others from financial interest (your co-workers), still others because you share some common passion, or goal, or vision.  It’s these last few which most interest me, because these are unpredicted, these aren’t simply the recovery of a prehistoric community, a recapitulation of things we already know.  These are connections with a purpose.

But what purpose?

Just in the past two or three years, researchers have been examining social networks – the real ones, not the online version – to understand what role they play in our lives.  The answers have been stunning.  It’s now been demonstrated that obesity and slimming spread through social networks: if you’re overweight it’s more likely your friends are, and if they go on a diet, you’re more likely to do so.  The same thing holds true for smoking and quitting.  Most recently it was shown that divorce spreads through social networks: a married couple with friends who are divorcing stands a greater chance of divorcing themselves.

Both obesity and smoking are public health issues; divorce is both a moral issue and a cultural hot potato.  We all know the divorce rate is high, but we haven’t had any good suggestions for how to bring that rate down – or consensus on whether we should.  But these studies seem to indicate that a tactic of strategic isolation of the divorcing from the married might go some distance to lowering the divorce rate.  In that sense, divorce itself becomes another ‘social disease’, and epidemiologists might be expected to track known cases through the community.

It all sounds a bit weird, doesn’t it?  Yet if someone were to suggest education and incentives to get these same networks to spread anti-smoking behaviors, we’d have the full weight of the state, the health system, and the community behind it.  Someone will suggest just that sometime in the next few years, with a growing awareness of the power of our communities to shape our behavior as individuals.

Yet these are just the obvious features of social networks.  Their power to define your identity and behavior goes far beyond this.  Consider: someone can walk into a bank today and steal your identity, taking out a loan in your name, if they present the proper documentation.  How is this possible?  It’s because the points system we use here in Australia – and equivalent algorithms used throughout the world to establish identity – can be fooled.  Stuff the right documents in, and out pop the appropriate approvals.  But shouldn’t I be required to provide proof from others?  Isn’t my identity contingent upon others willing to attest to it?  This isn’t the way we think of identity, it certainly doesn’t fit into any neat legal category, but it is how identity works in practice.  This is how identity has always worked.  People ‘run away from home’ to establish a new identity precisely because their identities are defined and constrained by those they are connected to, often in opposition to their own desires.

When you walk into the bank to apply for that loan, you need to provide identification; really, you should hand the bank your ‘social graph’ – the enumerated set of your connections – and let the bank judge your identity from that graph.  ASIO and MI5 and the CIA can already analyze your social graph to learn if you’re a terrorist, or a terrorist sympathizer; surely your bank can write a little bit of software which can confirm your identity?  It would help if the bank understood the strength of each of your connections, by analyzing the number of messages that have hopped the gap between you and those connected to you.  From this the bank would know who they should be asking to vouch for you.
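
To make that concrete, here’s a minimal sketch – not any bank’s actual system, and every name, number and threshold in it is invented – of how such a check might work: weight each connection by message traffic, pick out the strongest contacts, and require a handful of them to vouch for the applicant.

```python
# A minimal sketch, not any bank's actual system: the applicant's strongest
# contacts (by message traffic) are asked to vouch for them.
from typing import Dict, List

# social graph: person -> {contact: number of messages exchanged}
SocialGraph = Dict[str, Dict[str, int]]

def strongest_contacts(graph: SocialGraph, person: str, k: int = 5) -> List[str]:
    """Return the k contacts with the heaviest message traffic."""
    contacts = graph.get(person, {})
    return sorted(contacts, key=contacts.get, reverse=True)[:k]

def verify_identity(graph: SocialGraph, applicant: str,
                    vouches: Dict[str, bool], required: int = 3) -> bool:
    """Accept the applicant only if enough of their strongest contacts vouch."""
    confirmations = sum(1 for contact in strongest_contacts(graph, applicant)
                        if vouches.get(contact, False))
    return confirmations >= required

# Example: three of Alice's five closest contacts confirm who she is.
graph = {"alice": {"bob": 240, "carol": 180, "dave": 90, "erin": 30, "frank": 12}}
print(verify_identity(graph, "alice", {"bob": True, "carol": True, "dave": True}))
```

The design point is that the bank never judges the paperwork alone; it asks the community the paperwork claims to represent.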

All of this sounds complicated, and will probably be more involved than our simple but spoofable systems in use today.  The end result will be a system with much greater resilience, one much harder to fool, because we’re capitalizing on the fact that identity is a function of our community.  And not just identity: talent is also something that is both a function of and a recognized value within a community.  LinkedIn provides a mechanism for individuals to present their social graphs to potential employers; that social graph tells a recruiter more about the individual than any c.v.

The social graph is the foundation for identity; it always has been, but during the last hundred years we fragmented and atomized, and our social graphs began to atrophy.  We have now retrieved them, and because of that we can demonstrate that our value is derived from what others think of us.  This has always been true, but now those others are no Stone Age tribe, but rather communities of expertise which may be global, highly specific, and fiercely competitive.  These new communities have so much collected, connected capability that they can ignore all of the neat boundaries of an organization, can play outside the silos of business and government, and do as they please.  A group of well-connected, highly empowered individuals is a force to be reckoned with.  It always has been.

III: Senior Concessions

Last month the “Health Lies in Wealth” report surprised almost no one when it announced that the wealthiest among us live, on average, three years longer than the poorest.  The report identified many cofactors to life expectancy, such as graduating school, owning your own home, and – most surprisingly – the presence of a strong social network.  People who live alone do not thrive.  We know that in our bones.  We understand that ‘no man is an island’, that we actually do need one another to survive, just as we always have.  Only in close connection with others can we receive the support we need to live out the full span of our lives.  This support might help us to maintain our weight, or quit smoking, or stay faithful, or simply remind us to take care of ourselves.  Whatever form it comes in, it has become clear that it is essential.

If it is essential, should we leave it to the ad hoc, ‘natural’ social networks we’ve all been blessed with (and which fail, for some)?  Shouldn’t we apply what we’ve learned about digital social networks directly to our well-being?  This is something that a fourteen-year-old wouldn’t think about as they sign up for a Facebook account, but when I’m sixty-four, it will be foremost in my mind.  How can my network keep me healthy?  How can my network assist me in wellness?

At one level this is completely obvious: the tight circle of family and friends, better connected than it has ever been, with better tools both for messaging and monitoring, allows us to ‘look in’ on one another in a way we never have before.  A case in point: my morning medication to control my blood pressure – did I take it?  Sometimes even I can’t remember until I’ve looked at the packaging.  When I get a little older, and a bit more absent-minded, this will become a constant concern.  We’ve already seen the first medicine cabinets which record their openings and closings, the first pill bottles which note when they’ve been used, all information that is monitored, collated, and which can then be distributed through ad-hoc familial or more formal digital social networks.
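
To sketch how the pill-bottle scenario might hang together – with the bottle hardware, the alert channel and the family contacts all hypothetical stand-ins – consider something like this:

```python
# A minimal sketch, assuming hypothetical hardware and notification hooks:
# the bottle logs each opening, and the family network is alerted only when
# an expected dose appears to have been missed.
from datetime import datetime, timedelta
from typing import List

class MonitoredPillBottle:
    def __init__(self, medication: str, dose_interval_hours: int = 24):
        self.medication = medication
        self.dose_interval = timedelta(hours=dose_interval_hours)
        self.openings: List[datetime] = []          # recorded by the bottle itself

    def record_opening(self, when: datetime) -> None:
        self.openings.append(when)

    def missed_dose(self, now: datetime) -> bool:
        """True if no opening has been recorded within the expected interval."""
        if not self.openings:
            return True
        return now - self.openings[-1] > self.dose_interval

def notify_family(bottle: MonitoredPillBottle, contacts: List[str]) -> None:
    # Stand-in for whatever messaging channel the family network actually uses.
    for person in contacts:
        print(f"Alert to {person}: {bottle.medication} may not have been taken today.")

bottle = MonitoredPillBottle("blood pressure medication")
bottle.record_opening(datetime(2010, 11, 1, 8, 0))
if bottle.missed_dose(datetime(2010, 11, 2, 12, 0)):
    notify_family(bottle, ["daughter", "son"])
```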

When the next version of Apple’s iPad comes out early next year, it will have a built-in camera to enable video conferencing.  One of my good friends – who lives on the other side of the continent from his elderly parents – will buy them an iPad, and Velcro it to the wall of their kitchen, so that he can always ‘beam in’ and see what’s going on, and so that, for them, he’s no more than a tap away.  That’s the first and somewhat clumsy version of systems which will continuously monitor the elderly, the frail, and the troubled.  Who’s going to be on the other side of all of those cameras?  Loved ones, mostly, though some of that will be automated, as seems prudent.  This is the inverse of the ‘surveillance culture’ of pervasive CCTV cameras observed by police and counter-terrorism officials; in this world of ‘sousveillance’, everyone is watching everyone else, all the time, and all to the good.  And, unlike Sydney’s bus drivers, we’ll recognize the value of this close monitoring, because it won’t represent an adversarial relationship.  No one will be using this data to wreck our careers or disturb our lives.  We’ll be using it to help one another live longer, and healthier.

What about connections that are slightly less obvious?  We’ve already seen the emergence of ‘Wikimedicine’, where patients band together and share information in an attempt to go beyond what specialists are willing to do in treating a particular condition.  These communities are, quite naturally, full of hoaxes and quacks and misinformation of every conceivable type: individuals fighting for their lives or waging war against chronic illnesses are susceptible to all sorts of tricks and flim-flammery and honest and earnest failures in understanding.  This has happened because individuals enter these networks of hope without their sensible networks of trust.  We have no way to present our social graphs to one another in these environments, to show our bona fides.  If we could (and I’m sure we will soon be able to do so) we could quickly establish who brings real value, insight and wisdom into a conversation.  We would also be able to identify those who seek to confuse, or who are confused, and those who are self-seeking.  That would be clear from their social graphs.  This is a trick eBay learned long ago: if you can see how a buyer or seller has been rated, you have some sense of whether they’ll be reputable.  Such systems are never perfect, but we can expect a continuous improvement in our own ability to detect fraudulent social graphs over the next several years.
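
The eBay comparison suggests the simplest possible version of such a filter: aggregate the community’s ratings of a participant, discounting raters whose own standing is weak.  Here’s a toy sketch, with invented names and numbers, assuming nothing about how any real health community would actually score its members:

```python
# Toy reputation sketch: a participant's score is the average of the ratings
# they've received, each weighted by the rater's own (estimated) reputation.
from typing import Dict, List, Tuple

# ratings: participant -> list of (rater, score between 0 and 1)
Ratings = Dict[str, List[Tuple[str, float]]]

def reputation(ratings: Ratings, person: str, depth: int = 2) -> float:
    """Weight each rating by the rater's own recursively estimated standing."""
    if depth == 0 or person not in ratings:
        return 0.5                      # unknown participants start in the middle
    weighted, total = 0.0, 0.0
    for rater, score in ratings[person]:
        w = reputation(ratings, rater, depth - 1)
        weighted += w * score
        total += w
    return weighted / total if total else 0.5

ratings = {
    "helpful_patient": [("gp_registrar", 0.9), ("long_timer", 0.8)],
    "miracle_cure_guy": [("brand_new_account", 1.0), ("long_timer", 0.1)],
    "long_timer": [("gp_registrar", 0.9)],
    "gp_registrar": [("long_timer", 0.8)],
}
print(reputation(ratings, "helpful_patient"))    # high
print(reputation(ratings, "miracle_cure_guy"))   # low: its only praise comes from a brand-new account
```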

While these Wikimedicine networks are interesting and will grow in number, they tend to exclude the medical community, turning their backs upon it in the search for more effective treatments.  That creates a gap which must be filled.  As doctors and nurse practitioners grow more comfortable with close connectivity to their patients, we’ll see the emergence of a new kind of medical network, one which places the patient at the center, and which radiates out in a few directions: to the patient’s family and social graph; to the patient’s medical team and their professional social graph; to the patient’s community of the co-afflicted.  Each of these communities, effectively isolated from one another at present, will grow closer together in order to improve the welfare of the patient.

As these communities grow closer together, knowledge will pass from one community to another.  The doctor will remain the locus of knowledge and experience, but already some of that power is passing to the nurse practitioner, who acts as a mid-way point between the doctor and the broader, connected community.  The nurse practitioner will need to act as the ‘filter’, ensuring that the various requests and inquiries that come from the community are addressed, but in such a way that the doctor still has time to work.  That’s not a secretarial role, but rather, a partnership of professionals.  Deep knowledge is required to stand between the doctor and the community; as time goes on, as knowledge is transferred to the community, the community empowers itself and assumes some of the functions of both doctor and nurse practitioner.

When I’ve expressed such thoughts to medical professionals, they reject them out of hand.  They contend that there is too much experience, too much knowledge resident in the body of the doctor for that capacity to spread safely.  I doubt this is as true as they might wish it to be.  Yes, medicine is rich and detailed and draws from the physician’s extensive body of experience, but we are building systems which can provide much of that, on demand, to almost anyone.  We won’t be getting rid of the physician – far from it – but the boundaries between the physician and the community will become fuzzier, with the physician remaining the local expert, but not the only one.

This is a new kind of medicine, a new kind of wellness, a system we will not see fully in place until well after I turn sixty-four. Perhaps by the time I’m eighty-four – in 2046 – medicine will have ‘melted’ into a more communal form.  Until then, from a policy point of view – since you are the people who make policy – I’d advise that you tend toward flexibility.  Rigidity is a poor fit for a highly-connected world.  People will tend to ignore rigid structures, creating their own ad-hoc organizations which will compete with and eventually displace yours, if they serve the needs of the patient more effectively.

At the same time, the dilemmas of a highly-monitored world will become more and more prevalent.  We treasure medical privacy, but what we really mean by this is that we want medical data to be freely available to everyone who needs it, while securely protected from anyone who does not.  This is a problem that can only be resolved if the patient has some agency in authorizing access to medical records, and tools that can track that access.  Without those tools, the patient will lose track of who knows what, and it becomes easier for someone who shouldn’t to have a look in.  As our medical records spread through our networks of expertise – the better to treat us – we may lose our fear and feel more willing to surrender our privacy.  We’re a long way away from that world, but we can see how it may eventuate.

As I said at the beginning, it’s difficult to know the shape of the future.  So much depends on the actions we take today, the seeds we choose to water.  I have shown you a few of these seeds: a world where everything is monitored; a human universe grown close with ever-present social networks; a medicine more diffuse and more effective than the one we practice today.   All of these seeds are present in this moment, all of them will affect you in your work, all will drive your decisions, and – should you ignore them – all will force you into sudden policy responses.  The future is connected in a way we could not conceive of a generation ago, in a way that our great-great-grandparents would consider unremarkable.  We’re returning to an old place, but with new tools, and that combination will change everything, whether or not we see it coming, whether or not we want it to come.

Connecting to The Social Network

(Warning, this analysis is essentially a huge spoiler for The Social Network.  You may not want to read this until you’ve seen the film.)

I am a serial entrepreneur.  At various times I started companies to exploit hypertext (this, back in 1986, before most people had even heard of it), home VR entertainment systems (when a VR system cost more than $100K), Web-based interactive 3D computer graphics (before most computers had enough oomph to draw them), and animated webisodic entertainment (half a dozen years before Red vs Blue or Happy Tree Friends burst onto the scene).  All of these ideas were innovative for their time, one was modestly successful, none of them made me rich, though I hope I am in some ways wiser.

Throughout all of those years, I learned that ideas, while important, take a back seat to people.  Business is principally the story of people, not ideas.  While great ideas are not terrifically common, the ability to translate an idea into reality requires more than just a driven creator.  That creator must be able to infect others with their own belief so that the entire creative edifice self-assembles, driven by that belief, with the creator as the burning, electric center of this process.

If this sounds a bit mystical, that’s because creation is essentially a mystical act.  It is an act that requires belief.  It’s an active position, a kind of faith, the evidence of things not seen.  You believe because you choose to believe.

That choice is at the very core of The Social Network.

Although it disguises itself as a courtroom drama – an area that writer Aaron Sorkin knows very well, having mined it for A Few Good Men – The Social Network is at its heart a buddy picture, a tale of a broken bromance that never resolves.  The bromantic partners are, of course, Mark Zuckerberg, the well-known founder of Facebook, and Eduardo Saverin, Zuckerberg’s best friend at Harvard, and the dude Zuck turned to when he had the Big Idea.

The genesis of this Big Idea is the ‘B’ storyline of The Social Network, and the one that Lawrence Lessig spent the better part of a New Republic film review agonizing over.  Lessig, the Intellectual Property lawyer, sees the script as a Hollywood propaganda vehicle in defense of intellectual property.  Did Zuckerberg steal the idea for TheFacebook.com from the twin Winklevoss brothers?  The only original thing that the Winklevosses offered was the ‘velvet rope’ – TheFaceBook.com or HarvardConnect or ConnectU would be exclusive to Harvard students.  Social networks had been around for a while; six months before Zuckerberg began the late 2003 coding spree that led to the launch of TheFacebook.com, I was happily addicted to the ‘web crack’ of Friendster.com – as were many of my friends.  Nothing new there.  Exclusivity is an attitude, not a product.  Zuckerberg copied nothing.  He simply copped the attitude of the Winklevii.

In the logic of The Social Network, the Winklevoss twins are not friends (Zuckerberg doesn’t get beyond the Bike Room of the Porcellian Club), therefore are owed nothing.  But Zuckerberg immediately runs to Saverin, his One True Friend, to offer him half of everything.  Or, well, nearly half.  Thirty percent, and that for just a thousand dollars in servers.  Such a deal!  All Saverin had to do was believe.  What follows in the next 40 minutes is the essential bromantic core of the film, which parallels more startup stories than I can count: two people who deeply believe in one another’s vision, working day and night to bring it into being.  In this case, Zuckerberg wrote the code, while Saverin – well, he just believed in Zuckerberg.  Zuckerberg needed someone to believe in him, someone to supply him with the faith that he could translate into raw creative energy.

For a while this dynamo cranks along, but we continually see Saverin being pulled in a different direction – exemplified by the Phoenix Club sliding tempting notes beneath his door.  Saverin’s embrace of the material, away from the pure and Platonic realm of code and ideas, can only be seen as backsliding by Zuckerberg, who feverishly focuses on the act of creation, ignoring everything else.  Only when the two men receive side-by-side blowjobs in the bathroom stalls of a Cambridge bar do you sense the bond rekindled; like sailors on shore leave, their conquests are meaningful only when shared.

From this point on, The Social Network charts a descent into confusion and toward the inevitable betrayal which forms the pivot of the film.  Saverin wants to ‘wreck the party’ by introducing advertising into TheFacebook.com (a business strategy which currently earns Facebook in excess of a billion dollars a year), and drags Zuckerberg to meeting after meeting with the New York agencies, whence it becomes clear that Zuckerberg isn’t interested – isn’t even tolerant of the idea – and that Saverin just Doesn’t Get It.  While Saverin sees the potential of TheFacebook.com, he doesn’t believe, doesn’t understand how what Mark has done Will Change Everything.

There is one final reprieve: Saverin cuts a check to rent a house in Palo Alto for Zuckerberg and his interns, a final act of faith that reaffirms his connection to Mark, to the project, to the Big Idea.  This is the setup for the Judas Kiss: within a few months, Saverin withdraws the funds, essentially saying ‘I have lost faith.’  But Zuckerberg has found others who will believe in him, secured a half-million dollars in angel funding, and so discards the worthless, unfaithful Saverin.

If Saverin had stayed true, had gone to California and worked closely with Zuckerberg, this would be a different story, a story about Facebook’s co-founders, and how together they overcame the odds to launch the most successful enterprise of the 21st century.  This is not that story.  This is a story of bromance spurned, and how that inevitably ended up in the courts.  Only when people fail to connect (a recurring theme in The Social Network) do they turn to lawyers.  Zuckerberg was always there, anxious for Saverin to connect.  Saverin was always looking elsewhere for his opportunities.  That’s the tragedy of the story, a story which may not be true in all facts, but which speaks volumes of human truth.

And so a film about entrepreneurs, ideas, and code, a chronicle of theft and betrayal and backstabbing all fades away to reveal a much older tale, of loneliness and faith and brotherhood and heartbreak.  We’re wired together, but we’re still exactly the same, punching that refresh button, hoping our status will change.

Mothers of Innovation

Introduction:  Olden Days

In February 1984, seeking a reprieve from the very cold and windy streets of Boston, Massachusetts, I ducked inside a computer store.  I spied the normal array of IBM PCs and peripherals, the Apple ][, probably even an Atari system.  Prominently displayed at the front of the store was my first Macintosh.  It wasn’t known as a Mac 128K or anything like that.  It was simply Macintosh.  I walked up to it, intrigued – already, the Reality Distortion Field was capable of luring geeks like me to their doom – and took in the unfamiliar graphical desktop and the cute little mouse.  Sitting down at the chair before the machine, I grasped the mouse, and moved the cursor across the screen.  But how do I get it to do anything? I wondered.  Click.  Nothing.  Click, drag – oh look, some of these things changed color!  But now what?  Gah.  This is too hard.

That’s when I gave up, pushed myself away from that first Macintosh, and pronounced this experiment in ‘intuitive’ computing a failure.  Graphical computing isn’t intuitive, that’s a bit of a marketing fib.  It’s a metaphor, and you need to grasp the metaphor – need to be taught what it means – to work fluidly within the environment.  The metaphor is easy to apprehend if it has become the dominant technique for working with computers – as it has in 2010.  Twenty-six years ago, it was a different story.  You can’t assume that people will intuit what to do with your abstract representations of data or your arcane interface methods.  Intuition isn’t always intuitively obvious.

A few months later I had a job at a firm which designed bar code readers.  (That, btw, was the most boring job I’ve ever had, the only one I got fired from for insubordination.)  We were designing a bar code reader for Macintosh, so we had one in-house, a unit with a nice carrying case so that I could ‘borrow’ it on weekends.  Which I did.  Every weekend.  The first weekend I got it home, unpacked it, plugged it in, popped in the system disk, booted it, ejected the system disk, popped in the applications disk, and worked my way through MacPaint and MacWrite and on to my favorite application of all – Hendrix.

Hendrix took advantage of the advanced sound synthesis capabilities of Macintosh.  Presented with a perfectly white screen, you dragged the mouse along the display.  The position, velocity, and acceleration of the pointer determined what kind of heavily altered but unmistakably guitar-like sounds came out of the speaker.  For someone who had lived with the bleeps and blurps of the 8-bit world, it was a revelation.  It was, in the vernacular of Boston, ‘wicked’.  I couldn’t stop playing with Hendrix.  I invited friends over, showed them, and they couldn’t stop playing with Hendrix.  Hendrix was the first interactive computer program that I gave a damn about, the first one that really showed me what a computer could be used for.  Not just pushing paper or pixels around, but an instrument, and an essential tool for human creativity.

Everything that’s followed in all the years since has been interesting to me only when it pushes the boundaries of our creativity.  I grew entranced by virtual reality in the early 1990s, because of the possibilities it offered up for an entirely new playing field for creativity.  When I first saw the Web, in the middle of 1993, I quickly realized that it, too, would become a cornerstone of creativity.  That roughly brings us forward from the ‘olden days’, to today.

This morning I want to explore creativity along the axis of three classes of devices, as represented by the three Apple devices that I own: the desktop (my 17” MacBook Pro Core i7), the mobile (my iPhone 3GS 32GB), and the tablet (my iPad 16GB 3G).  I will draw from my own experience as both a user and developer for these devices, using that experience to illuminate a path before us.  So much is in play right now, so much is possible, all we need do is shine a light to see the incredible opportunities all around.

I:  The Power of Babel

I love OSX, and have used it more or less exclusively since 2003, when it truly became a useable operating system.  I’m running Snow Leopard on my MacBook Pro, and so far have suffered only one Grey Screen Of Death.  (And, if I know how to read a stack trace, that was probably caused by Flash.  Go figure.)  OSX is solid, it’s modestly secure, and it has plenty of eye candy.  My favorite bit of that is Spaces, which allows me to segregate my workspace into separate virtual screens.

Upper left hand space has Mail.app, upper right hand has Safari, lower right hand has TweetDeck and Skype, while the lower left hand is reserved for the task at hand – in this case, writing these words.  Each of the apps, except Microsoft Word, is inherently Internet-oriented, an application designed to facilitate human communication.  This is the logical and inexorable outcome of a process that began back in 1969, when the first nodes began exchanging packets on the ARPANET.  Phase one: build the network.  Phase two: connect everything to the network.  Phase three: PROFIT!

That seems to have worked out pretty much according to plan.  Our computers have morphed from document processors – that’s what most computers of any stripe were used for until about 1995 – into communication machines, handling the hard work of managing a world that grows increasingly connected.  All of this communication is amazing and wonderful and has provided the fertile ground for innovations like Wikipedia and Twitter and Skype, but it also feels like too much of a good thing.  Connection has its own gravitational quality – the more connected we become, the more we feel the demand to remain connected continuously.

We salivate like Pavlov’s dogs every time our email application rewards us with the ‘bing’ of an incoming message, and we keep one eye on Twitter all day long, just in case something interesting – or at least diverting – crosses the transom.  Blame our brains.  They’re primed to release the pleasure neurotransmitter dopamine at the slightest hint of a reward; connecting with another person is (under most circumstances) a guaranteed hit of pleasure.

That’s turned us into connection junkies.  We pile connection upon connection upon connection until we numb ourselves into a zombie-like overconnectivity, then collapse and withdraw, feeling the spiral of depression as we realize we can’t handle the weight of all the connections that we want so desperately to maintain.

Not a pretty picture, is it?   Yet the computer is doing an incredible job, acting as a shield between what our brains are prepared to handle and the immensity of information and connectivity out there.  Just as consciousness is primarily the filtering of signal from the noise of the universe, our computers are the filters between the roaring insanity of the Internet and the tidy little gardens of our thoughts.  They take chaos and organize it.  Email clients are excellent illustrations of this; the best of them allow us to sort and order our correspondence based on need, desire, and goals.  They prevent us from seeing the deluge of spam which makes up more than 90% of all SMTP traffic, and help us to stay focused on the task at hand.

Electronic mail was just the beginning of the revolution in social messaging; today we have Tweets and instant messages and Foursquare checkins and Flickr photos and YouTube videos and Delicious links and Tumblr blogs and endless, almost countless feeds.  All of it recommended by someone, somewhere, and all of it worthy of at least some of our attention.  We’re burdened by too many web sites and apps needed to manage all of this opportunity for connectivity.  The problem has become most acute on our mobiles, where we need a separate app for every social messaging service.

This is fine in 2010, but what happens in 2012, when there are ten times as many services on offer, all of them delivering interesting and useful things?  All these services, all these websites, and all these little apps threaten to drown us with their own popularity.

Does this mean that our computers are destined to become like our television tuners, which may have hundreds of channels on offer, but never see us watch more than a handful of them?  Do we have some sort of upper boundary on the amount of connectivity we can handle before we overload?  Clay Shirky has rightly pointed out that there is no such thing as information overload, only filter failure.  If we find ourselves overwhelmed by our social messaging, we’ve got to build some better filters.

This is the great growth opportunity for the desktop, the place where the action will be happening – when it isn’t happening in the browser.  Since the desktop is the nexus of the full power of the Internet and the full set of your own data (even the data stored in the cloud is accessed primarily from your desktop), it is the logical place to create some insanely great next-generation filtering software.

That’s precisely what I’ve been working on.  This past May I got hit by a massive brainwave – one so big I couldn’t ignore it, couldn’t put it down, couldn’t do anything but think about it obsessively.

I wanted to create a tool that could aggregate all of my social messaging – email, Twitter, RSS and Atom feeds, Delicious, Flickr, Foursquare, and on and on and on.  I also wanted the tool to be able to distribute my own social messages, in whatever format I wanted to transmit, through whatever social message channel I cared to use.

Then I wouldn’t need to go hither and yon, using Foursquare for this, and Flickr for that and Twitter for something else.  I also wouldn’t have to worry about which friends used which services; I’d be able to maintain that list digitally, and this tool would adjust my transmissions appropriately, sending messages to each as they want to receive them, allowing me to receive messages from each as they care to send them.

That’s not a complicated idea.  Individuals and companies have been nibbling around the edges of it for a while.

I am going the rest of the way, creating a tool that functions as the last 'social message manager' that anyone will need.  It’s called Plexus, and it functions as middleware – sitting between the Internet and whatever interface you might want to cook up to view and compose all of your social messaging.
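
For those who think in code, here’s a rough sketch of the middleware idea – emphatically not Plexus’s actual source, just hypothetical channel adapters behind one common interface, so a single outgoing message fans out to whichever service each recipient prefers:

```python
# Not Plexus's actual code - just a sketch of the middleware idea under
# hypothetical names: channel adapters behind one common interface.
from abc import ABC, abstractmethod
from typing import Dict, List

class Channel(ABC):
    """One social-messaging service (email, Twitter, Flickr, and so on)."""
    @abstractmethod
    def send(self, recipient: str, message: str) -> None:
        ...

class EmailChannel(Channel):
    def send(self, recipient: str, message: str) -> None:
        print(f"[email] to {recipient}: {message}")        # stand-in for SMTP

class TwitterChannel(Channel):
    def send(self, recipient: str, message: str) -> None:
        print(f"[twitter] @{recipient}: {message[:140]}")  # stand-in for the API

class MessageManager:
    """The middleware: routes one outgoing message per recipient preference."""
    def __init__(self, channels: Dict[str, Channel], preferences: Dict[str, str]):
        self.channels = channels
        self.preferences = preferences   # recipient -> preferred channel name

    def broadcast(self, recipients: List[str], message: str) -> None:
        for r in recipients:
            channel = self.channels[self.preferences.get(r, "email")]
            channel.send(r, message)

manager = MessageManager(
    channels={"email": EmailChannel(), "twitter": TwitterChannel()},
    preferences={"alice": "twitter", "bob": "email"},
)
manager.broadcast(["alice", "bob"], "Dinner recommendations for Canberra, anyone?")
```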

Now were I devious, I’d coyly suggest that a lot of opportunity lies in building front-end tools for Plexus, ways to bring some order to the increasing flow of social messaging.  But I’m not coy.  I’ll come right out and say it: Plexus is an open-source project, and I need some help here.  That’s a reflection of the fact that we all need some help here.  We’re being clubbed into submission by our connectivity.  I’m trying to develop a tool which will allow us to create better filters, flexible filters, social filters, all sorts of ways of slicing and dicing our digital social selves.  That’s got to happen as we invent ever more ways to connect, and as we do all of this inventing, the need for such a tool becomes more and more clear.

We see people throwing their hands up, declaring ‘email bankruptcy’, quitting Twitter, or committing ‘Facebookicide’, because they can’t handle the consequences of connectivity.

We secretly yearn for that moment after the door to the aircraft closes, and we’re forced to turn our devices off for an hour or two or twelve.  Finally, some time to think.  Some time to be.  Science backs this up; the measurable consequence of over-connectivity is that we don’t have the mental room to roam with our thoughts, to ruminate, to explore and play within our own minds.  We’re too busy attending to the next message.  We need to disconnect periodically, and focus on the real.  We desperately need tools which allow us to manage our social connectivity better than we can today.

Once we can do that, we can filter the noise and listen to the music of others.  We will be able to move so much more quickly – together – it will be another electronic renaissance: just like 1994, with Web 1.0, and 2004, with Web 2.0.

That’s my hope, that’s my vision, and it’s what I’m directing my energies toward.  It’s not the only direction for the desktop, but it does represent the natural evolution of what the desktop has become.  The desktop has been shaped not just by technology, but by the social forces stirred up by our technology.

It is not an accident that our desktops act as social filters; they are the right tool at the right time for the most important job before us – how we communicate with one another.  We need to bring all of our creativity to bear on this task, or we’ll find ourselves speechless, shouted down, lost at another Tower of Babel.

II: The Axis of Me-ville

Three and a half weeks ago, I received a call from my rental agent.  My unit was going on the auction block – would I mind moving out?  Immediately?  I’ve lived in the same flat since I first moved to Sydney, seven years ago, so this news came as quite a shock.

I spent a week going through the five stages of grief: denial, anger, bargaining, depression, and acceptance.  The day I reached acceptance, I took matters in hand, the old-fashioned way: I went online, to domain.com.au, and looked for rental units in my neighborhood.

Within two minutes I learned that there were two units for rent within my own building!

When you stop to think about it, that’s a bit weird.  There were no signs posted in my building, no indication that either of the units was for rent.  I’d heard nothing from the few neighbors I know well enough to chat with.  They didn’t know either.  Something happening right underneath our noses – something of immediate relevance to me – and none of us knew about it.  Why?  Because we don’t know our neighbors.

For city dwellers this is not an unusual state of affairs.  One of the pleasures of the city is its anonymity.  That’s also one of its great dangers.  The two go hand-in-hand.  Yet the world of 2010 does not offer up this kind of anonymity easily.  Consider: we can re-establish a connection with someone we went to high school with, thirty years ago – and really never thought about in all the years that followed – but still not know the names of the people in the unit next door, names you might utter with bitter anger after they’ve turned up the music again.  How can we claim that there’s any social revolution if we can’t be connected to people whom we’re physically close to?  Emotional closeness is important, and financial closeness (your coworkers) is also salient, but both should be trumped by the people who breathe the same air as you.

It is almost impossible to bridge the barriers that separate us from one another, even when we’re living on top of each other.

This is where the mobile becomes important, because the mobile is the singular social device.  It is the place where all of our human relationships reside.  (Plexus is eventually bound for the mobile, but in a few years’ time, when the devices are nimble enough to support it.)  Yet the mobile is more than just the social crossroads.  It is the landing point for all of the real-time information you need to manage your life.

On the home page of my iPhone, two apps stand out as the aids to the real-time management of my life: RainRadar AU and TripView.  I am a pedestrian in Sydney, so it’s always good to know when it’s about to rain, how hard, and how long.  As a pedestrian, I make frequent use of public transport, so I need to know when the next train, bus or ferry is due, wherever I happen to be.  The mobile is my networked, location-aware sensor.  It gathers up all of the information I need to ease my path through life.  This demonstrates one of the unstated truisms of the 21st century: the better my access to data, the more effective I will be, moment to moment.  The mobile has become that instantaneous access point, simply because it’s always at hand, or in the pocket or pocketbook or backpack.  It’s always with us.

In February I gave a keynote at a small Melbourne science fiction convention.  After I finished speaking a young woman approached me and told me she couldn’t wait until she could have some implants, so her mobile would be with her all the time.  I asked her, “When is your mobile ever more than a few meters away from you?  How much difference would it make?  What do you gain by sticking it underneath your skin?”  I didn’t even bother to mention the danger from all that subcutaneous microwave radiation.  It’s silly, and although our children or grandchildren might have some interesting implants, we need to accept the fact that the mobile is already a part of us.

We’re as Borg-ed up as we need to be.  Probably we’re more Borg-ed up than we can handle.

It’s not just that our mobiles have become essential.  It’s getting so that we can’t put them down, even in situations when we need to focus on the task at hand – driving, or having dinner with our partners, or trying to push a stroller across an intersection.  We’re addicted, and the first step to treating that addiction is to admit we have a problem.  But here’s the dilemma: we’re working hard to invent new ways to make our mobiles even more useful, indispensable and alluring.

We are the crack dealers.  And I’m encouraging you to make better crack.  Truth be told, I don’t see this ‘addiction’ as a bad thing, though goodness knows the tabloid newspapers and cultural moralists will make whatever they can of it.  It’s an accommodation we will need to make, a give-and-take.  We gain an instantaneous connection to one another, a kind of cultural ‘telepathy’ that would have made Alexander Graham Bell weep for joy.

But there's more: we also gain a window into the hitherto hidden world of data that is all around us, a shadow and double of the real world.

For example, I can now build an app that allows me to wander the aisles of my local supermarket, bringing all of the intelligence of the network with me as I shop.  I hold the mobile out in front of me, its camera capturing everything it sees, which it passes along to the cloud, so that Google Goggles can do some image processing on it, and pick out the identifiable products on the shelves.

This information can then be fed back into a shopping list – created by me, or by my doctor, or by my bank account – because I might be trying to optimize for my own palate, my blood pressure, or my budget – and as I come across the items I should purchase, my mobile might give a small vibration.  When I look at the screen, I see the shelves, but the items I should purchase are glowing and blinking.
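
For the developers reading along, the loop looks roughly like this – the image-recognition call is a stand-in rather than Google Goggles’ real interface, and the lists are invented:

```python
# A rough sketch of the shopping-list idea: recognise what the camera sees,
# intersect it with the active list, and buzz when there's a match.
from typing import Set

def recognise_products(camera_frame: bytes) -> Set[str]:
    """Stand-in for a cloud image-recognition call returning product names."""
    return {"rolled oats", "soft drink", "olive oil"}   # pretend these were seen

def items_to_highlight(frame: bytes, shopping_list: Set[str]) -> Set[str]:
    """Which products visible in this frame are on the active list?"""
    return recognise_products(frame) & shopping_list

# Lists might be composed by me, my doctor, or my bank: palate, blood pressure, or budget.
my_list = {"rolled oats", "coffee", "olive oil"}
doctor_list = {"rolled oats"}
active_list = my_list | doctor_list

matches = items_to_highlight(b"<camera frame>", active_list)
if matches:
    print("vibrate!", matches)   # the phone buzzes; these items glow on screen
```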

The technology to realize this – augmented reality with a few extra bells and whistles – is already in place.  This is the sort of thing that could be done today, by someone enterprising enough to knit all these separate threads into a seamless whole.  There’s clearly a need for it, but that’s just the beginning.  This is automated, computational decision making.  It gets more interesting when you throw people into the mix.

Consider: in December I was on a road trip to Canberra.  When I arrived there, at 6 pm, I wondered where to have dinner.  Canberra is not known for its scintillating nightlife – I had no idea where to dine.  I threw the question out to my 7000 Twitter followers, and in the space of time that it took to shower, I had enough responses that I could pick and choose among them, and ended up having the best bowl of seafood laksa that I’d had since I moved to Australia!

That’s the kind of power that we have in our hands, but don’t yet know how to use.

We are all well connected, instantaneously and pervasively, but how do we connect without confusing ourselves and one another with constant requests?  Can we manage that kind of connectivity as a background task, with our mobiles acting as the arbiters?  The mobile is the crossroads, between our social lives, our real-time lives, and our data-driven selves.  All of it comes together in our hands.  The device is nearly full to exploding with the potentials unleashed as we bring these separate streams together.  It becomes hypnotizing and formidable, though it rings less and less.  Voice traffic is falling nearly everywhere in the developed world, but mobile usage continues to skyrocket.  Our mobiles are too important to use for talking.

Let’s tie all of this together: I get evicted, and immediately tell my mobile, which alerts my neighbors and friends, and everyone sets to work finding me a new place to live.  When I check out their recommendations, I get an in-depth view of my new potential neighborhoods, delivered through a marriage of augmented reality and the cloud computing power located throughout the network.  Finally, when I’m about to make a decision, I throw it open for the people who care enough about me to ring in with their own opinions, experiences, and observations.  I make an informed decision, quickly, and am happier as a result, for all the years I live in my new home.

That’s what’s coming.  That’s the potential that we hold in the palms of our hands.  That’s the world you can bring to life.

III:  Through the Looking Glass

Finally, we turn to the newest and most exciting of Apple’s inventions.  There seemed to be nothing new to say about the tablet – after all, Bill Gates declared ‘The Year of the Tablet’ way back in 2001.  But it never happened.  Tablets were too weird, too constrained by battery life and weight and, most significantly, the user experience.  It’s not as though you can take a laptop computer, rip away the keyboard and slap on a touchscreen to create a tablet computer, though this is what many people tried for many years.  It never really worked out for them.

Instead, Apple leveraged what they learned from the iPhone’s touch interface.  Yet that alone was not enough.  I was told by sources well-placed in Apple that the hardware for a tablet was ready a few years ago; designing a user experience appropriate to the form factor took a lot longer than anyone had anticipated.  But the proof of the pudding is in the eating: iPad is the most successful new product in Apple’s history, with Apple set to manufacture around thirty million of them over the next twelve months.  That success is due to the hard work and extensive testing performed upon the iPad’s particular version of iOS.

It feels wonderfully fluid, well adapted to the device, although quite different from the iOS running on iPhone.  iPad is not simply a gargantuan iPod Touch.  The devices are used very differently, because the form-factor of the device frames our expectations and experience of the device.

Let me illustrate with an example from my own experience:  I had a consulting job drop on me at the start of June, one which required that I go through and assess eighty-eight separate project proposals, all of which ran to 15 pages apiece.  I had about 48 hours to do the work.  I was a thousand kilometers from these proposals, so they had to be sent to me electronically, so that I could then print them before reading through them.  Doing all of that took 24 of the 48 hours I had for review, and left me with a ten-kilo box of papers that I’d have to carry, a thousand kilometers, to the assessment meeting.  Ugh.

Immediately before I left for the airport with this paper ball-and-chain, I realized I could simply drag the electronic versions of these files into my Dropbox account.  Once uploaded, I could access those files from my iPad – all thousand or so pages.  Working on iPad made the process much faster than having to fiddle through all of those papers; I finished my work on the flight to my meeting, and was the envy of all attending – they wrestled with multiple fat paper binders, while I simply swiped my way to the next proposal.

This was when I realized that iPad is becoming the indispensable appliance for the information worker.

You can now hold something in your hand that has every document you’ve written; via the cloud, it can hold every document anyone has ever written.  This has been true for desktops since the advent of the Internet, but it hasn’t been as immediate.  iPad is the page, reinvented, not just because it has roughly the same dimensions as a page, but because you interact with it as if it were a piece of paper.  That’s something no desktop has ever been able to provide.

We don’t really have a sense yet for all the things we can do with this ‘magical’ (to steal a word from Steve Jobs) device.

Paper transformed the world two thousand years ago. Moveable type transformed the world five hundred years ago.  The tablet, whatever it is becoming – whatever you make of it – will similarly reshape the world.  It’s not just printed materials; the tablet is the lightbox for every photograph ever taken anywhere by anyone.  The tablet is the screen for every video created, a theatre for every film produced, a tuner to every radio station that offers up a digital stream, and a player for every sound recording that can be downloaded.

All of this is here, all of this is simultaneously present in a device with so much capability that it very nearly pulses with power.

iPad is like a Formula One Ferrari, and we haven’t even taken it out of first gear.  So stretch your mind further than the idea of the app.  Apps are good and important, but to unlock the potential of iPad it needs lots of interesting data pouring into it and through it.  That data might be provided via an application, but it probably doesn’t live within the application – there’s not enough room in there.  Any way you look at it, iPad is a creature of the network; it is a surface, a looking glass, which presents you a view from within the network.

What happens when the network looks back at you?

At the moment iPad has no camera, though everyone expects a forward-facing camera to be in next year’s model.  That will come so that Apple can enable FaceTime.  (With luck, we’ll also see a Retina Display, so that documents can be seen in their natural resolution.)  Once the iPad can see you, it can respond to you.  It can acknowledge your presence in an authentic manner.  We’re starting to see just what this looks like with the recently announced Xbox Kinect.

This is the sort of technology which points all the way back to the infamous ‘Knowledge Navigator’ video that John Sculley used to create his own Reality Distortion Field around the disaster that was the Newton. Decades ahead of its time, the Knowledge Navigator pointed toward Google and Wikipedia and Milo, with just a touch of Facebook thrown in.  We’re only just getting there, to the place where this becomes possible.

These are no longer dreams, these are now quantifiable engineering problems.

This sort of thing won’t happen on Xbox, though Microsoft or a partner developer could easily write an app for it.  But that’s not where they’re looking; this is not about keeping you entertained.  The iPad can entertain you, but that’s not its main design focus.  It is designed to engage you, today with your fingers, and soon with your voice and your face and your gestures.  At that point it is no longer a mirror; it is an entity on its own.  It might not pass the Turing Test, but we’ll anthropomorphize it nonetheless, just as we did with Tamagotchi and Furby.  It will become our constant companion, helping us through every situation.  And it will move seamlessly between our devices, from iPad to iPhone to desktop.  But it will begin on iPad.

Because we are just starting out with tablets, anything is possible.  We haven’t established expectations which guide us into a particular way of thinking about the device.  We’ve had mobiles for nearly twenty years, and desktops for thirty.  We understand both well, and with that understanding comes a narrowing of possibilities.  The tablet is the undiscovered country, virgin, green, waiting to be explored.  This is the desktop revolution, all over again.  This is the mobile revolution, all over again.  We’re in the right place at the right time to give birth to the applications that will seem commonplace in ten or fifteen years.

I remember VisiCalc, the first spreadsheet.  I remember how revolutionary it seemed, how it changed everyone’s expectations for the personal computer.  I also remember that it was written for an Apple ][.

You have the chance to do it all again, to become the ‘mothers of innovation’, and reinvent computing.  So think big.  This is the time for it.  In another few years it will be difficult to aim for the stars.  The platform will be carrying too much baggage.  Right now we all get to be rocket scientists.  Right now we get to play, and dream, and make it all real.

Make War, then Love

At the close of the first decade of the 21st century, we find ourselves continuously connecting to one another.  This isn’t a new thing, although it may feel new.  The kit has changed – that much is obvious – but who we are has not.  Only from an understanding of who we are can we understand the future we are hurtling toward.  Connect, connect, connect.  But why?  Why are we so driven?

To explain this – and reveal that who we are now is precisely who we have always been – I will tell you two stories.  They’re interrelated – one leads seamlessly into the other.  I’m not going to say that these stories are the God’s honest truth.  They are, as Rudyard Kipling put it, ‘just-so stories’.  If they aren’t true, they describe an arrangement of facts so believable that they could very well be true.  There is scientific evidence to support both of these stories, but neither is considered scientific canon.  So, take everything with a grain of salt; these are more fables than theories, but we have always used fables to help us illuminate the essence of our nature.

For our first story, we need to go back a long, long time.  Before the settlement of Australia – by anyone.  Before Homo Sapiens, before Australopithecus, before we broke away from the chimpanzees, five million years ago, just after we broke away from the gorillas, ten million years ago.  How much do we know about this common ancestor, which scientists call Pierolapithecus?  Not very much.  A few bits of skeletons discovered in Spain eight years ago.  If you squint and imagine some sort of mash-up of the characteristics of humans, chimpanzees and gorillas, you might be able to get a glimmer of what they looked like.  Smaller than us, certainly, and not upright – that comes along much later.  But one thing we do know, without any evidence from skeletons: Pierolapithecus was a social animal.  How do we know this?  Its three descendant species – humans, chimps and bonobos – are all highly social animals.  We don’t do well on our own.  In fact, on our own we tend to make a tasty meal for some sort of tiger or lion or other cat.  Together, well, that’s another matter.

Which brings us to the first ‘just-so’ story.  Imagine a warm late afternoon, hanging out in the trees in Africa’s Rift Valley.  Just you and your mates – probably ten or twenty of them.  You’re all males; the females are elsewhere, doing female-type things, which we’ll discuss presently.  At a signal from the ‘alpha male’, all of you fall into line, drop out of the trees, and begin a trek that takes you throughout the little bit of land you call your own – with your own trees and plants and bugs that keep you well fed – and you go all the way to the edge of your territory, to the border of the territory of a neighboring troupe of Pierolapithecus.  That troupe – about the same size as your own – is dozing in the heat of the afternoon, all over the place, but basically within eyeshot of one another.

Suddenly – and silently – you all cross the border.  You fan out, still silent, looking for the adolescent males in this troupe.  When you find them, you kill them.  As for the rest, you scare them off with your screams and your charges, and, at the end, they’ve lost some of their own territory – and trees and plants and delicious grubs – while you’ve got just a little bit more.  And you return, triumphant, with the bodies you’ve acquired, which you eat, with your troupe, in a victory dinner.

This all sounds horrid and nasty and mean and just not cricket.  That it is.  It’s war.  How do we know that ‘war’ stretches this far back into our past?  Just last month a paper published in Current Biology and reported in The Economist described how primatologists had seen just this behavior among chimpanzees in their natural habitats in the African rain forests.  The scene I just described isn’t ten million years old, or even ten thousand, but current.  Chimpanzees wage war.  And this kind of warfare is exactly what was commonplace in New Guinea and the upper reaches of Amazonia until relatively recently – certainly within the span of my own lifetime.  War is a behavior common to both chimpanzees and humans – so why wouldn’t it be something we inherited from our common ancestor?

War.  What’s it good for?  If you win your tiny Pierolapithecine war for a tiny bit more territory, you’ll gain all of the resources in that territory.  Which means your troupe will be that much better fed.  You’ll have stronger immune systems when you get sick, you’ll have healthier children.  And you’ll have more children.  As you acquire more resources, more of your genes will get passed along, down the generations.  Which makes you even stronger, and better able to wage your little wars.  If you’re good at war, natural selection will shine upon you.

What makes you good at war?  That’s the real question here.  You’re good at war if you and your troupe – your mates – can function effectively as a unit.  You have to be able to coordinate your activities to attack – or defend – territory.  We know that language skills don’t go back ten million years, so you’ve got to do this the old fashioned way, with gestures and grunts and the ability to get into the heads of your mates.  That’s the key skill; if you can get into your mates’ heads, you can think as a group.  The better you can do that, the better you will do in war.  The better you do in war, the more offspring you’ll have, so that skill, that ability to get into each others’ heads gets reinforced by natural selection, and becomes, over time, evolution.  The generations pass, and you get better and better at knowing what your mates are thinking.

This is the beginning of the social revolution.  All the way back here, before we looked anything like human, we grasped the heart of the matter: we must know one another to survive.  If we want to succeed, we must know each other well.  There are limits to this knowing, particularly with the small brain of Pierolapithecus.  Knowing someone well takes a lot of brain capacity, and soon that fills up.  When it does – when you can’t know everyone around you intimately – your troupe will grow increasingly argumentative, confrontational, and eventually will break into two independent troupes.  All because of a communication breakdown.

There’s strength in numbers; if I can manage a troupe of thirty while all you can manage is twenty, I’ll defeat you in war.  So there’s pressure, year after year, to grow the troupe, and, quite literally, to stuff more mates into the space between your ears.  For a long time that doesn’t lead anywhere; then there’s a baby born with just a small genetic difference, one which allows just a bit more brain capacity, so that it can hold two or three or four more mates in its head, which makes a big difference.  Such a big difference that these genes get passed along very rapidly, and soon everyone can hold a few more mates inside their heads.  But that capability comes with a price.  Those Pierolapithecines have slightly bigger brains, and slightly bigger heads.  They need to eat more to keep those bigger brains well-fed.  And those big heads would soon prove very problematic.

This is where we cross over, from our first story, into our second.  This is where we leave the world of men behind, and enter the world of women, who have been here, all along, giving birth and gathering food and raising children and mourning the dead lost to wars, as they still do today.  As they have done for ten million years.  But somewhere in the past few million years, something changed for women, something perfectly natural became utterly dangerous.  All because of our drive to socialize.

Human birth is a very singular thing in the animal world.  Among the primates, human babies are the only ones born facing downward and away from the mother.  They’re also the only ones who seriously threaten the lives of their mothers as they come down the birth canal.  That’s because our heads are big.  Very big.  Freakishly big.  So big that one of the very recent evolutionary adaptations in Homo Sapiens is a pelvic gap in women that creates a larger birth canal, at the expense of walking efficiency.  Women walk differently from men – much less efficiently – because they give birth to such large-brained children.

There are two notable side-effects of this big-brained-ness.  The first is well-known: women used to regularly die in childbirth.  Until the first years of the 20th century, about one in one hundred pregnancies ended with the death of the mother.  That’s an extraordinarily high rate, particularly given that a woman might give birth to seven or eight children over her lifetime.  Now that we have survivable caesarian sections and all sorts of other medical interventions, death in childbirth is much rarer – perhaps 1 in 10,000 births.  Nowhere else among the mammals can you find this kind of danger surrounding the delivery of offspring.  This is the real high price we pay for being big-brained: we very nearly kill our mothers.

The second side-effect is less well-known, but so pervasive we simply accept it as a part of reality: humans need other humans to assist in childbirth.  This isn’t true for any other mammal species – or any other species, period.  But there are very few (one or two) examples of cultures where women give birth by themselves.  Until the 20th century medicalization of pregnancy and childbirth, this was ‘women’s work’, and a thriving culture of midwives managed the hard work of delivery.  (The image of the chain-smoking father, waiting outside the maternity ward for news of his newborn child, is far older than the 20th century.)

For at least a few hundred thousand years – and probably a great deal longer than that – the act of childbirth has been intensely social.  Women come together to help their sisters, cousins, and daughters pass through the dangers and into motherhood.  If you can’t rally your sisters together when you need them, childbirth will be a lonely and possibly lethal experience.  So this is what it means to be human: we entered the world because of the social capabilities of our mothers.  Women who had strong social capabilities, who could bring their sisters to their aid, would have an easier time in childbirth, and would be more likely to live through it, as would their children.

After the child has been born, mothers need even more help from their female peers; in the first few hours, when the mother is weak, other women must provide food and shelter.  As that child grows, the mother will periodically need help with childcare, particularly if she’s just been delivered of another child.  Mothers who can use their social capabilities to deliver these resources will thrive.  Their children will thrive.  This means that these capabilities tended to be passed down, through the generations.  Just as men had their social skills honed by generations upon generations of warfare, women had their social skills sharpened by generations upon generations of childbirth and child raising.

All of this sounds very much as though it’s Not Politically Correct.  But our liberation from our biologically determined sex roles is a very recent thing.  Today, men raise children while women go to war.  Yet behind this lie hundreds of thousands of generations of our ancestors who did use these skills along gender-specific lines.  That’s left a mark; men tend to favor coordination in groups – whether that’s a war or a footy match – while women tend to concentrate on building and maintaining a closely-linked web of social connections.  Women seem to have a far greater sensitivity to these social connections than men do, but men can work together in a team – to slaughter the opponent (on the battlefield or the pitch).

The prefrontal cortex – freakishly large in human beings when compared to chimpanzees – seems to be where the magic happens, where we keep these models of one another.  Socialization has limits, because our brains can’t effectively grow much bigger.  They already nearly kill our mothers, they consume about 25% of the food we eat, and they’re not even done growing until five years after we’re born – leaving us defenseless and helpless far longer than any other mammal.  That’s another price we pay for being so social.

But we’re maxed out.  We’ve reached the point of diminishing returns.  If our heads get any bigger, there won’t be any mothers left living to raise us.  So here we are.  An estimate conducted nearly 20 years ago pegs the number of people who can fit into your head at roughly 148, plus or minus a few.  That’s not very many.  But for countless thousands of years, that was as big as a tribe or a village ever grew.  That was the number of people you could know well, and that set the upper boundary on human sociability.

And then, ten thousand years ago, the comfortable steady-state of human development blew apart.  Two things happened nearly simultaneously: we learned to plant crops, which created larger food supplies, which meant families could raise more children.  We also began to live together in communities much larger than the tribe or village.  The first cities – like Jericho – date from around that time, cities with thousands of people in them.

This is where we cross a gap in human culture, a real line that separates that-which-has-come-before from that-which-comes-after.  Everyone who has moved from a small town or village to the big city knows what it’s like to cross that line.  People have been crossing that line for a hundred centuries.  On one side of the line, people are connected by bonds that are biological, ancient and customary – you do things because they’ve always been done that way.  On the other side, people are bound by bonds that are cultural, modern, and legal.  When we can’t know everyone around us, we need laws to protect us and a culture to guide us, and all of this is very new.  Still: ten thousand years of laws and culture, next to almost two hundred thousand years of custom – and that’s just Homo Sapiens.  Custom extends back, probably all the way to Pierolapithecus.

We wage a constant war within ourselves.  Our oldest parts want to be clannish, insular, and intensely xenophobic.  That’s what we’re adapted to.  That’s what natural selection fitted us for.  The newest parts of us realize real benefits from accumulations of humanity too big to get our heads around.  The division of labor associated with cities allows for intensive human productivity, hence larger and more successful human populations.  The city is the real hub of human progress; more than any technology, it is our ability to congregate together in vast numbers that has propelled us into modernity.

There’s an intense contradiction here: we got to the point where we were able to build cities because we were so socially successful, but cities thwarted that essential sociability.  It’s as though we went as far as we could, in our own heads, then leapt outside of them, into cities, and left our heads behind.  Our cities are anonymous places, and consequently fraught with dangers.

It’s a danger we seem prepared to accept.  In 2008 the UN reported that, for the first time in human history, over half of humanity lived in cities.  Half of us had crossed the gap between the social world in our heads and the anonymous, atomized worlds of Mumbai and Chongqing and Mexico City and Cairo and São Paulo.  And in very nearly the same moment that half of us came to live in cities, half of us also came to carry mobiles.  Well more than half of us do now.  In the anonymity of the world’s cities, we stare down into our screens, and find within them a connection we had almost forgotten.  It touches something so ancient – and so long ignored – that the mobile now contends with the real world as the defining axis of social orientation.

People are often too busy responding to messages to focus on those in their immediate presence.  It seems ridiculous, thoughtless and pointless, but the device has opened a passage which allows us to retrieve this oldest part of ourselves, and we’re reluctant to let that go.

Which brings us to the present moment.

Paperworks / Padworks

I: Paper, works

At the end of May I received an email from a senior official at the Victorian Department of Education and Early Childhood Development.  DEECD was in the midst of issuing an RFP, looking for new content to populate FUSE (Find, Use, Share, Education), an important component of ULTRANET, the mega-über-supremo educational intranet meant to solve everyone’s educational problems for all time.  Or, well, perhaps I overstate the matter.  But it could be a big deal.

The respondents to the RFP were organizations which already had working relationships with DEECD, and which therefore were both familiar with DEECD processes and had been vetted through those earlier relationships.  This meant that the entire RFP-to-submission process could be telescoped down to just a bit less than three weeks.  The official asked me if I’d be interested in being one of the external reviewers for these proposals as they passed through an official evaluation process.  I said I’d be happy to do so, and asked how many proposals I’d have to review.  “I doubt it will be more than thirty or forty,” he replied.  Which seemed quite reasonable.

As is inevitably the case, most of the proposals landed in the DEECD mailbox just a few hours before the deadline for submissions.  But the RFP didn’t result in thirty or forty proposals.  The total came to almost ninety.  All of which I had to review and evaluate in the thirty-six hours between the time they landed in my inbox and the start of the formal evaluation meeting.  Oh, and first I needed to print them out, because there was no way I’d be able to do that much reading in front of my computer.

Let’s face it – although we do sit and read our laptop screens all day long, we rarely read anything longer than a few paragraphs.  If it passes 300 words, it tips the balance into ‘tl;dr’ (too long; didn’t read) territory, and unless it’s vital for our employment or well-being, we tend to skip it and move along to the next little tidbit.  Having to sit and read through well over nine hundred pages of proposals on my laptop was a bridge too far. I set off to the print shop around the corner from my flat, to have the whole mess printed out.  That took nearly 24 hours by itself – and cost an ungodly sum.  I was left with a huge, heavy box of paper which I could barely lug back to my flat.  For the next 36 hours, this box would be my ball and chain.  I’d have to take it with me to the meeting in Melbourne, which meant packing it for the flight, checking it as baggage, lugging it to my hotel room, and so forth, all while trying to digest its contents.

How the heck was that going to work?

This is when I looked at my iPad.  Then I looked back at the box.  Then back at the iPad.  Then back at the box.  I’d gotten my iPad barely a week before – when they first arrived in Australia – and I was planning on taking it on this trip, but without an accompanying laptop.  This, for me, would be a bit of a test.  For the last decade I’d never traveled anywhere without my laptop.  Could I manage a business trip with just my iPad?  I looked back at the iPad.  Then at the box.  You could practically hear the penny drop.

I immediately began copying all these nine hundred-plus pages of proposals and accompanying documentation from my laptop to the storage utility Dropbox.  Dropbox gives you 2 GB of free Internet storage, with an option to rent more space, if you need it.  Dropbox also has an iPad app (free) – so as soon as the files were uploaded to Dropbox, I could access them from my iPad.

I should take a moment and talk about the model of the iPad I own.  I ordered the 16 GB version – the smallest storage size offered by Apple – but I got the 3G upgrade, paired with Telstra’s most excellent pre-paid NextG service.  My rationale was that I imagined this iPad would be a ‘cloud-centric’ device.  The ‘cloud’ is a term that’s come into use quite recently.  It means software is hosted somewhere out there on the Internet – the ‘cloud’ – rather than residing locally on your computer.  Gmail is a good example of software that’s ‘in the cloud’.  Facebook is another.  Twitter, another.  Much of what we do with our computers – iPad included – involves software accessed over the Internet.  Many of the apps for sale in Apple’s iTunes App Store are useless or pointless without an Internet connection – these are the sorts of applications which break down the neat boundary between the computer and the cloud.  Cloud computing has been growing in importance over the last decade; by the end of this one it will simply be the way things work.  Your iPad will be your window onto the cloud, onto everything you have within that cloud: your email, your documents, your calendar, your contacts, etc.

I like to live in the future, so I made sure that my iPad didn’t have too much storage – which forces me to use the cloud as much as possible.  In this case, that was precisely the right decision, because I ditched the ten-kilo box of paperwork and boarded my flight to Melbourne with my iPad at my side.  I pored through the proposals, one after another, bringing them up in Dropbox, evaluating them, making some notes in my (paper) notebook, then moving along to the next one.  My iPad gave me a fluidity and speed that I could never have had with that box of paper.

When I arrived at my hotel, another two large boxes were waiting for me.  Here again were the proposals, carefully ordered and placed into several large ring binders.  I’d be expected to tote these to the evaluation meeting.  Fortunately, that was only a few floors above my hotel room.  That said, it was a bit of a struggle to get those boxes and my luggage into the elevator and up to the meeting room.  I put those boxes down – and never looked at them again.  As the rest of the evaluation panel dug through their boxes to pull out the relevant proposals, I made a few motions with my fingertips, and found myself on the same page.

Yes, they got a bit jealous.

We finished the evaluation on time and quite successfully, and at the end of the day I left my boxes with the DEECD coordinator, thanking her for her hard work printing all these materials, but begging off.  She understood completely.  I flew home lighter than I otherwise would have been, had I stuck to paper.

For at least the past thirty years – which is about the duration of the personal computer revolution – people have been talking about the advent of the paperless office.  Truth be told, we use more paper in our offices than ever before, our printers constantly at work with letters, notices, emails, and so forth.  We haven’t been able to make the leap to a paperless office – despite our comprehensive ability to manipulate documents digitally – because we lacked something that could actually replace paper.  Computers as we’ve known them simply can’t replace a piece of paper.  For a whole host of reasons, it just never worked.  To move to a paperless office – and a paperless classroom – we had to invent something that could supplant paper.  We have it now.  After a lot of false starts, tablet computing has finally arrived – and it’s here to stay.

I can sit here, iPad in hand, and have access to every single document that I have ever written.  You will soon have access to every single document you might ever need, right here, right now.  We’re not 100% there yet – but that’s not the fault of the device.  We’re going to need to make some adjustments to our IT strategies, so that we can have a pervasively available document environment.  At that point, your iPad becomes the page which contains all other pages within it.  You’ll never be without the document you need at the time you need it.

Nor will we confine ourselves to text.  The world is richer than that.  iPad is the lightbox that contains all photographs within it; it is the television which receives every bit of video produced by anyone – professional or amateur – ever.  It is already the radio (via the Pocket Tunes app) which receives almost every major radio station broadcasting anywhere in the world.  And it is every one of a hundred-million-plus websites and maybe a trillion web pages.  All of this is here, right here in the palm of your hand.

What matters now is how we put all of this to work.

II: Pad, works

Let’s project ourselves into the future just a little bit – say around ten years.  It’s 2020, and we’ve had iPads for a whole decade.  The iPads of 2020 will be vastly more powerful than the ones in use today, because of something known as Moore’s Law.  This law states that computers double in power every twenty-four months.  Ten years is five doublings, or 32 times.  That rule extends to the display as well as the computer.  The ‘Retina Display’ recently released on Apple’s iPhone 4 shows us where that technology is going – displays so fine that you can’t make out the individual pixels with your eye.  The screen of your iPad version 11 will be visually indistinguishable from a sheet of paper.  The device itself will be thinner and lighter than the current model.  Battery technology improves at about 10% a year, so half the weight of the battery – which is the heaviest component of the iPad – will disappear.  You’ll still get at least ten hours of use; that’s something that’s considered essential to your experience as a user.  And you’ll still be connected to the mobile network.
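For the curious, the ‘32 times’ is nothing more mysterious than compounding – a back-of-the-envelope sketch that treats ‘power’ simply as a quantity doubling every two years, nothing finer than that:

\[
\frac{10\ \text{years}}{2\ \text{years per doubling}} = 5\ \text{doublings}, \qquad 2^{5} = 32
\]

The same factor of 32 turns up again below, for network capacity and for storage, because the law touches every component in the same way.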

The mobile network of 2020 will look quite different from the mobile network of 2010.  Right now we’re just on the cusp of moving into 4th generation mobile broadband technology, known colloquially as LTE, or Long-Term Evolution.   Where you might get speeds of 7 megabits per second with NextG mobile broadband – under the best conditions – LTE promises speeds of 100 megabits.  That’s as good as a wired connection – as fast as anything promised by the National Broadband Network!  In a decade’s time we’ll be moving through 5th generation and possibly into 6th generation mobile technologies, with speeds approaching a gigabit, a billion bits per second.  That may sound like a lot, but again, it represents roughly 32 times the capacity of the mobile broadband networks of today.  Moore’s Law has a broad reach, and will transform every component of the iPad.

iPad will have thirty-two times the storage – not that we’ll need it, given that we’ll be connected to the cloud at gigabit speeds, but if it’s there, someone will find a use for the two terabytes or more included in our iPad.  (Perhaps a full copy of Wikipedia?  Or all of the books published before 1915?)  All of this will still cost just $700.  If you want to spend less – and have a correspondingly less-powerful device – you’ll have that option.  I suspect you’ll be able to pick up an entry-level device – the equivalent of iPad 7, perhaps – for $49 at JB HiFi.
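Again, just rough arithmetic – and note the assumption: it starts from today’s top-of-the-line 64 GB model, not the 16 GB one I carry:

\[
64\ \text{GB} \times 32 = 2048\ \text{GB} \approx 2\ \text{TB}
\]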

What sorts of things will the iPad of 2020 be capable of?  How do we put all of that power to work?  First off, iPad will be able to see and hear in meaningful ways.  Voice recognition and computer vision are two technologies which are on the threshold of becoming ‘twenty-year overnight successes’.  We can already speak to our computers, and, most of the time, they can understand us.  With devices like the Xbox Kinect, cameras allow the computer to see the world around it, and to recognize bits of it.  Your iPad will hear you, understand your voice, and follow your commands.  It will also be able to recognize your face, your motions, and your emotions.

It’s not clear that computers as we know them today – that is, desktops and laptops – will be common in a decade’s time.  They may still be employed in very specialized tasks.  For almost everything else, we will be using our iPads.  They’ll rarely leave our sides.  They will become so pervasive that in many environments – around the home, in the office, or at school – we will simply have a supply of them sufficient to the task.  When everything is so well connected, you don’t need to have personal information stored in a specific iPad.  You will be able to pick up any iPad and – almost instantaneously – the custom features which mark that device as uniquely yours will be downloaded into it.

All of this is possible.  Whether any of it eventuates depends on a whole host of factors we can’t yet see clearly.  People may find voice recognition more of an annoyance than an affordance.  The idea of your iPad watching you might seem creepy to some people.  But consider this: I have a good friend who has two elderly parents: his dad is in his early 80s, his mom is in her mid-70s.  He lives in Boston while they live in Northern California.  But he needs to keep in touch, he needs to have a look in.  Next year, when iPad acquires a forward-facing camera – so it can be used for video conferencing – he’ll buy them an iPad, and install it on the wall of their kitchen, stuck on there with Velcro, so that he can ring in anytime, and check on them, and they can ring him, anytime.  It’s a bit ‘Jetsons’, when you think about it.  And that’s just what will happen next year.  By 2020 the iPad will be able to track your progress around the house, monitor what prescriptions you’ve taken (or missed), whether you’ve left the house, and for how long.  It’ll be a basic accessory, necessary for everyone caring for someone in their final years – or in their first ones.

Now that we’ve established the basic capabilities and expectations for this device, let’s imagine them in the hands of students everywhere throughout Australia.  No student, however poor, will be without their own iPad – the Government of the day will see to that.  These students of 2020 are at least as well connected as you are, as their parents are, as anyone is.  To them, iPads are not new things; they’ve always been around.  They grew up in a world where touch is the default interface.  A computer mouse, for them, seems as archaic as a manual typewriter does to us.  They’re also quite accustomed to being immersed within a field of very-high-speed mobile broadband.  They just expect it to be ‘on’, everywhere they go, and expect that they will have access to it as needed.

How do we make education in 2020 meet their expectations?  This is not the universe of ‘chalk and talk’.  This is a world where the classroom walls have been effectively leveled by the pervasive presence of the network, and a device which can display anything on that network.  This is a world where education can be provided anywhere, on demand, as called for.  This is a world where the constructivist premise of learning-by-doing can be implemented beyond year two.  Where a student working on an engine can stare at a three-dimensional breakout model of the components while engaging in a conversation with an instructor half a continent away.  Where a student learning French can actually engage with a French student learning English, and do so without much more than a press of a few buttons.  Where a student learning about the Eureka Stockade can survey the ground, iPad in hand, and find within the device hidden depths to the history.  iPad is the handheld schoolhouse, and it is, in many ways, the thing that replaces the chalkboard, the classroom, and the library.

But iPad does not replace the educator.  We need to be very clear on that, because even as educational resources multiply beyond our wildest hopes – more on that presently – students still need someone to guide them into understanding.  The more we virtualize the educational process, the more important and singular our embodied interactions become.  Some of this will come from far away – the iPad offers opportunities for distance education undreamt of just a few years ago – but much more of it will be close up.  Even if the classroom does not survive (and I doubt it will fade away completely in the next ten years, but it will begin to erode), we will still need a place for an educator/mentor to come into contact with students.  That’s been true since the days of Socrates (probably long before that), and it’s unlikely to change anytime soon.  We learn best when we learn from others.  We humans are experts in mimesis, in learning by imitation.  That kind of learning requires us to breathe the same air together.

No matter how much power we gain from the iPad, no matter how much freedom it offers, no device offers us freedom from our essential nature as social beings.  We are born to work together, we are designed to learn from one another.  iPad is an unbelievably potent addition to the educator’s toolbox, but we must remember not to let it cloud our common sense.  It should be an amplifier, not a replacement, something that lets students go further, faster than before.  But they should not go alone.

The constant danger of technology is that it can interrupt the human moment.  We can be too busy checking our messages to see the real people right before our eyes.  This is the dilemma that will face us in the age of the iPad.  Governments will see them as cost-saving devices, something that could substitute for the human touch.  If we lose touch, if we lose the human moment, we also lose the biggest part of our ability to learn.

III:  The Work of Nations

We can reasonably predict that this is the decade of the tablet, and the decade of mobile broadband.  The two of them fuse in the iPad, to produce a platform which will transform education, allowing it to happen anywhere a teacher and a student share an agreement to work together.  But what will they be working on?  Next year we’ll see the rollout of the National Curriculum, which specifies the material to be covered in core subject areas in classrooms throughout the nation.

Many educators view the National Curriculum as a mandate for a bland uniformity, a lowest-common-denominator approach to instruction which will simply leave the teacher working point-by-point through the curriculum’s arc.  This is certainly not the intent of the project’s creators.  Dr. Evan Arthur, who heads up the Digital Education Revolution taskforce in the Department of Education, Employment and Workplace Relations, publicly refers to the National Curriculum as a ‘greenfields’, as though all expectations were essentially phantoms of the mind, a box we draw around ourselves, rather than one that objectively exists.

The National Curriculum outlines the subject areas to be covered, but says very little if anything about pedagogy.  Instructors and school systems are free to exercise their own best judgment in selecting an approach appropriate to their students, their educators, and their facilities.  That’s good news, and means that any blandness that creeps into pedagogy because of the National Curriculum is more a reflection of the educator than the educational mandate.

Precisely because it places educators and students throughout the nation onto the same page, the National Curriculum also offers up an enormous opportunity.  We know that all year nine students in Australia will be covering a particular suite of topics.  This means that every educator and every student throughout the nation can be drawing from and contributing to a ‘common wealth’ of shared materials, whether podcasts of lectures, educational chatrooms, lesson plans, and on and on and on.  As the years go by, this wealth of material will grow as more teachers and more students add their own contributions to it.  The National Curriculum isn’t a mandate, per se; it’s better to think of it as an empty Wikipedia.  All the article headings are there, all the taxonomy, all the cross-references, but none of the content.  The next decade will see us all build up that base of content, so that by 2020, a decade’s worth of work will have resulted in something truly outstanding to offer both educators and students in their pursuit of curriculum goals.

Well, maybe.

I say all of this as if it were a sure thing.  But it isn’t.  Everyone secretly suspects the National Curriculum will ruin education.  I ask that we see things differently.  The National Curriculum could be the savior of education in the 21st century, but in order to travel the short distance in our minds between where we are (and where we will go if we don’t change our minds) and where we need to be, we need to think of every educator in Australia as a contributor of value.  More than that, we need to think of every student in Australia as a contributor of value.  That’s the vital gap that must be crossed.  Educators spend endless hours working on lesson plans and instructional designs – they should be encouraged to share this work.  Many of them are too modest or too scared to trumpet their own hard yards – but their work is something that educators and students across the nation can benefit from.  Students, as they pass through the curriculum, create their own learning materials, which must be preserved, where appropriate, for future years.

We should do this.  We need to do this.  Right now we’re dropping the best of what we have on the floor as teachers retire or move on in their careers.  This is gold that we’re letting slip through our fingers.  We live in an age where we only lose something when we neglect to capture it.  We can let ourselves off easy here, because until now we haven’t had a framework to capture and share this pedagogy.  But now we have the means to capture it, a platform for sharing it – the Ultranet – and a tool which brings access to everyone – the iPad.  We’ve never had these stars aligned in such a way before.  Only just now – in 2010 – is it possible to dream such big dreams.  It won’t even cost much money.  Yes, the state and federal governments will be investing in iPads and superfast broadband connections for the schools, but everything else comes from a change in our behavior, from a new sense of the full value of our activities.  We need to look at ourselves not merely as the dispensers of education to receptive students, but as engaged participant-creators working to build a lasting body of knowledge.

In so doing we tie everything together, from library science to digital citizenship, within an approach that builds shared value.  It allows a student in Bairnsdale to collaborate with another in Lorne, both working through a lesson plan developed by an educator in Katherine.  Or a teacher in Lakes Entrance to offer her expertise to a classroom in Maffra.  These kinds of things have been possible before, but the National Curriculum gives us the reason to do it.  iPad gives us the infrastructure to dream wild, and imagine how to practice some ‘creative destruction’ in the classroom – tearing down its walls in order to make the classroom a persistent, ubiquitous feature of the environment, to bring education everywhere it’s needed, to everyone who needs it, whenever they need it.

This means that all of the preceding is really part of a larger transformation, from education as a singular event that happens between ages six and twenty-two, to something that is persistent and ubiquitous; where ‘lifelong learning’ isn’t a catchphrase, but rather a set of skills students begin to acquire as soon as they land in pre-kindy.  The wealth of materials which we will create as we learn how to share the burden of the National Curriculum across the nation has value far beyond the schoolhouse.  In a nation of immigrants, it makes sense to have these materials available, because someone is always arriving in the middle of their life and struggling to catch up to and integrate themselves within the fabric of the nation.  Education is one way that this happens.  People also need increasing flexibility in their career choices, to suit a much more fluid labor market.  This means that we continuously need to learn something new, or something, perhaps, that we didn’t pay much attention to when we should have.  If we can share our learning, we can close this gap.  We can bring the best of what we teach to everyone who has the need to know.

And there we are.  But before I conclude, I should bring up the most obvious point – one so obvious that we might forget it.  The iPad is an excellent toy.  Please play with it.  I don’t mean use it.  I mean explore it.  Punch all the buttons.  Do things you shouldn’t do.  Press the big red button that says, “Don’t press me!”  Just make sure you have a backup first.

We know that children learn by exploration – that’s the foundation of Constructivism – but we forget that we ourselves also learn by exploration.  The joy we feel when we play with our new toy is the feeling a child has when he confronts a box of LEGOs, or a new video game – it’s the joy of exploration, the joy of learning.  That joy is foundational to us.  If we didn’t love learning, we wouldn’t be running things around here.  We’d still be in the trees.

My favorite toys on my iPad are Pocket Universe – which creates a 360-degree real-time observatory on your iPad; Pulse News – which brings some beauty to my RSS feeds; Observatory – which turns my iPad into a bit of an orrery; Air Video – which allows me to watch videos streamed from my laptop to my iPad; and GoodReader – the one app you simply must spend $1.19 on, because it is the most useful app you’ll ever own.  These are my favorites, but I own many others, and enjoy all of them.  There are literally tens of thousands to choose from, some of them educational, some just for fun.  That’s the point: all work and no play makes iPad a dull toy.

So please, go and play.  As you do, you’ll come to recognize the hidden depths within your new toy, and you’ll probably feel that penny drop, as you come to realize that this changes everything.  Or can, if we can change ourselves.