The Connected City

I

During my first visit to Sydney, in 1997, I made arrangements to catch up with some friends living in Drummoyne.  I was staying at the Novotel Darling Harbour, so we arranged to meet in front of the IMAX theatre before heading off to drinks and dinner.  I arrived at the appointed time, as did a few of my friends.  We waited a bit more, but saw no sign of the missing members of our party.  What to do?  Should we wait there – for goodness knows how long – or simply go on without them?

As I debated our options – neither particularly palatable – one of my friends took a mobile out of his pocket, dialed our missing friends, and told them to meet us at an Oxford Street pub.  Crisis resolved.

Nothing about this incident seems at all unusual today – except for my reaction to the dilemma of the missing friends.  When someone’s not where they should be, where they said they would be, we simply ring them.  It’s automatic.

In Los Angeles, where I lived at the time, mobile ownership rates had barely cracked twenty percent.  America was slow on the uptake to mobiles; by the time of my trip, Australia had already passed fifty percent.  When half of the population can be reached instantaneously and continuously, people begin to behave differently.  Our social patterns change.  My Sydneysider friends had crossed a conceptual divide into hyperconnectivity, while I was mired in an old, discrete and disconnected conception of human relationships.

We rarely recall how different things were before everyone carried a mobile.  The mobile has become such an essential part of our kit that on those rare occasions when we leave it at home or lose track of it, we feel a constant tug, like the phantom pain of a missing limb.  Although we are loath to admit it, we need our mobiles to bring order to our lives.

We can take comfort in the fact that all of us feel this way.  Mobile subscription rates in Australia are greater than one hundred and twenty percent – more than one mobile per person, and one of the highest rates in the world.  We have voted with our feet, with our wallets and with our attention.  The default social posture in Sydney – and London and Tokyo and New York – is face down, absorbed in the mobile.  We stare at it, toy with it, play on it, but more than anything else, we reach through it to others, whether via voice calls, SMS, Facebook, Twitter, Foursquare or any of a constantly increasing number of ways.

The mobile takes the vast, anonymous and unknowable City, and makes it pocket-sized, friendly and personal.  If you ever run into a spot of bother, you can bring resources to hand – family, friends, colleagues, even professional fixers like lawyers and doctors – with the press of ten digits.  We give mobiles to our children and parents so they can call us – and so we can track them.  The mobile is the always-on lifeline, a different kind of 000, for a different class of needs.

Yet these connections needn’t follow the well-trodden paths of family-friends-neighbors-colleagues.  Because everyone is connected, we can connect to anyone we wish.  We can ignore protocol and reach directly into an organization, or between silos, or from bottom to top, without obeying any of the niceties described on org charts or contact sheets.  People might choose to connect in an orderly fashion – when it suits them.  Otherwise, they will connect to their greatest advantage, whether or not that suits your purposes, protocols, or needs.  When people need a lifeline, they will find it, and once they’ve found it, they will share it with others.

How does the City connect to its residents?  Now that everyone in the City – residents and employees and administrators and directors – is hyperconnected, how should the City structure its access policies?  Is the City a solid wall with a single door?  Connection is about relationship, and relationships grow from a continuity of interactions.  Each time a resident connects to the City, is that a one-off, an event with no prior memory and no future impact?

It is now possible to give each resident of the City their own custom phone number which they could use to contact the City, a number which would encompass their history with the City.  If they use an unblocked mobile, they already provide the City with a unique number.  Could this be the cornerstone of a deeper and more consistent connection with the City?
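To make that concrete, here is a minimal sketch – in Python, with every name and number invented for illustration – of the idea: the caller's mobile number becomes the key to a continuous record of interactions, so that no contact need be a one-off event.

```python
# Sketch only: a caller-ID-keyed interaction history for the City.
# All names and the phone number below are hypothetical.
from collections import defaultdict
from datetime import datetime

interaction_history = defaultdict(list)   # caller ID -> list of interactions

def log_contact(caller_id: str, topic: str, notes: str) -> None:
    """Record each contact against the resident's number."""
    interaction_history[caller_id].append(
        {"when": datetime.now(), "topic": topic, "notes": notes}
    )

def history_for(caller_id: str) -> list:
    """When the resident rings again, their whole history is at hand."""
    return interaction_history[caller_id]

# Two calls from the same number build a continuing relationship:
log_contact("+61400000000", "waste collection", "missed pickup reported")
log_contact("+61400000000", "waste collection", "pickup rescheduled for Friday")
print(len(history_for("+61400000000")))   # -> 2
```

The data structure is trivial; the point is the relationship it represents – memory across contacts, keyed to the number the resident already carries.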

II

Connecting is an end in itself – smoothing our social interactions, clearing the barriers to commerce and community – but connection also provides a platform for new kinds of activities.  Connectivity is like mains power: once it is everywhere, a world where people own refrigerators and televisions becomes possible.

When people connect, their first, immediate and natural response is to share.  People share what interests them with people they believe share those interests.  In the early days that sharing can feel very unfocused.  We all know relatives or friends who have gone online and suddenly started to forward us every bad joke, cute kitten or chain letter that comes their way.  (Perhaps we did these things too.)  Someone eventually tells the overeager sharer to think before they share.  They learn the etiquette of sharing.  Life gets easier – and more interesting – for everyone.

Once we have learned who wants to know what, we have integrated ourselves into a very powerful network for the dissemination of knowledge.  In the 21st century, news comes and finds us.  If it’s important to us, the things we need to know will filter their way through our connections, shared from person to person, delivered via multiple connections.  Our process of learning about the world has become multifocal; some of it comes from what we see and those we meet, some from what we read or watch, and the rest from those we connect with.

The connected world, with its dense networks, has become an incredibly rapid platform for the distribution of any bit of knowledge – honest truth, rumor, and outright lies.  Anything, however trivial, finds its way to us, if we consider it important.   Hyperconnectivity provides a platform for a breadth of ‘situational awareness’ beyond even the wildest imaginings of MI6 or ASIO.

In a practical sense, sharing means every resident of the City can now possess detailed awareness of the City.  This condition develops naturally and automatically simply by training one’s attention on the City.  The more one looks, the more one sees how to connect to those sharing matters of importance about the City.

This means the City is no longer the authoritative resource about itself.  Networks of individuals, sharing information relevant to them, have become the channel through which information comes to residents.  This includes information supplied by the City, as one element among many – but this information may be recontextualized, edited, curated or twisted to suit the aims of those doing the sharing.

This leads to a great deal of confusion: what happens when official and shared sources of information differ?  In general, individuals tend to trust their networks, granting them greater authority than statutory authorities.  (This is why rumors are so hard to defeat.)  Multiple, reinforcing sources of information offer the City a counterbalance to the persuasive power of the network.  These interconnected sources constitute a network in themselves, and as people connect to the network of the City, this authoritative information will be shared widely through their networks.

The City needs to evaluate all of the information it provides to its residents as shared resources.  Can they be divided, edited, and mashed-up?  Can these resources be taken out of context?  The more useful the City can make its information – more than just words on a page, or figures, or the static image of a floor plan – the more likely it is to be shared.  How can one resident share a City resource with another?  If you make sharing easy, it is more likely your own resources will be shared.  If you make sharing difficult – or if your resources are freely available but inaccurate – residents will create their own shared resources, which may not be as accurate as those offered by the City.

Sharing is not a one-way street.  Just as the City offers up its resources to its residents, the City should be connected to these sharing communities, ready to recognize and amplify networks that share useful information.  The City has the advantage of a ‘bully pulpit’: when the City promotes something, it achieves immediate visibility.  Furthermore, when the City recognizes a shared resource, it relieves the City of the burden of providing that resource to its residents.  Although connecting to the residents of the City is not free – time and labour are required – that cost is recovered in savings as residents share resources with one another.  The City need not be a passive actor in such a situation; the City can sponsor competitions or promotions, setting its focus on specific areas it wants residents to take up for themselves.

III

We begin by sharing everything, but as that becomes noisy (and boring), we focus on sharing those things which interest us most.  We forge bonds with others interested in the same things.  These networks of sharing provide an opportunity for anyone to involve themselves fully within any domain deemed important – or at least interesting.  The sharing network becomes a classroom of sorts, where anyone expert in any area, however peculiar, becomes recognized, promoted, and well-connected.  If you know something that others want to know, they will find you.

By sharing what we know, we advertise our expertise.  It follows us wherever we go.  In addition to everything else, we are each a unique set of knowledge, experience and capabilities which, in the right situation, proves uniquely valuable.  Because this information is mostly hidden from view, it is impossible for us to look at one another and see the depth that each of us carries within us.

Every time we share, we reveal the secret expert within ourselves.  Because we constantly share ourselves with our friends, family and co-workers, they come to rely on what we know.  But what of our neighbors, our co-residents in the City?  We walk the City’s streets with little sense of the expertise that surrounds us.

Before hyperconnectivity, it was difficult to share expertise.  You could reach a few people – those closest to you – but unless your skills were particularly renowned or valuable, that’s where it stopped.  For good or ill, our experience and knowledge now  extend far beyond the circle of those familiar to us.

With its millions of residents, the City represents a pool of knowledge and experience beyond compare, which, until this moment, lay tantalizingly beyond reach.  Now that we can come together around what we know – or want to learn – we find another type of community emerging, a community driven by expertise.  Some of these communities are global and diffuse: just a few people here, a few more there.  Other communities are local and dense, because they organize themselves around the physical communities where they live.  Every city is now becoming such a community of knowledge, with the most hyperconnected cities – like San Francisco, Tokyo, and Sydney – leading the way.

The resident of the connected city shares their expertise about the city.  This sharing brings them into community with other residents who also share, or want to benefit from that expertise.  Residents recognize that it is possible to learn as much as they need to know, simply by focusing on those who already know what they need to learn.

Seen this way, the City is a knowledge network composed of its residents.  This network emerges naturally from the sharing activities of those residents, and necessarily incorporates the City as an element in that network.  Residents refer to one another’s expertise in order to make their way in the City, and much of this refers back to the City itself.

How does this City situate itself within these networks of knowledge and expertise?  How can the City take these hidden reservoirs of knowledge and bring them to the surface?  These sorts of tasks are commonplace on digital social networks like Facebook and LinkedIn, but both emphasise the global reach of cyberspace, not the restricted terrain of a suburb.  How can I learn who in my neighborhood has worked with Sydney’s development authorities, so I can get some advice on my own application?  How can I share with others what I have learned through a development application process?

This is the idea at the core of the connected city.  We have connected but remain in darkness, blind to one another.  As the lights come up, we immediately see who knows what.  We ourselves are illuminated by what we know.  As we transition from sharing into learning, we gain the knowledge of the brightest, and the expertise of the most experienced.  We don’t even need to go looking: because it is important, this knowledge comes and finds us.

Long before 2030, everyone in the City will have the full advantage of the knowledge and experience of every other resident of the City.  The City will be nurturing residents who are all as smart and capable as the smartest and most capable among them.  When residents put that knowledge to work, they will redefine the City.

IV

For as long as there have been cities, people have quit their villages and migrated to them.  Two hundred years ago, peasants headed into the great cities of London and Manchester, walking into a hellhole of disease and misery, knowing their chances for a good life were measurably better.  They learned this from their brothers, sisters and cousins who had made the move – and the statistics bear this out.  As dangerous and dirty as London might have been, life on the farm was worse.  With our understanding of sanitation and public health, this is even more true today:  even if you end up in a slum in Mumbai, Lagos or Rio, you and your children will live lives filled with opportunities not available back in the village.

As of 2008, fifty percent of humanity lived in cities — a revolution ten thousand years in the making, yet only half complete.  That migration has accompanied the greatest rise in human lifespan since the birth of our species.  We thrive in cities.  We are meant to be urban animals.

The transition from village to city is a move across both space and time, a traumatic leap across an abyss.  The headspace of the city is very, very different from the village.  In the village, everyone knows you.  In the city you are anonymous.  In the village you are ruled by custom, in the city, governed by law.  You arrive in the city knowing none of its ways, thriving only if you master them.

When my great-grandparents left their Sicilian villages for the industrial city of Boston, Massachusetts, they knew of other relatives who had undertaken the same journey, brothers and sisters who had made their own way in America, and who would be their safe haven when they arrived.  Family and friends have always helped new arrivals get settled in the big city.  This is the reason for the ethnic communities and ghettos we associate with immigration.  As the largest city in a nation with an active and aggressive immigration policy, Sydney has scores of these communities.  It has always been this way, everywhere, in every city, because the immigrant community is a network of knowledge that connects to new arrivals in order to give them a leg up.

Something similar happened to me eight years ago, when I moved to Sydney from Los Angeles.  I knew a few people, who became my entry point into the community: my first job, my first flat, and my first mobile all manifested through the auspices of these friends.  They shared what they knew in order to propel me into success.

These immigrant knowledge networks have always existed, informally.  The most successful immigrants inevitably have strong networks of knowledge backing them up – people, more than facts.  It’s not what you know, but who, because who you know is what you know.  The more you know, the more effective you can be.  An immigrant learns how to get a good job, a good flat, a good education for their children, because they are in connection with those who share their experiences, good and bad, with them.

The immigrant’s path to success also works for the rest of us.  Our capabilities can be measured by our networks of connections.  As the City reveals itself as a human network, where residents connect, share, and learn, we each become as capable as the most capable among us when we put what we have learned into practice.  This is the endpoint of the new urban revolution of the connected city: radical empowerment for every resident.

Consider: every resident of the City you work with now brings with them the collective experience of every resident, past and present.  Only a few of them know how to put that knowledge to work.  As they learn, they share that knowledge, and it spreads, until every resident of the City has mastery of the wealth of human resources now available to each resident.

From one point of view, this is an amazing boon: City residents will know exactly how to have their local needs met.  They will have almost perfect knowledge about the right way to get things done.  That takes a burden off the City, and distributes it among the residents – where it should be, but where it never could be, before hyperconnectivity.  Residents will do the work for themselves.  You will be there to facilitate, to maintain, and to mediate.

Yet this is no urban utopia.  Residents who know how to get their way will grow accustomed to having their way.  When they can not get it – when you can not give it to them – they will go to war.  Everything everyone has ever learned about how to fight City Hall is available to every resident of the City.  Dangerous capabilities, that might have been reserved for the most dire conflicts, will begin to pop up in the most ridiculous and ephemeral situations.  The residents of the City will be able to act like five-year-olds who have been equipped with thermonuclear weapons.

We are all becoming vastly more capable.  Nothing can stop that.  It is a direct consequence of connection.  If the residents of the City grow too powerful, too quickly, the social fabric of the City will rip apart.  To counter this, the City must grow its capabilities in lockstep with its residents.  The City must connect, share, and learn, not just (or even first) with its residents, but with its employees.  When every City employee has the full knowledge and experiential resources of all of the employees of the City, the City will be able to confront an army of impetuous and empowered residents on its own terms.

That’s where you need to go.  That’s how you need to frame employee development, knowledge sharing, and capacity building.  Everyone who works for the City must become an expert in the whole of the City.  Yes, people will continue to specialize, and those specialties must become the shared elements that form the backbone of the City’s knowledge networks.  Everyone who works for the City must learn how to create and use these networks to increase the City’s capability.  They are the connected city.

CONCLUSION

The year 2030 is just a bit more than half a billion heartbeats away.  Most of the processes I have described are already well developed, and will complete long before 2030.  The future is already here, in bits and pieces that grow more widespread every day.  We have connected, we are sharing and learning, turning what we know into what we can do.

You have the opportunity to foster an urban environment where residents work together in close coordination to make the City an even better place to live.  Or, petty wars could flame up across our neighborhoods, as we fight one another every step of the way.  The City can not just stand by. It must step up and join the fray, using all of the resources at its disposal to shape the sharing and learning going on all around us in a way that benefits the City’s residents.  The City which does that becomes irresistible, not just to its own residents, but to everyone.  A Connected City is the envy of the world.

The Soul of Web 2.0

Introduction: In The Beginning

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus PageMaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful.  But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’.  Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century.  Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web.  Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real.  I was one of them.  From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu.  This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web.  It wasn’t as functional as Xanadu – copyright management was a solved problem with Xanadu, whereas on the Web it continues to bedevil us – and Xanadu’s links were two-way affairs: you could follow a link from its destination back to its source.  But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died.  The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released.  ‘The Perfect is the Enemy of the Good’, and nowhere is it clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s.  When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits per second to 56 kilobits per second.  That’s the line for all of the traffic heading from one coast to the other.  I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, a more than ten-million-fold increase.  And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand.  And I wasn’t impressed.  In July 1993 very little content existed for the Web – just a handful of sites, mostly academic.  Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense.  I walked away from the computer that July afternoon wanting more.  Hypertext systems I’d seen before.  What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a Sun workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy.  Instead of a handful of sites, there were now hundreds.  There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site in the list.  By Friday evening I was finished.  I had surfed the entire Web.  It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993.  Then things began to explode.

From October on I became a Web evangelist.  My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic.  That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy Sun workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities.  As someone familiar walked in the door at the Salon, I’d take them over to my computer.  “What’s something you’re interested in?” I’d ask.  They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica”, and I’d go to Yahoo! – the newly created category index of the Web, then still running out of a small lab on the Stanford University campus – type in their interest, and up would come at least a few hits.  I’d click on one, watch the page load, and let them read.  “Wow!” they’d say.  “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos.  All I did was hook people by their own interests.  What happened in January 1994 in San Francisco happened throughout the world in January 1995 and January 1996, and is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented.  The individual, with their needs, their passions, their opinions, their desires and their goals, is always paramount.  We tend to forget this, or overlook it, or just plain ignore it.  We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate.  It’s not that we should ignore these considerations, but they are always secondary.  The Web is a ground for being.  Individuals do not present themselves as receptacles to be filled.  They are souls looking to be fulfilled.  This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web.  I’ll now break some of these lessons down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication.  I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late night missives from insomniac friends, that sort of thing.  As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared.  The sharing instinct is innate and immediate.  We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it.  We’ve always known this; it’s part of being a human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another.  It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby.  Everyone carries that hundred and fifty around inside of them.  Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with.  It’s automatic, requires no thought.  We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing.  Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten.  We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us.  It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does that work with creations that may be Web-based, or similarly constrained?  We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter.  You have to do more than request sharing.  You have to think through the entire goal of sharing, from the user’s perspective.  Are they sharing this because it’s interesting?  Are they sharing this because they want company?  Are they sharing this because it’s a competition or a contest or collaborative?  Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project.  What is it about the design of your work that excites them to share it with others?  Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential?  In other words, is there space only for one, or is there room to spread the word?  Why would anyone want to share your work?  You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question.  How will your work be shared?

Your works do not exist in isolation.  They are part of a continuum of other works.  Where does your work fit into that continuum?  How do the instructor and student approach that work?  Is it a top-down mandate?  Or is it something that filters up from below as word-of-mouth spreads?  How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected.  Is it simply via email – do all the students have email addresses?  Do they know the email addresses of their friends?  Or do you want your work shared via SMS?  A QR code, perhaps?  Or Facebook or Twitter or, well, who knows?  And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today.  It becomes painfully obvious when it’s been overlooked.  For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend.  There was simply no way to do that.  (I don’t know if this has changed recently.)  That made the iPad version of The Australian significantly less functional than its website version – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes.  The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts.  Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd.  Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this.  Everyone’s there, but no one is wholly aware of anyone else’s presence.  You might see an edit to a page, or a new post on the discussion page for a particular topic, but that’s as close as people come to one another.  Most of the connecting for the Wikipedians – the folks who, behind the scenes, make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose.  These are the social networks: Facebook, MySpace, LinkedIn, and so on.  In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication.  But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless.  Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting.  There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths.  Where you can poll your friends on Facebook, on Twitter you can poll a planet.  How do I solve this problem?  Where should I eat dinner tonight?  What’s going on over there?  These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles.  It is not Facebook-versus-Twitter; it is not tight connections versus loose connections.  It’s a bit of both.  Where does your work benefit from a tight collective of connected individuals?  Is it some sort of group problem-solving?  A creative activity that really comes into its own when a whole band of people play together?  Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms?  If a task constantly brings your friends to mind, that’s the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog, or playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’ – that’s the kind of task which benefits from loose connectivity.  Not every project will need both kinds of connecting, but almost every one will benefit from one or the other.  We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed.  (We are also more likely to reinforce each others’ prejudices and preconceptions, but that’s another matter of longstanding which technology can not help but amplify.)  Life is meaningful because we, together, give it meaning.  Life is bearable because we, together, bear the load for one another.  Human life is human connection.

The Web today is all about connecting.  That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it.  So how do your projects allow your users to connect?  Does your work leave them alone, helpless, friendless, and lonely?  Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic?  Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica.  That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun.  It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute.  There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy.  (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.)  By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of what its contributors have to offer.  For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook.  This is less because of Wikipedia-as-a-resource than it is because we all share a sense of pride-in-ownership of Wikipedia.  Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles.  Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us.  This is a powerful logic, an attraction which transcends the rational.  People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine that a time will come when Wikipedia is complete.  If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia.  Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than in 2005.  With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights.  Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth.  It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought.  It will mean that we have captured the better part of human knowledge in a form accessible to all.  That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’.  It is a work-in-progress.  Google understands this and releases interminable beta versions of every product.  More than this, it means that nothing needs to offer all the answers.  I would suggest that nothing should offer all the answers.  Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you.  It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility.  There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done.  This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system.  The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up.  TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments.  RateMyProfessors.com is the holy terror of the academy in the United States.  Each of these websites has had to design systems which allow users to self-regulate peer contributions.  In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags a post for later moderation.  Wikipedia promulgated a directive that strongly encouraged contributors to provide a footnote linking to supporting material.  TripAdvisor gives anonymous reviewers a lower ranking.  eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade.  Each of these is a social solution to a social problem.
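To make the first of these mechanisms concrete, here is a minimal sketch – in Python, with names and a threshold invented for illustration, not drawn from any of the sites above – of a ‘report this post’ button that flags content for later moderation:

```python
# Sketch only: crowd-flagging with human review. Names and threshold invented.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    body: str
    flags: int = 0        # how many readers have hit "report this post"
    hidden: bool = False  # pulled from view, pending human moderation

class ModerationQueue:
    """Readers do the triage; a human moderator makes the final call."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.posts = {}

    def add(self, post: Post) -> None:
        self.posts[post.post_id] = post

    def report(self, post_id: int) -> None:
        post = self.posts[post_id]
        post.flags += 1
        if post.flags >= self.threshold:
            post.hidden = True  # enough flags: hide until a moderator reviews

    def pending_review(self) -> list:
        return [p for p in self.posts.values() if p.hidden]

# Usage: three reports cross the threshold and queue the post for review.
queue = ModerationQueue()
queue.add(Post(1, "dubious claim"))
for _ in range(3):
    queue.report(1)
print([p.post_id for p in queue.pending_review()])   # -> [1]
```

The code is the least of it; the social design does the work.  The crowd triages, the moderator decides, and the cost of a mistaken flag stays low because anything hidden can be restored.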

Web2.0 is not a technology.  It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil.  It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention.  Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a Web2.0 concept that should guide you in your design work is iteration.  Nothing is ever complete, nor ever perfect.  The perfect is the enemy of the good, so if you wait for perfection, you will never release.  Instead, watch your users, see if they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work.  In their more uncharitable moments, do they abuse the freedoms you have given them?  If so, how can you redesign your work, and ‘nudge’ them into better behavior?  It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem.  And although Moses came down from Mount Sinai with all ten commandments, you can not and should not expect to get it right on a first pass.  Instead, release, observe, adapt, and re-release.  All releases are soft releases, everything is provisional, and nothing is quite perfect.  That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter.  Although they seem to be similar, they couldn’t be more different.  Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself.  If you want to have access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page.  Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut your application down if they don’t like it, or perceive it as somehow competitive with Facebook.  Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach.  From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users.  Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data.  Twitter provided very clear (and remarkably straightforward) instruction on how to access their data, and threw the gates open wide.

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks.  People have developed computer programs that send Tweets when the program is about to crash, created vast art projects which allow the public to participate from anywhere around the world, or even a little belt worn by a pregnant woman which sends out a Tweet every time the baby kicks!  It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself.  Twitter has become a building block: when you write a program which needs to send a message, you use Twitter.  Facebook isn’t a building block.  It’s a monolith.

How do you build for openness?  Consider: another position the user might occupy is someone trying to use your work as a building block within their own project.  Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again?  Or is it opaque, seamless, and closed?  What about the data you collect, data the user has generated?  Where does that live?  Can it be exported and put to work in another application, or on another website?  Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful).  The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them.  If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
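As a sketch of what ‘leaving the door open’ might look like in practice – the framework (Flask), route names and data here are all my own invention, not a prescription – the same information can be published both as a REST-style JSON endpoint and as an RSS feed:

```python
# Sketch only: one data set, two open doors (JSON and RSS).
from flask import Flask, jsonify, Response

app = Flask(__name__)

# Hypothetical data; in a real project this would come from your application.
ITEMS = [
    {"title": "First entry", "link": "http://example.com/1"},
    {"title": "Second entry", "link": "http://example.com/2"},
]

@app.route("/api/items")
def items_json():
    # Machine-readable JSON lets someone else build on your data.
    return jsonify(items=ITEMS)

@app.route("/feed.rss")
def items_rss():
    # The same data as RSS lets feed readers and aggregators subscribe.
    entries = "".join(
        f"<item><title>{i['title']}</title><link>{i['link']}</link></item>"
        for i in ITEMS
    )
    rss = (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        "<title>Example feed</title><link>http://example.com/</link>"
        "<description>Open data sketch</description>"
        f"{entries}</channel></rss>"
    )
    return Response(rss, mimetype="application/rss+xml")

if __name__ == "__main__":
    app.run()
```

Either door alone would do; offering both costs a few lines and multiplies the ways your work can be re-used.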

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’  You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user.  These are not precisely the same Web2.0 domains others might identify.  That’s because Web2.0 has become a very ill-defined term.  It can mean whatever we want it to mean.  But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another.  In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create.  We need to make room for them.  If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

Little, Big

Introduction: Constructing a Child

In November of 1998, I attended a conference on technology and design in Amsterdam, and brought along two mates itching for an excuse to visit Europe. We all stayed at the flat of my good friends, Neil and Kylin. I dutifully attended the conference every day as the rest of them went out carousing through the various less-reputable quarters of Amsterdam, and we all had a great time. As Kylin tells it – given that she was the only woman on this Cook’s Tour – when we departed, we left a lingering residue of testosterone in their flat, and (if they calculated correctly) the very day after we departed for Los Angeles, they conceived their daughter Bey.

In February 1999, Neil and Kylin emailed all their friends, telling us of their plans to move – immediately – from Amsterdam to Florida. No explanation given. Through some weird intuition, I figured it out: Kylin was pregnant. I called her, and put the question to her directly. “How did you know?” she gasped. “We’ve been keeping it top secret.”

I don’t know how I knew. But I was overjoyed: I’m part of a generation who waited a long, long time to have children – my own nephews weren’t born until 2001 and 2002; none of my close friends had children in 1999. Neil and Kylin were the first.

It got me to pondering, as I ran a little thought experiment: what would the world of their daughter, still in utero, look like? What would her experience of that world be?

A month earlier, my friend Terence McKenna had challenged me to write a book. “You mouth off enough,” he suggested, “so maybe you should get it all down?” When he laid that challenge before me, I had no idea what I’d write a book about.

Somehow, as soon as I heard about Kylin’s pregnancy, I knew. I had to write a book about the world that child would grow up into, because that world would look nothing like the world I had been born into back in 1962. That child wouldn’t need this book. Her parents would.

A few months later I attended another conference, at MIT, where I heard psychologist Sherry Turkle talk about her work with young children. Turkle has been exploring how technology changes children’s behaviors, and, in this specific case, she’d taken a long look at a brand new toy: in fact, that season’s “hot” toy, the “Furby”.

Furby is an electromechanical plush toy, capable of responding to various actions by the child, but Furby also presents the child with demands – to be fed, to be played with, to be put to sleep when tired. More than merely interactive, Furby presents children with some of the qualities we recognize as innate to living things. Would a small child recognize Furby as inanimate, like a doll, or animate, like a pet?

From research in developmental psychology we know that children develop the categories of “inanimate” and “animate” when they’re around four years old. The development of these categories is a “constructivist” process – children do not need to be taught the difference between these two states; rather, they intuit the difference through continued interactions with animate and inanimate objects. Thus, an object, like Furby, which displays characteristics associated with both categories, should pose quite a philosophical conundrum for a small child.

Turkle put the question to these children: is Furby like your puppy? Is it like your doll? These children, little philosophical geniuses, gave her an answer she never expected to receive. They said it’s like neither of them. It is a thing itself, something in-between. They had no name for this third category between animate and inanimate, but they knew it existed, for they had direct experience of it.

This was my penny-drop moment: constructivism states that all children learn how the world works through their interactions within it. And we had suddenly changed the rules. We had infused the material world with the fairy dust of interactivity, creating the Pinocchio-like Furby, and, in so doing, created a new ontological category. It is not a category that adults acknowledge – in fact, many adults find Furby slightly “creepy” precisely because it straddles two very familiar categories – but, in another generation, by the time these children are our age, that category will have a name, and will be accepted as a matter of course.

This is what Neil and Kylin – and, really, parents everywhere – need to know: the world has changed, the world is changing, and the world’s going to change a whole lot more. We may be the first beneficiaries of this great upwelling of technology, but the lasting benefits will be conferred upon our posterity, for it is changing the way they think. Their understanding of the world is, in some ways, utterly different from our own. And, just now, just over the last year or two, we’ve thrown a new element into the mix. We’re gracing ourselves with a new kind of connectivity – I call it “hyperconnectivity” – which turbocharges some of the most essential features of human beings. This newest frontier – which did not exist even a decade ago – is what I want to focus upon this morning.

I: Who Are We?

We human beings are smart. Very smart. So smart we run the joint. But there’s a heavy price to be paid for all those brains. To start with, our heads are so big that we very nearly kill our mothers in the act of giving birth. Human births are so dangerous that we’re the only species we know of which can’t handle the act of birth alone.

We need others around – historically, other women – assisting us in the process. This point is essential to our humanity: we need other people. There is no way that a human, alone, can survive.

Yes, there are a few isolated incidents of “wolf boys” and Robinson Crusoe-types, battling against the odds in an indifferent or inimical environment, but, for far longer than we have been human, we have been social.

You can go back through the tree of life, a full eleven million years, to Proconsul, the common ancestor of gorillas, chimpanzees, bonobos and humans, and that animal was a social animal. It’s in our genes. It’s what we are. But why?

The answer is simple enough: eleven million years ago, those of our ancestors with the best social skills could most dependably count on help from others. That help was essential to their survival. That help allowed them to live long enough to pass those social genes and social behaviors along to their children. That help was essential, once our brains grew big enough to create trouble in the birth canal, for the next generation of human beings to come into the world. Cleverly, nature has crafted a species which, from the moment of the first birth pangs, must be social in order to survive. That pressure – a “selection pressure”, as it’s known in biology – is probably the essential, defining feature of humanity.

In an article in the May 17 2008 issue of New Scientist, an author rhapsodized about the end of “human exceptionalism”. Ethology and zoology have taught us that all of the behaviors we consider uniquely human do, in fact, exist broadly among other species. Whales have culture, of a sort. Chimpanzees use gestures to communicate their needs and wants, just like a child does. Dolphins have names. But each of these species, smart as they may be, deliver their young unassisted. They do not need help from their fellows to enter this world.

We are delivered by social means, and live our entire lives in a social order. What was essential at birth becomes even more important as an infant and toddler: because of our huge brains we remain helpless far longer than any other species.

A mother caring for a newborn infant has a full-time task on her hands. She can not devote her energies to finding food or shelter. Her attention is divided, but mostly focused on her child. Here again, the strong bonds of socialization create an environment where women (again) will altruistically bear some of the burden for mother and newborn. This altruism is reciprocal: as other women bear children, these mothers, with older children, will bear some of the burden for them.

This means that the mothers best able to forge strong social bonds with other women will have the most help at hand when they need it. This means, all things being equal, their children will be more likely to survive, and the chain of genes and behaviors gets passed along to another generation. This is another selection pressure which has, over millions of years, turned us into thoroughly social animals.

An interesting point to note here is that women have always had stronger selection pressures toward social behavior than men. I will come back to this.

Given that so much of our success is based upon our ability to socialize with others, and given that additional social skills confer additional advantage which increases selection success, as we evolved into our modern form – Homo sapiens sapiens – natural selection tended to emphasize our social characteristics. Being social has ever been the best way to get ahead.

In the last million years, as our brains grew explosively – as one scientist put it, “perhaps the most improbable event in all of evolution, anywhere” – much of the potential of all that new gray matter was put to work for social benefit. The “new brain” or neocortex, which is the most dramatically enlarged portion of the human brain, seems to be the area dedicated to our social relationships.

We know this because, in 1992, British anthropologist Robin Dunbar compared the average troop size of gorillas and chimpanzees against the average tribe sizes of humans. He found that there was a direct correlation between the relative volume of the neocortex in these three species and their average troop or tribe size. This value, known as “Dunbar’s Number”, is roughly 20 for gorillas, who have the smallest neocortex, about 35 for chimpanzees, and – for us lucky human beings, who have the greatest selection pressures on our social behavior – just under one hundred and fifty. We may not be entirely exceptional, but we’re doing quite well.
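Expressed as the log-linear regression commonly quoted from Dunbar’s 1992 paper – the coefficients below are the figures usually cited in the literature, not taken from this talk – a human neocortex ratio of roughly 4.1 yields the famous result:

```latex
\log_{10}(N) \approx 0.093 + 3.389\,\log_{10}(C_R)
\quad\Longrightarrow\quad
N \approx 10^{\,0.093 + 3.389\,\log_{10}(4.1)} \approx 148
```

where \(N\) is the predicted group size and \(C_R\) the ratio of neocortex volume to the volume of the rest of the brain.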

Essentially, inside of each one of our heads, there are a hundred and fifty other people running around. Yes, that sounds a bit crowded (particularly when they’re up partying all night long with their mates), but it’s actually eminently practical. These “little people” inside our heads are models of each person we know well: our family, our friends, our colleagues. For each of these people we build a mental model which helps us to predict their behavior. (It isn’t really them, but rather, our image of them.) This predictive capability smooths our social interactions. We know how to interact with people whom we have in our heads; with others we remain demure, reserved – in a word, predictable. Only with intimacy do we express the quirks of behavior which make us unique, and only with intimacy do we take note of them in others.

We all know more than a hundred and fifty people. Some folks on Facebook and MySpace claim thousands of “friends”. But most of these folks aren’t in our heads. There’s a simple rule you can use to tell whether someone is in your head: I call it the “sharing test”. Let’s suppose you see something – on the Web, in the newspaper, on the telly – so meaningful (funny, or poignant, or just so salient to whatever passions drive you) that in the next moment you think, “Wow, I know Dazza would really enjoy that.” And you flip the link along in an email. Or you send Dazza a text message with, “Hey, mate, did you see that thing just now on TEN?” And if he didn’t see it, you ring and fill him in. It’s that moment of unrestrained sharing – it feels almost automatic, and it’s an essential part of what we are – which defines the most visible quality of those people inside our heads.

Every time we share something with those little people in our head, we reinforce that relationship; we strengthen the social bonds which tie us to one another. Fifty thousand years ago this had enormous practical benefits: sharing where the best fruit grew – or the location of a predator in the tall grass – kept everyone alive and healthy. The selection pressure for sociability made us expert at sharing.

It’s interesting to watch this behavior as expressed by children; in some ways they share automatically – children love to share their experiences. In other situations – such as with a favorite toy – children must be taught to share, to override the natural selfishness of the singular animal, overruling that intrinsic behavior with the altruistic behavior of the social human. Sharing is one of the most important lessons parents teach their children, and if that lesson is poorly taught, it leaves a child at a permanent disadvantage.

While our genes make us sociable, our sharing behaviors are more software than hardware; this is why they must be taught. It takes time for any child to learn that lesson, just as it took quite a while for humans, as a species, to learn it. Geneticists tell us that human beings haven’t changed genetically in any significant way in at least 60,000 years, yet civilization didn’t kick off in a meaningful way until about ten thousand years ago.

This has been a bit of a puzzler for paleoanthropologists, but a new theory – which I also read about in New Scientist – seems to make sense of that gap: while we had the raw capacity for civilized behavior long ago, it took us 50,000 years to write the cultural software for civilization. Over those years, as we learned about ourselves and our world, our behavior changed and we taught these changes to our children, who improved upon them, passing those changes along.

In short, our entire species spent a long time in primary school (and might even have been kept back a few grades) before graduation. The incredible wealth of cultural learning – which we don’t really even reflect on, because it seems so essential and obvious to us – was painstakingly developed across two thousand generations.

Our secondary studies, as a species, included that most unique of human institutions: the city. The earliest cities, such as Jericho and Çatal Höyük, already housed thousands of inhabitants – far beyond the reach of Dunbar’s Number.

That in itself presented a singular challenge for humanity because, as near as we can tell, pre-civilized humans lived in a perpetual state of war – the “war of all against all” – waged against all those not in their own tribes.

At the end of May 2008, we saw photos of a newly discovered tribe in the far reaches of the Amazon, who reacted to the presence of an aircraft by firing bows at it. Human beings possess an inherent xenophobia, and the boundaries of the “in group” conform to the limits of Dunbar’s Number.

Given this, how did we all come to live together in ever-greater numbers? Simply this: the cultural software of civilization provided a greater selection advantage than that afforded by the tribal order which preceded it. Civilization is a broader form of sharing, where altruism is replaced by roles: the butcher, the baker, the candlestick maker. In civilization we share the manifold burdens of life by specializing, then we trade these specialized goods and services amongst ourselves. And it works.

Civilized human beings live in greater numbers, with greater population density, than pre-civilized cultures. It does not work perfectly: we have crime and poverty precisely because there are people in our cities who can fall through the “safety net” of civilized society. These eternal blights are the specific diseases of civilization. Yet the upsides of this broader and more diffuse form of sharing so outweighed the downsides that these evils have been tacitly acknowledged as the “price of progress.”

So things continued, merrily, for the last ten thousand years. Cities rose and fell; empires rose and fell; cultures and languages and entire peoples rose up suddenly, only to vanish just as quickly. All along the way, we continued adding to our cultural software. We learned – fairly early on – to record our learning in permanent form. We codified the essential elements of the software of civilization in laws and commandments.

We experimented with every form of human social organization, from the military dictatorship of Sparta, to the centralized bureaucracy of China, to the open democracy of Athens, to the chaotic anarchism of the Paris Commune. At each step along the way, we passed these lessons along, in an unbroken chain, to the generations that followed.

We are the children of nearly five hundred generations of civilization. The lessons learned over that immense span of time have brought us to the threshold of a revolution as comprehensive as that which obsolesced our tribal natures and replaced them with more civilized forms. Once again, the selection pressures of sociability force us into a narrow passage, toward another birth.

II: Where Are We Going?

We know that our amazingly comprehensive social skills are located in the newest part of our brain; we also know that they are among the last capabilities to mature during our cognitive development. Our sociability depends upon so much: a strong command of language, the ability to empathize and sympathize, the ability to consider the wants and needs of others, the ability to give freely of one’s self – altruism. At any point this complex and delicate process can be interrupted, by nature or by nurture.

My own nephew, Alexander, was diagnosed with an Autism Spectrum Disorder at the end of 2005. For leading-edge brain researchers, autism represents a natural failure of the brain’s inherent capability to model the behavior of others. The hundred and fifty people running around inside of the head of someone with an Autism Spectrum Disorder are shaped differently than the ones running about in mine; they still exist, but they are not (in an admittedly subjective assessment) as complete. Now that we know roughly what autism is, we work with these children intensively, because, while they lack certain inherent features we associate with normalcy, these children, if diagnosed early enough, can learn to become much more sensitive to the world-views and feelings of others.

My nephew attended a state-of-the-art pre-school in his San Diego suburb, where autistic children and “normal” children (such as his year-younger brother, Andrew) mix freely, because it is now known that the autistic children can and will learn necessary social skills through this continuous interaction. Alexander has now been mainstreamed, while my younger nephew remains as a “peer” in this school, showing other children how to be a fully socialized human being.

Then there are the children who have suffered neglect or abuse. Not having been nurtured themselves, they have not learned how to nurture others. This deficit manifests as emotional withdrawal, or in anti-social behaviors. Children who have not received love cannot find it within themselves to love others. It is not that love is learned, per se, but rather, that we learn to recognize it as others demonstrate it toward us. The drive to connect with another human being, although entirely inherent, can be so confused, or so atrophied through disuse (these areas of the brain, if under-stimulated, will die away, leaving the child with a permanent deficit), that the child essentially becomes locked into a solitary world, unable to initiate or maintain the social relationships essential to success.

None of us are perfect; all of us feel embarrassment and disappointment and awkwardness in a range of social situations. Yet those sensations, of themselves, are proof of our normalcy: we sense our social shortcomings. We had little awareness of our social nature when we were young. Only as we matured, turning the corner into tweenhood, did we rise into an awareness of the strong social bonds which form the largest part of our experience as human beings. For each and every one of us, this is a painful experience.

The brain, furiously making connections between regions which have been developing from before birth, integrates our comprehensive understanding of human behavior, our own emotional state, and our perceptions of the actions and emotions of others to create a model of how we are viewed by others, our “social standing”. It is this that natural selection has driven us to optimize: individuals with the highest social standing get the lion’s share of attention, affection and resources.

In particular, this burden lies heaviest on young women, who have the additional selection pressure (now more-or-less vestigial) driving them to form the social bonds of altruism with their peers which would, in prehistoric times, lead to greater help with childbearing and child-rearing. Young women emerge into a social consciousness so rich and so complex it makes young men look nearly autistic in comparison.

It is the reason why young women invest themselves so wholly in their looks, in their friends, in their cliques, in the “in group” and the “out group”. Films like Heathers (one of my personal favorites) and Mean Girls tell tales as old as humanity: the rise into social consciousness of that most social of all the animals on the planet – the young woman.

It also provides some explanation for why young women are often emotionally overwrought. It isn’t just hormones. It’s the rising awareness of a vast social game that they don’t know how to play, with rules taught only through trial and error. Every mistake is potentially fatal, every success fleeting. And each of these moments of singular significance is amplified by a genetic imperative, a drive to connect, which leaves them helpless. Resistance is futile, and engagement only brings more learning, and more pain.

Oh, and we just made things a whole lot more complicated.

This generation of young adults, coming of age just now, has access to the best tools for connection and communication created by our species.

A few years ago, these kids, bounded by proximity and temporality, took their cues from their immediate peers. But now these connections can be forged via text messages, or MySpace pages, or YouTube videos, and so on. An average fifteen-year-old girl might send and receive a hundred text messages in a single day and think nothing of it. Her inherent drive to connect has been freed from space and time; she can reach out everywhere, at any time; she can be reached anywhere, anytime. We have added a technological dimension – an intense and comprehensive acceleration – to a wholly natural process.

During the two hundred years of the industrial revolution, we amplified our capability for physical work. Steam engines and electric motors replaced muscle. As we moved from physical labor to monitoring and control of our machines, our capacity for work exploded, transforming the world. Still, these changes were entirely external. They did not affect our nature as social beings, but simply extended our physical capabilities. Now – just now – we have moved beyond the physical extension of our capabilities into a comprehensive amplification of our social nature. The mobile and the Internet are already transforming the human world as utterly as the steam engine transformed the landscape; but this transformation is happening in eighth-time.

The transition to industrialization, which took about a hundred years to complete, seems slow when compared to the rise of the Human Network, which will take about fifteen years, end-to-end.

Already, half of humanity owns a mobile phone; within about three years, three-quarters of the planet will own a mobile. That’s everyone except for the most desperately poor among us. No one, anywhere, expected this, because no one reckoned on this most basic of all human drives – the need to connect. The mobile is the steam engine, the electric motor, and the internal combustion engine of the 21st century: every bit of the potential framed by each of these enormous innovations now rests comfortably in the palm of three and a half billion hands.

Getting the tools for the amplification of our social natures is only half the story. That’s just hardware. What really counts is the software. And that’s why we turn, at the end of this tale, to Bey, the child conceived by Neil and Kylin, back in the last days of 1998.

III: Who Will Lead the Way?

Hardware is not enough. We spent fifty thousand years in idle, despite the best cognitive hardware on the planet, before anything truly interesting occurred. We are ensuring that every single person on Earth has a connection to the Human Network, but that doesn’t mean any of us know how to use it. Still, we are learning. And humans excel at learning from one another.

A recent study run with young chimps and toddlers showed that the chimps surpassed the toddlers in their cognitive capabilities, but that the toddlers far surpassed the chimpanzees in their ability to “ape” behavior. Humans learn by mimesis: the observation of our parents, our peers, our mentors and teachers. (Which is why the injunction, “Do as I say, not as I do,” never works.) As such, we closely observe each other to learn what works, and we copy it. This mimetic behavior, which used to be constrained by distance, has itself become a global phenomenon. Whatever works gets copied widely. It could be a good behavior, or a bad behavior: the only metric is the success of the behavior. If it achieves its ends, it will be observed and copied, widely and nearly instantaneously.

It took us two thousand generations to build up the cognitive software for civilization, as individual tribes made the same discoveries, independently, but lacked the means to share them. Even the diffusion of agriculture depended more on the migration of whole peoples than the dissemination of knowledge.

Today, a clever tip finds its way onto YouTube in minutes, a rumor can sweep through a nation in the time it takes to forward a text message, and a blog post can cut billions off the valuation of a publicly listed firm. We are “hyperconnected,” but, newly delivered into this state of being, we are still quite immature.

We know how to be social beings, but never before have we been globally and instantaneously social. For this reason, we are learning – and each of us is intensely involved in this education. We are learning from ourselves, applying the lessons of our own socialization, to see if these lessons work in this new world. That’s pure constructivism. We are learning from each other, watching our peers as intently as any young woman would when desperately trying to defend her position in an ever-more-competitive social circle. That’s pure mimesis. Together they’re a potent combination, and, when multiplied by the accelerator of the Human Network, it means we’re learning very rapidly indeed. Learning is never complete: ignorance is a permanent feature of the human condition. That said, competence can come quickly, when the students are wholly engaged in learning. As we are.

This means that, in another two or three years, when Bey is old enough to get her first mobile phone, at precisely the moment that she begins to awaken to her intense cognitive capabilities as a social animal, those abilities will have been so comprehensively rewritten and transformed by the new software of sociability that she will find herself suddenly both intensely empowered and, most likely, entirely overwhelmed.

Bey will be among the first children who become socially aware within a world where the definition, rules and operating principles of the social universe have utterly changed. That transformation will not be complete, by any means, but it will be far enough along that the basic features and outlines of 21st century social civilization will be present.

This is the only social world that she will ever know. For her, social connections will not end with the classroom and the home. Social connectivity is already edging toward a state where everyone is directly connected to everyone else, all six point eight billion of us, a world where each of us can directly forge a relationship with everyone else. Bey will not know any of the boundaries we consider natural and solid, the boundaries of the classroom, the suburb, the family, or the nation: under the pressure of this intense hyperconnectivity, all of those boundaries dissolve, or are blown over. Only connect. Connection is all that matters. The social instinct, hyperempowered and taken to an entirely new level by hyperconnectivity, is rewriting the rules of culture.

This world looks utterly alien to us, yet it is already here. Author William Gibson says, “The future is already here, it’s just not evenly distributed.” We have moments of hyperconnectivity – as in the thirty-six hours after the Sichuan earthquake, when text messaging and other tools for hyperconnectivity spontaneously created a Human Network, sharing news of the tragedy and working to locate missing people. Such moments are becoming more frequent, gradually merging into a continuum.

But what about Bey? What lessons can we offer her? She will learn everything she can from everyone, everywhere. She will span the planet for best practices in sociability, because she can, and because she must. She will outpace us in every way, because the simultaneous emergence of the Human Network and her own social capabilities makes her potent in ways we can’t wholly predict. Her powers will be greater, but that also means that her crash will be more spectacular – apocalyptic, really – when she tries something, and fails.

We do know this: just as Furby created a new ontological class of being, a nether zone between animate and inanimate which children instinctively recognized and embraced, Bey will be living a new ontology of sociability, connection and relationship. These girls, just on the verge of becoming young women, will lead the way into this new world. They will be the first masters of the Human Network.

I want to close this essay with both a warning – and a hope. The warning is simply this: these young women will be vastly more powerful than we are. Harnessing the immense energies of the Human Network will be, quite literally, child’s play to them. If they sense they are being wronged, and can build a network of peers who concur in this assessment, you will need to watch out, because they will have the capacity to destroy you with a word. We already see students threatening educators with damage to their reputations; multiply that a billion-fold and you can sense the potential for catastrophe. I am not saying that this will inevitably happen, only that it can.

At the same time, despite their thermonuclear potential, it would be a mistake to handle these kids too delicately. Children are all passion, but lack wisdom. Adults have plenty of wisdom, but, all too often, we lack passion.

We need to build strong relationships with these children, using the Human Network of hyperconnectivity, so that each of us can infect the other. We need their passion to move forward without fear in a world where the human universe has shifted beneath our feet. They desperately need our wisdom to guide them into healthy and stable relationships throughout the Human Network. To do this, we need to bring these kids inside our heads, and we need to get ourselves into theirs, so that, together, we can make sense of a world so new, and so different, that we all seem but little children in a big world.

Transforming Governance

My keynote address to the South Australian State Government conference, “The Digital Media Revolution”, in Adelaide, South Australia, 26 April 2008.


Synopsis: Sharing :: Hyperconnectivity

The Day TV Died

On the 18th of October in 2004, a UK cable channel, SkyOne, broadcast the premiere episode of Battlestar Galactica, writer-producer Ron Moore’s inspired revisioning of the decidedly campy 70s television series. SkyOne broadcast the episode as soon as it came off the production line, but its US production partner, the SciFi Channel, decided to hold off until January – a slow month for television – before airing the episodes. The audience for Battlestar Galactica, young and technically adept, made digital recordings of the broadcasts as they went to air, cut out the commercial breaks, then posted them to the Internet.

For an hour-long television programme, a lot of data needs to be dragged across the Internet, enough to clog up even the fastest connection. But these young science fiction fans used a new tool, BitTorrent, to speed the bits on their way. BitTorrent allows a large number of computers (in this case, over 10,000 computers were involved) to share the heavy lifting. Each of the computers downloaded pieces of Battlestar Galactica, and as each got a piece, it offered that piece up to any other computer which wanted a copy. Like a forest of hands trading puzzle pieces, each computer quickly assembled a complete copy of the show.
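To make that image concrete, here is a minimal Python sketch of the piece-trading idea. Everything in it – the peer count, the piece count, the one-piece-per-round rule – is invented for illustration; real BitTorrent adds trackers, cryptographic piece hashes and far cleverer scheduling:

```python
import random

NUM_PIECES = 100   # the episode, cut into pieces (illustrative number)
NUM_PEERS = 10     # a tiny swarm (real swarms run to thousands)

# One seeder starts with every piece; everyone else starts with none.
peers = [set(range(NUM_PIECES))] + [set() for _ in range(NUM_PEERS - 1)]

rounds = 0
while not all(len(p) == NUM_PIECES for p in peers):
    rounds += 1
    # Each round, every peer asks one random neighbour for one piece
    # it is missing -- the "forest of hands trading puzzle pieces".
    for have in peers:
        neighbour = random.choice(peers)
        wanted = neighbour - have
        if wanted:
            have.add(random.choice(sorted(wanted)))

print(f"every peer has a complete copy after {rounds} rounds")
```

Even this toy shows the essential property: pieces spread peer-to-peer, so the swarm’s total capacity grows with every member rather than bottlenecking on the original source.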

All of this happened within a few hours of Battlestar Galactica going to air. That same evening, on the other side of the Atlantic, American fans watched the very same episode that their fellow fans in the UK had just viewed. They liked what they saw, and told their friends, who also downloaded the episode, using BitTorrent. Within just a few days, perhaps a hundred thousand Americans had watched the show.

US cable networks regularly count their audience in hundreds of thousands. A million would be considered incredibly good. Executives at the SciFi Channel ran the numbers and assumed that the audience for this new and very expensive TV series had been seriously undercut by this international trafficking in television. They couldn’t have been more wrong. When Battlestar Galactica finally aired, it garnered the biggest audiences the SciFi Channel had ever seen – well over 3 million viewers.

How did this happen? Word of mouth. The people who had the chops to download Battlestar Galactica liked what they saw, and told their friends, most of whom were content to wait for SciFi Channel to broadcast the series. The boost given the series by its core constituency of fans helped it over the threshold from cult classic into a genuine cultural phenomenon. Battlestar Galactica has become one of the most widely-viewed cable TV series in history; critics regularly lavish praise on it, and yes, fans still download it, all over the world.

Although it might seem counterintuitive, the widespread “piracy” of Battlestar Galactica was instrumental to its ratings success. This isn’t the only example. BBC’s Dr. Who, leaked to BitTorrent by a (quickly fired) Canadian editor, drummed up another huge audience. It seems, in fact, that “piracy” is good. Why? We live in an age of fantastic media oversupply: there are always too many choices of things to watch, or listen to, or play with. But, if one of our friends recommends something, something they loved enough to spend the time and effort downloading, that carries a lot of weight.

All of this sharing of media means that the media titans – the corporations which produce and broadcast most of the television we watch – have lost control over their own content. Anything broadcast anywhere, even just once, becomes available everywhere, almost instantaneously. While that’s a revolutionary development, it’s merely the tip of the iceberg. The audience now has the ability to share anything they like – whether produced by a media behemoth, or made by themselves. YouTube has allowed individuals (some talented, some less so) to reach audiences numbering in the hundreds of millions. The attention of the audience, increasingly focused on what the audience makes for itself, has been draining ratings away from broadcasters, a drain which accelerates every time someone posts something funny, or poignant, or instructive to YouTube.

The mass media hasn’t collapsed, but it has been hollowed out. The audience occasionally tunes in – especially to watch something newsworthy, in real-time – but they’ve moved on. It’s all about what we’re saying directly to one another. The individual – every individual – has become a broadcaster in his or her own right. The mechanics of this person-to-person sharing, and the architecture of these “New Networks”, are driven by the oldest instincts of humankind.

The New Networks

Human beings are social animals. Long before we became human – or even recognizably close – we became social. For at least 11 million years, since before our ancestors broke off from the gorillas and chimpanzees, we have cultivated social characteristics. In social groups, these distant forebears could share the tasks of survival: finding food, raising young, and self-defense. Human babies, in particular, take many years to mature, requiring constantly attentive parenting – time stolen away from other vital activities. Living in social groups helped ensure that these defenseless members of the group grew to adulthood. The adults who best expressed social qualities bore more and healthier children. The day-to-day pressures of survival on the African savannahs drove us to be ever more adept with our social skills.

We learned to communicate with gestures, then (no one knows just how long ago) we learned to speak. Each step forward in communication reinforced our social relationships; each moment of conversation reaffirms our commitment to one another, every spoken word an unspoken promise to support, defend and extend the group. As we communicate, whether in gestures or in words, we build models of one another’s behavior. (This is why we can judge a friend’s reaction to some bit of news, or a joke, long before it comes out of our mouths.) We have always walked around with our heads full of other people, a tidy little “social network,” the first and original human network. We can hold about 150 other people in our heads (chimpanzees can manage about 30, gorillas about 15, but we’ve got extra brains they don’t have to help us with that), so, for 90% of human history, we lived in tribes of no more than about 150 individuals, each of us in constant contact, that constant communication building and reinforcing the bonds which would make us the most successful animals on Earth. We learned from one another, and shared whatever we learned; a continuity of knowledge passed down seamlessly, generation upon generation, a chain of transmission that still survives within the world’s indigenous communities. Social networks are the gentle strings which connect us to our origins.

This is the old network. But it’s also the new network. A few years ago, researcher Mizuko Ito studied teenagers in Japan, to find that these kids – all of whom owned mobile telephones – sent as many as a few hundred text messages, every single day, to the same small circle of friends. These messages could be intensely meaningful (the trials and tribulations of adolescent relationships), or just pure silliness; the content mattered much less than that constant reminder and reinforcement of the relationship. This “co-presence,” as she named it, represents the modern version of an incredibly ancient human behavior, a behavior that had been unshackled by technology, to span vast distances. These teens could send a message next door, or halfway across the country. Distance mattered not: the connection was all.

In 2001, when Ito published her work, many dismissed her findings as a by-product of those “wacky Japanese” and their technophile lust for new toys. But now, teenagers everywhere in the developed world do the same thing, sending tens to hundreds of text messages a day. When they run out of money to send texts (which they do, unless they have very wealthy parents), they simply move online, using instant messaging and MySpace and other techniques to continue the never-ending conversation.

We adults do it too, though we don’t recognize it. Most of us who live some of our lives online receive a daily dose of email: we flush the spam, answer the requests and queries of our co-workers, deal with any family complaints. What’s left over, from our friends, more and more often consists of nothing other than a link to something – a video, a website, a joke – somewhere on the Internet. This new behavior, actually as old as we are, dates from the time when sharing information ensured our survival. Each time we find something that piques our interest, we immediately think, “hmm, I bet so-and-so would really like this.” That’s the social network in our heads, grinding away, filtering our experience against our sense of our friends’ interests. We then hit the “forward” button, sending the tidbit along, reinforcing that relationship, reminding them that we’re still here – and still care. These “Three Fs” – find, filter and forward – have become the cornerstone of our new networks, information flowing freely from person to person, in weird and unpredictable ways, unbounded by geography or simultaneity (a friend can read an email weeks after you send it), but always according to long-established human behaviors.
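If you wanted to model the Three Fs in code, it might look something like this toy Python sketch – the friends, their interests, and the matching rule are all invented for the example:

```python
# A toy model of "find, filter, forward": each found item is matched
# against our mental model of each friend's interests.
friends = {
    "Dazza": {"footy", "comedy"},
    "Kylie": {"politics", "music"},
}

def filter_and_forward(title, tags):
    """Forward a found item to every friend whose interests overlap it."""
    for name, interests in friends.items():
        if interests & tags:                          # the "filter" step
            print(f"forwarding {title!r} to {name}")  # the "forward" step

# The "find" step happens out in the world; the rest is reflex:
filter_and_forward("Best try of the season", {"footy"})
```

The filtering is the social network in our heads doing its quiet work; the print statement stands in for the forward button.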

One thing is different about the new networks: we are no longer bounded by the number of individuals we can hold in our heads. Although we’ll never know more than 150 people well enough for them to take up some space between our ears (unless we grow huge, Spock-like minds) our new tools allow us to reach out and connect with casual acquaintances, or even people we don’t know. Our connectivity has grown into “hyperconnectivity”, and a single individual, with the right message, at the right time, can reach millions, almost instantaneously.

This simple, sudden, subtle change in culture has changed everything.

The Nuclear Option

On the 12th of May in 2008, a severe earthquake shook a vast area of southwestern China, centered in the province of Sichuan. Once the shaking stopped – in some places, it lasted as long as three minutes – people got up (when they could, as many lay trapped under collapsed buildings), dusted themselves off, and surveyed the damage. Those who still had power turned to their computers to find out what had happened, and to share what had happened to them. Some of these people used so-called “social messaging services”, which allowed them to share a short message – similar to a text message – with hundreds or thousands of acquaintances in their hyperconnected social networks.

Within a few minutes, people on every corner of the planet knew about the earthquake – well in advance of any reports from the Associated Press, the BBC, or CNN. This network of individuals, sharing information with each other through their densely hyperconnected networks, spread the news faster, more effectively, and more comprehensively than any global broadcaster.

This had happened before. On 7 July 2005, the first pictures of the wreckage caused by bombs detonated within London’s subway system found their way onto Flickr, an Internet photo-sharing service, long before being broadcast by the BBC. A survivor, walking past one of the destroyed subway cars, took snaps from her mobile and sent them directly on to Flickr, where everyone on the planet could have a peek. One person can reach everyone else, if what they have to say (or show) merits such attention, because that message, even if seen by only one other person, will be forwarded on and on, through our hyperconnected networks, until it has been received by everyone for whom that message has salience. Just a few years ago, it might have taken hours (or even days) for a message to traverse the Human Network. Now it happens in a few seconds.

Most messages don’t have a global reach, nor do they need one. It is enough that messages reach interested parties, transmitted via the Human Network, because just that alone has rewritten the rules of culture. An intemperate CEO screams at a consultant, who shares the story through his network: suddenly, no one wants to work for the CEO’s firm. A well-connected blogger gripes about problems with his cable TV provider, a story forwarded along until – just a half-hour later – he receives a call from a vice-president of that company, contrite with apologies and promises of an immediate repair. An American college student, arrested in Egypt for snapping some photos in the wrong place at the wrong time, text messages a single word – “ARRESTED” – to his social network, and 24 hours later, finds himself free, escorted from jail by a lawyer and the American consul, because his network forwarded this news along to those who could do something about his imprisonment.

Each of us, thoroughly hyperconnected, brings the eyes and ears of all of humanity with us, wherever we go. Nothing is hidden anymore, no secret safe. We each possess a ‘nuclear option’ – the capability to go wide, instantaneously, bringing the hyperconnected attention of the Human Network to a single point. This dramatically empowers each of us, a situation we are not at all prepared for. A single text message, forwarded perhaps a million times, organized the population of Xiamen, a coastal city in southern China, against a proposed chemical plant – despite the best efforts of the Chinese government to censor the message as it passed through the state-run mobile telephone network. Another message, forwarded around a community of white supremacists in Sydney’s southern suburbs, led directly to the Cronulla Riots, two days of rampage and attacks against Sydney’s Lebanese community, in December 2005.

When we watch or read stories about the technologies of sharing, they almost always center on recording companies and film studios crying poverty, of billions of dollars lost to ‘piracy’. That’s a sideshow, a distraction. The media companies have been hurt by the Human Network, but that’s only a minor side-effect of the huge cultural transformation underway. As we plug into the Human Network, and begin to share that which is important to us with others who will deem it significant, as we learn to “find the others”, reinforcing the bonds to those others every time we forward something to them, we dissolve the monolithic ties of mass media and mass culture. Broadcasters, who spoke to millions, are replaced by the Human Network: each of us, networks in our own right, conversing with a few hundred well-chosen others. The cultural consensus, driven by the mass media, which bound 20th-century nations together in a collective vision, collapses into a Babel-like configuration of social networks which know no cultural or political boundaries.

The bomb has already dropped. The nuclear option has been exercised. The Human Network brought us together, and broke us apart. But in these fragments and shards of culture we find an immense vitality, the protean shape of the civilization rising to replace the world we have always known. It all hinges on the transition from sharing to knowing.

The Nuclear Option

I.

One of the things I find the most exhilarating about Australia is the relative shallowness of its social networks. Where we’re accustomed to hearing about the “six degrees of separation” which connect any two individuals on Earth, in Australia we live with social networks which are, in general, about two levels deep. If I don’t know someone, I know someone who knows that someone.

While this may be slightly less true across the population as a whole (I may not know a random individual living in Kalgoorlie, and might not know someone who knows them) it is specifically quite true within any particular professional domain. After four years living in Sydney, attending and speaking at conferences throughout the nation, I’ve met most everyone involved in the so-called “new” media, and a great majority of the individuals involved in film and television production.

The most consequential of these connections sit in my address book, my endless trail of email, and my ever-growing list of Facebook friends. These connections evolve into relationships as we bat messages back and forth: emails and text messages, and links to the various interesting tidbits we find, filter and forward to those we imagine will gain the most from this informational hunting & gathering. Each transmission reinforces the bond between us – or, if I’ve badly misjudged you, ruptures that bond. The more we share with each other, the stronger the bond becomes. It becomes a covert network: invisible to the casual observer, but resilient and increasingly important to each of us. This is the network that carries gossip – Australians are great gossipers – as well as insights, opportunities, and news of the most personal sort.

In a small country, even one as geographically dispersed as Australia, this means that news travels fast. This is interesting to watch, and terrifying to participate in, because someone’s outrageous behavior is shared very quickly through these networks. Consider Roy Greenslade’s comments about Andrew Jaspan, at Friday’s “Future of Journalism” conference, which made their way throughout the nation in just a few minutes, via “live” blogs and texts, getting star billing in Friday’s Crikey. While Greenslade damned Jaspan, I was trapped in studio 21 at ABC Ultimo, shooting The New Inventors, yet I found out about his comments almost the moment I walked off set. Indeed, connected as I am to individuals such as Margaret Simmons and Rosanne Bersten (both of whom were at the conference) it would have been more surprising if I hadn’t learned about it.

All of this means that we Australians are under tremendous pressure to play nice – at least in public. Bad behavior (or, in this case, a terrifyingly honest assessment of a colleague’s qualifications) so excites the network of connections that it propagates immediately. And, within our tight little professional social networks, we’re so well connected that it propagates ubiquitously. Everyone to whom Greenslade’s comments were salient heard about them within a few minutes after he uttered them. There was a perfect meeting between the message and its intended audience.

That is a new thing.

II.

Over the past few months, I have grown increasingly enamoured with one of the newest of the “Web2.0” toys, a site known as “Twitter”. Twitter originally billed itself as a “micro-blogging” site: you can post messages (“tweets”, in Twitter parlance) of no more than 140 characters to Twitter, and these tweets are distributed to a list of “followers”. Conversely, you are sent the tweets created by all of the individuals whom you “follow”. One of the beauties of Twitter is that it is multi-modal; you can send a tweet via text message, through a web page, or from an ever-growing range of third-party applications. Twitter makes it very easy for a bright young programmer to access Twitter’s servers – which means people are now doing all sorts of interesting things with Twitter.
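How easy? In the API of that era, posting a tweet was a single authenticated HTTP request. The sketch below is illustrative rather than gospel: the credentials are placeholders, and both the endpoint and the HTTP Basic authentication it assumes have long since been replaced in the modern API:

```python
import requests  # a widely used third-party HTTP library

USER, PASSWORD = "example_user", "example_password"  # placeholders

def send_tweet(text):
    """Post a status update, in the style of the original REST API."""
    assert len(text) <= 140, "tweets are capped at 140 characters"
    resp = requests.post(
        "http://twitter.com/statuses/update.json",
        data={"status": text},
        auth=(USER, PASSWORD),   # HTTP Basic auth, as in the early API
    )
    resp.raise_for_status()
    return resp.json()

send_tweet("Question: Does anyone know how many Twitter users there are?")
```

A barrier to entry that low is a large part of why third-party Twitter applications multiplied so quickly.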

At the moment, Twitter is still in the domain of the early-adopters. Worldwide, there are only about a million Twitter users, with about 200,000 active in any week – and these folks are sending an average of three million tweets a day. That may not sound like many people, but these 200,000 “Twitteratti” are among the thought-leaders in new media. Their influence is disproportionate. They may not include the CIOs of the largest institutions in the world, but they do include the folks whom those CIOs turn to for advice. And whom do these thought-leaders turn to for advice? Twitter.

A simple example: When I sat down to write this, I had no idea how many Twitter users there are at present, so I posted the following tweet:

Question: Does anyone know how many Twitter users (roughly) there are at present? Thanks!

Within a few minutes, Stilgherrian (who writes for Crikey) responded with the following:

There are 1M+ Twitter users, with 200,000 active in any week.

Stilgherrian also passed along a link to his blog where he discusses Twitter’s statistics, and muses upon his increasing reliance on the service.

Before I asked the Twitteratti my question, I did the logical thing: I searched Google. But Google didn’t have any reasonably recent results – the most recent dated from about a year ago. No love from Google. Instead, I turned to my 250-or-so Twitter followers, and asked them. Given my own connectedness in the new media community in Australia, I have, through Twitter, access to an enormous reservoir of expertise. If I don’t know the answer to a question – and I can’t find an answer online – I do know someone, somewhere, who has an answer.

Twitter, gossipy, noisy, inane and frequently meaningless, acts as my 21st-century brain trust. With Twitter I have immediate access to a broad range of very intelligent people, whose interests and capabilities overlap mine enough that we can have an interesting conversation, but not so completely that we have nothing to share with one another. Twitter extends my native capability by giving me a high degree of continuous connectivity with individuals who complement those capabilities.

That’s a new thing, too.

William Gibson, the science fiction author and keen social observer, once wrote, “The street finds its own uses for things – uses the manufacturers never intended.” The true test of the value of any technology is, “Does the street care?” In the case of Twitter, the answer is a resounding “Yes!”. This personal capacity enhancement – or, as I phrase it, “hyperempowerment” – is not at all what Twitter was designed to do. It was designed to facilitate the posting of short, factual messages. The harvesting of the expertise of my oh-so-expert social network is a behavior that grew out of my continued interactions with Twitter. It wasn’t planned for, either by Twitter’s creators, or by me. It just happened. And not every Twitter user puts Twitter to this use. But some people, who see what I’m doing, will copy my behavior (which probably didn’t originate with me, though I experienced a penny-drop moment when I realized I could harvest expertise from my social network using Twitter), because it is successful. This behavior will quickly replicate, until it’s a bog-standard expectation of all Twitter users.

III.

On Monday morning, before I sat down to write, I checked the morning’s email. Several had come in from individuals in the US, including one from my friend GregoryP, who spent the last week sweating through the creation of a presentation on the value of social media. As many of you know, companies often hire outside consultants, like GregoryP, when the boss needs to hear something that his or her underlings are too afraid to say themselves. Such was the situation that GregoryP walked into, with sadly familiar results. From his blog:

As for that “secret” company – it seems fairly certain to me that I won’t be working for any dot-com pure plays in the near future. As I touched on in my Twitter account, my presentation went well but the response to it was something more than awful. As far as I could tell, the generally-absent Director of the Company wasn’t briefed on who I was or why I was there, exactly – she took the opportunity to impugn my credibility and credentials and more or less acted as if I’d tried to rip her company off.

I immediately read GregoryP’s Twitter stream, to find that he had been used, abused and insulted by the MD in question.

Which was a big, big mistake.

GregoryP is not very well connected on Twitter. He’s only just started using it. A fun little website, TweetWheel, shows all nine of his connections. But two of his connections – to Raven Zachary and myself – open into a much, much wider world of Twitteratti. Raven has over 600 people following his tweets, and I have over 250 followers. Both of us are widely-known, well-connected individuals. Both of us are good friends with GregoryP. And both of us are really upset at the bad treatment he received.

Here’s how GregoryP finished off that blog post:

Let’s just say it’ll be a cold day in hell before I offer any help, friendly advice or contacts to these people. I’d be more specific about who they are but I wouldn’t want to give them any more promotion than I already have.

What’s odd here – and a sign that the penny hasn’t really dropped – is that GregoryP doesn’t really understand that “promotion” isn’t so much a beneficial influence as a chilling threat to lay waste to this company’s business prospects. This MD saw GregoryP standing before her, alone and defenseless, bearing a message that she was of no mind to receive, despite the fact that her own staff set this meeting up, for her own edification.

What this unfortunate MD did not see – because she does not “get” social media – was Raven and myself, directly connected to GregoryP. Nor does she see the hundreds of people we connect directly to, nor the tens of thousands connected directly to them. She thought she was throwing her weight around. She was wrong. She was making an ass out of herself, behaving very badly in a world where bad behavior is very, very hard to hide.

All GregoryP need do, to deliver the coup de grace, is reveal the name of the company in question. As word spread – that is, nearly instantaneously – that company would find it increasingly difficult to recruit good technology consultants, programmers, and technology marketers, because we all share our experiences. Sharing our experiences improves our effectiveness, and prevents us from making bad decisions. Such as working with this as-yet-unnamed company.

The MD walked into this meeting believing she held all the cards; in fact, GregoryP is the one with his finger poised over the launch button. With just a word, he could completely ruin her business. This utter transformation in power politics – “hyperconnectivity” leading to hyperempowerment – is another brand new thing. This brand new thing is going to change everything it touches, every institution and every relationship any individual brings to those institutions. Many of those institutions will not survive, because their reputations will not be able to withstand the glare of hyperconnectivity backed by the force of hyperempowerment.

The question before us today is not, “Who is the audience?”, but rather, “Is there anyone who isn’t in the audience?” As you can now see, a single individual – anywhere – is the entire audience. Every single person is now so well-connected that anything which happens to them or in front of them reaches everyone it needs to reach, almost instantaneously.

This newest of new things has only just started to rise up and flex its muscles. The street, ever watchful, will find new uses for it, uses that corporations, governments and institutions of every stripe will find incredibly distasteful, chaotic, and impossible to manage.

Unevenly Distributed: Production Models for the 21st Century

I. The Wheels Fall Off the Cart

In mid-1994, sometime shortly after Tony Parisi and I had fused the new technology of the World Wide Web to a 3D visualization engine, to create VRML, we paid a visit to the University of California, Santa Cruz, about 120 kilometers south of San Francisco. Two UCSC students wanted to pitch us on their own web media project. The Internet Underground Music Archive, or IUMA, featured a simple directory of artists, complete with links to MP3 files of these artists’ recordings. (Before I go any further, I should state that they had all the necessary clearances to put musical works up onto the Web – IUMA was not violating anyone’s copyrights.) The idea behind IUMA was simple enough, the technology absolutely straightforward – and yet, for all that, it was utterly revolutionary. Anyone, anywhere could surf over to the IUMA site, pick an artist, then download a track and play it.

This was in the days before broadband, so downloading a multi-megabyte MP3 recording could take upwards of an hour per track – something that seems ridiculous today, but was still so potent back in 1994 that IUMA immediately became one of the most popular sites on the still-quite-tiny Web. The founders of IUMA – Rob Lord and Jon Luini – wanted to create a place where unsigned or non-commercial musicians could share their music with the public in order to reach a larger audience, gain recognition, and perhaps even end up with a recording deal. IUMA was always better as a proof-of-concept than as a business opportunity, but the founders did get venture capital, and tried to make a go of selling music online. However, given the relative obscurity of the musicians on IUMA, and the pre-iPod lack of pervasive MP3 players, IUMA ran through its money by 2001, shuttering during the dot-com implosion of that year. Despite that, every music site which followed IUMA, legal and otherwise, from Napster to Rhapsody to iTunes, has walked in its footsteps. Now, nearing the end of the first decade of the 21st century, we have a broadband infrastructure capable of delivering MP3s, and several hundred million devices which can play them. IUMA was a good idea, but five years too early.

Just forty-eight hours ago, a new music service, calling itself Qtrax, aborted its international launch – though it promises to be up “real soon now.” Qtrax also promises that anyone, anywhere will be able to download any of its twenty-five million songs perfectly legally, and listen to them practically anywhere they like – along with an inserted advertisement. Using peer-to-peer networking to relieve the burden on its own servers, and Digital Rights Management, or DRM, Qtrax ensures that there are no abuses of these pseudo-free recordings.

Most of the words that I used to describe Qtrax in the preceding paragraph didn’t exist in common usage when IUMA disappeared from the scene in the first year of this millennium. The years between IUMA and Qtrax are a geological age in Internet time, so it’s a good idea to walk back through that era and have a good look at the fossils which speak to how we evolved to where we are today.

In 1999, a curly-haired undergraduate at Boston’s Northeastern University built a piece of software that allowed him to share his MP3 collection with a few of his friends on campus, and allowed him access to their MP3s. This software scanned the MP3s on each hard drive, published the list to a shared database, and allowed each person using the software to download MP3s from someone else’s hard drive to his own. This is simple enough, technically, but Shawn Fanning’s Napster created a dual-headed revolution. First, it was the killer app for broadband: using Napster on a dial-up connection was essentially impossible. Second, it completely ignored the established systems of distribution used for recorded music.
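The architecture is worth a sketch, because the central database that made Napster fast is also what later made it mortal. Roughly, in Python (all names invented for illustration):

```python
# A toy of Napster's design: one central index, peer-to-peer transfers.
central_index = {}   # track name -> list of peers who hold a copy

def register(peer, tracks):
    """Each client scans its hard drive and publishes its track list."""
    for track in tracks:
        central_index.setdefault(track, []).append(peer)

def search(track):
    """Every search hits the central server -- a single point of failure."""
    return central_index.get(track, [])

register("alice", ["song_a.mp3", "song_b.mp3"])
register("bob", ["song_b.mp3"])
print(search("song_b.mp3"))   # -> ['alice', 'bob']; the download itself
                              #    then runs peer-to-peer
```

Unplug the machine holding `central_index` and every search fails at once – a detail that becomes important shortly.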

This second point is the one which has the most relevance to my talk this morning; Napster had an entirely unpredicted effect on the distribution methodologies which had been the bedrock of the recording industry for the past hundred years. The music industry grew up around the licensing, distribution and sale of a physical medium – a piano roll, a wax recording, a vinyl disk, a digital compact disc. However, when the recording industry made the transition to CDs in the 1980s (and reaped windfall profits as the public purchased new copies of older recordings), they also signed their own death warrants. Digital recordings are entirely ephemeral, composed only of mathematics, not of matter. Any system which transmitted the mathematics would suffice for the distribution of music, and the compact disc met this need only until computers were powerful enough to play the more compact MP3 format, and broadband connections were fast enough to allow these smaller files to be transmitted quickly. Napster leveraged both of these criteria – the mathematical nature of digitally-encoded music and the prevalence of broadband connections on America’s college campuses – to produce a sensation.

In its earliest days, Napster reflected the tastes of its college-age users, but, as word got out, the collection of tracks available through Napster grew more varied and more interesting. Many individuals took recordings that were only available on vinyl, and digitally recorded them specifically to post them on Napster. Napster quickly had a more complete selection of recordings than all but the most comprehensive music stores. This only attracted more users to Napster, who added more oddities from their own collections, which attracted more users, and so on, until Napster came to be seen as the authoritative source for recorded music.

Given that all of this “file-sharing”, as it was termed, happened outside of the economic systems of distribution established by the recording industry, it was taking money out of their pockets – perhaps billions of dollars a year, had all of these downloads been converted into sales. (Studies indicate this was unlikely – college students have ever been poor.) The recording industry launched a massive lawsuit against Napster in 2000, forcing the service to shutter in 2001, just as it reached an incredible peak of 14 million simultaneous users, out of a worldwide broadband population of probably only 100 million. This means that one in seven computers connected to the broadband Internet was using Napster just as it was being shut down.

Here’s where it gets more interesting: the recording industry thought they’d brought the horse back into the barn. What they hadn’t realized was that the gate had burnt down. The millions of Napster users had had their appetites whetted by a world where an incredible variety of music was instantaneously available with a few clicks of the mouse. In the absence of Napster, that pressure remained, and it only took a few weeks for a few enterprising engineers to create a successor to Napster, known as Gnutella, which provided the same service as Napster, but used a profoundly different technology for its file-sharing. Where Napster had all of its users register their tracks within a centralized database (which disappeared when Napster was shut down), Gnutella created a vast, amorphous, distributed database, spread out across all of the computers running Gnutella. Gnutella had no center to strike at, and therefore could not be shut down.
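The difference is easy to see in a sketch. Instead of consulting a central index, a Gnutella-style query floods from neighbour to neighbour until its time-to-live expires; the topology and hop limit below are invented for the example:

```python
# A toy Gnutella-style flood: each peer knows only its neighbours.
neighbours = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "dave"],
    "carol": ["alice"],
    "dave":  ["bob"],
}
libraries = {
    "alice": set(),
    "bob":   {"rare_vinyl_rip.mp3"},
    "carol": set(),
    "dave":  set(),
}

def flood_search(start, track, ttl=3, seen=None):
    """Ask our neighbours, who ask theirs, until the TTL runs out."""
    seen = seen if seen is not None else set()
    hits = []
    for peer in neighbours[start]:
        if peer in seen or ttl == 0:
            continue
        seen.add(peer)
        if track in libraries[peer]:
            hits.append(peer)
        hits += flood_search(peer, track, ttl - 1, seen)
    return hits

print(flood_search("carol", "rare_vinyl_rip.mp3"))  # -> ['bob']
```

There is no server to subpoena: the “database” is nothing more than the sum of whatever each peer happens to be sharing at that moment.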

It is because of the actions of the recording industry that Gnutella was developed. If legal pressure hadn’t driven Napster out of business, Gnutella would not have been necessary. The recording industry turned out to be its own worst enemy, because it turned a potentially profitable relationship with its customers into an ever-escalating arms race of file-sharing tools, lawsuits, and public relations nightmares.

Once Gnutella and its descendants – Kazaa, Limewire, and Acquisition – arrived on the scene, the listening public had wholly taken control of the distribution of recorded music. Every attempt to shut down these ever-more-invisible “darknets” has ended in failure, and has only spurred the continued growth of these networks. Now, with Qtrax, the recording industry is seeking to make an accommodation with an audience which expects music to be both free and freely available, falling back on advertising revenue to recover some of their production costs.

At first, it seemed that filmic media would be immune from the disruptions that have plagued the recording industry – films and TV shows, even when heavily compressed, are very large files, on the order of hundreds of millions of bytes of data. Systems like Gnutella, which transfer a file directly from one computer to another, are not particularly well-suited to such large transfers. In 2002, an unemployed programmer named Bram Cohen solved that problem definitively with the introduction of a new file-sharing system known as BitTorrent.

BitTorrent is a bit mysterious to most everyone not deeply involved in technology, so a brief explanation will help to illuminate its inner workings. Suppose, for a moment, that I have a short film, just 1000 frames in length, digitally encoded on my hard drive. If I wanted to share this film with each of you via Gnutella, you’d have to wait in a queue as I served up the film, time and time again, to each of you. The last person in the queue would wait quite a long time. But if, instead, I gave the first ten frames of the film to the first person in the queue, and the second ten frames to the second person in the queue, and the third ten frames to the third person in the queue, and so on, until I’d handed out all thousand frames, all I need do at that point is tell each of you that your “peers” have the missing frames, and that you need to get them from those peers. A flurry of transfers would result, as each peer picked up the pieces it needed to make a complete whole from the other peers. From my point of view, I only had to transmit the film once – something I can do relatively quickly. From your point of view, none of you had to queue to get the film – because the pieces were scattered widely around, in little puzzle pieces that you could gather together on your own.

That’s how BitTorrent works. It is both incredibly efficient and incredibly resilient – peers can come and go as they please, yet the total number of peers guarantees that somewhere out there an entire copy of the film is available at all times. And, even more perversely, the more people who want copies of my film, the easier it is for each successive person to get a copy – because there are more peers to grab pieces from. This group of peers, known as a “swarm”, is the most efficient system yet developed for the distribution of digital media. In fact, a single, underpowered computer, on a single, underpowered broadband link, can, via BitTorrent, create a swarm of peers. BitTorrent allows anyone, anywhere, to distribute any large media file at essentially no cost.
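If you would like to see the arithmetic of the swarm in action, here is a toy simulation in Python – a sketch of the piece-swapping idea described above, not the real BitTorrent protocol (no trackers, no hashing, no tit-for-tat, and all names and numbers are mine): the seeder uploads each piece exactly once, and the peers then complete the film by trading with one another.

    import random

    PIECES = list(range(100))   # a short film, split into 100 numbered pieces

    class Peer:
        def __init__(self, name):
            self.name = name
            self.have = set()   # the pieces this peer currently holds

        def missing(self):
            return set(PIECES) - self.have

    def simulate(n_peers=10):
        peers = [Peer(f"peer{i}") for i in range(n_peers)]

        # Step 1: the seeder deals each piece out to exactly one peer,
        # round-robin -- so the seeder uploads the whole film only once.
        for i, piece in enumerate(PIECES):
            peers[i % n_peers].have.add(piece)

        # Step 2: the swarm takes over. Each round, every peer fetches
        # one missing piece from whichever peer happens to hold it.
        rounds = 0
        while any(p.missing() for p in peers):
            rounds += 1
            for p in peers:
                wanted = p.missing()
                if not wanted:
                    continue
                piece = random.choice(list(wanted))
                # some peer always holds it, since step 1 scattered
                # every piece somewhere in the swarm
                if any(piece in q.have for q in peers if q is not p):
                    p.have.add(piece)
        print(f"{n_peers} peers each assembled the full film "
              f"after {rounds} rounds of trading")

    simulate()

The point to notice is that the seeder’s work is fixed no matter how many peers join; the peers themselves do the rest of the distribution.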

It is estimated that upwards of 60% of all traffic on the Internet is composed of BitTorrent transfers. Much of this traffic is perfectly legitimate – software, such as the free Linux operating system, is distributed using BitTorrent. Still, it is well known that movies and television programmes are also distributed using BitTorrent, in violation of copyright. This became absolutely clear on the 14th of October 2004, when Sky Broadcasting in the UK premiered the first episode of Battlestar Galactica, Ron Moore’s dark re-imagining of the famously schlocky 1970s TV series. Because the American distributor, SciFi Channel, had chosen to hold off until January to broadcast the series, fans in the UK recorded the programmes and posted them to BitTorrent for American fans to download. Hundreds of thousands of copies of the episodes circulated in the United States – and conventional thinking would reckon that this would seriously hurt the ratings of the show upon its US premiere. In fact, precisely the opposite happened: the show was so well written and produced that the word-of-mouth engendered by all this mass piracy created an enormous broadcast audience for the series, making it the most successful in SciFi Channel history.

In the age of BitTorrent, piracy is not necessarily a menace. The ability to “hyperdistribute” a programme – using BitTorrent to send a single copy to millions of people around the world, efficiently and instantaneously – creates an environment where the more something is shared, the more valuable it becomes. This seems counterintuitive, but only in the context of systems of distribution which were part-and-parcel of the scarce exhibition outlets of theaters and broadcasters. Once everyone, everywhere had the capability to “tune into” a BitTorrent broadcast, the economics of distribution were turned on their heads. The distribution gatekeepers, stripped of their power, whinge about piracy. But, as was the case with recorded music, the audience has simply asserted its control over distribution. This is not about piracy. This is about the audience getting whatever it wants, by any means necessary. They have the tools, they have the intent, and they have the power of numbers. It is foolishness to insist that the future will be substantially different from the world we see today. We cannot change the behavior of the audience. Instead, we must all adapt to things as they are.

But things as they are have changed more than you might know. This is not the story of how piracy destroyed the film industry. This is the story of how the audience became not just the distributors but the producers of their own content, and, in so doing, brought down the high walls which separate professionals from amateurs.

II. The Barbarian Hordes Storm the Walls

Without any doubt the most outstanding success of the second phase of the Web (known colloquially as “Web 2.0”) is the video-sharing site YouTube. Founded in early 2005, YouTube was, as of yesterday, the third most visited site on the entire Web, behind only Yahoo! and YouTube’s parent, Google. There are a lot of videos on YouTube. I’m not sure anyone knows quite how many, but they easily number in the tens of millions, quite likely approaching a hundred million. Another hundred thousand videos are uploaded each day; YouTube grows by three million videos a month. That’s a lot of video, difficult even to contemplate. But an understanding of YouTube is essential for anyone in the film and television industries in the 21st century, because, in the most pure, absolute sense, YouTube is your competitor.

Let me unroll that statement a bit, because I don’t wish it to be taken as simply as it sounds. It’s not that YouTube is competing with you for dollars – it isn’t, at least not yet – but rather that it is competing for attention. Attention is the limiting factor for the audience; we are cashed up but time-poor. Yet, even as we’ve become so time-poor, the number of options for how we can spend that time entertaining ourselves has grown so grotesquely large as to be almost unfathomable. This is the real lesson of YouTube, the one I want you to consider in your deliberations today. In just the past three years we have gone from an essential scarcity of filmic media – presented through limited and highly regulated distribution channels – to a hyperabundance of viewing options.

This hyperabundance of choices, it was supposed until recently, would lead to a sort of “decision paralysis,” whereby the viewer would be so overwhelmed by the number of choices on offer that they would simply run back, terrified, to the highly regularized offerings of the old-school distribution channels. This has not happened; in fact, the opposite has occurred: the audience is fragmenting, breaking up into ever-smaller “microaudiences”. It is these microaudiences that YouTube speaks to directly. The language of microaudiences is YouTube’s native tongue.

In order to illustrate the transformation that has completely overtaken us, let’s consider a hypothetical fifteen-year-old boy, home after a day at school. He is multitasking: texting his friends, posting messages on Bebo, chatting away on IM, surfing the web, doing a bit of homework, and probably taking in some entertainment. That might be coming from a television, somewhere in the background, or it might be coming from the Web browser right in front of him. (Actually, it’s probably both simultaneously.) This teenager has a limited suite of selections available on the telly – even with satellite or cable, there won’t be more than a few hundred choices on offer, and he’s probably settled for something that, while not incredibly satisfying, is good enough to play in the background.

Meanwhile, on his laptop, he’s viewing a whole series of YouTube videos that he’s received from his friends; they’ve found these videos in their own wanderings and immediately forwarded them along, knowing that he’ll enjoy them. He views them and laughs, then forwards them along to other friends, who will laugh and forward them along to others, and so on. Sharing is an essential quality of all of the media this fifteen-year-old has ever known. In his eyes, if it can’t be shared, a piece of media loses most of its value. If it can’t be forwarded along, it’s broken.

For this fifteen-year-old, the concept of a broadcast network no longer exists. Television programmes might be watched as they’re broadcast over the airwaves, but more likely they’re spooled off a digital video recorder, or downloaded from a torrent and watched where and when he chooses. The broadcast network has been replaced by the social network of his friends, all of whom are constantly sharing the newest, coolest things with one another. The current hot item might be something that was created at great expense for a mass audience, but the relationship between a hot piece of media and its meaningfulness for a microaudience is purely coincidental. All the marketing dollars in the world can foster some brand awareness, but no amount of money will inspire that fifteen-year-old to forward something along – because his social standing hangs in the balance. If he passes along something lame, he’ll lose standing with his peers. This factors into every decision he makes, from the brand of runners he wears to the television series he chooses to watch. Because of the hyperabundance of media – something he takes as a given, not as an incredibly recent development – all of his media decisions are weighed against the values and tastes of his social network, rather than against a scarcity of choices.

This means that the true value of media in the 21st century is entirely personal, based upon the salience – that is, the importance – of that media to the individual and that individual’s social network. The mass market, with its enforced scarcity, simply does not enter into his calculations. Yes, he might go to the theatre to see Transformers with his mates; but he’s just as likely to download a copy recorded with a camera smuggled into a cinema and uploaded to The Pirate Bay a few hours after the film’s release.

That’s today. Now let’s project ourselves five years into the future. YouTube is still around, but now it has more than two hundred million videos (probably many more), all available, all the time, from short-form clips to full-length features, many of them in high-definition. There’s so much “there” there that it is inconceivable that the conventional media distribution mechanisms of exhibition and broadcast could compete. For this twenty-year-old, every decision to spend some of his increasingly-valuable attention watching anything is measured against salience: “How important is this for me, right now?” When he weighs the latest episode of a TV series against some newly-made video that is meant only to appeal to a few thousand people – such as himself – that video will win, every time. It more completely satisfies him. As the number of videos on offer through YouTube and its competitors continues to grow, the number of salient choices grows ever larger. His social network, communicating now through Facebook and MySpace and next-generation mobile handsets and iPods and goodness-knows-what-else, is constantly delivering an ever-growing and increasingly-relevant suite of media options. He, as a vital node within his social network, is doing his best to give as good as he gets. His reputation depends on being “on the tip.”

When the barriers to media distribution collapsed in the post-Napster era, the exhibitors and broadcasters lost control of distribution. What no one had expected was that the professional producers would also lose control of production. The difference between an amateur and a professional – in the media industries – has always centered on the fact that the professional sells their work into distribution, while the amateur uses wits and will to self-distribute. Now that self-distribution is more effective than professional distribution, how do we distinguish between the professional and the amateur? This twenty-year-old doesn’t know, and doesn’t care.

There is no conceivable way that the current systems of film and television production and distribution can survive in this environment. This is an uncomfortable truth, but it is the only truth on offer this morning. I’ve come to this conclusion slowly, because it seems to spell the death of a hundred-year-old industry employing many, many creative professionals. In this environment, television is already rediscovering its roots as a live medium, increasingly focusing on news, sport and “event”-based programming, such as Pop Idol, where being there live is the essence of the experience. Broadcasting is uniquely suited to the efficient distribution of live programming. Hollywood will continue to churn out blockbuster after blockbuster, seeking a warmed-over middle ground of thrills and chills which ensures that global receipts will cover ever-increasing production costs. In this form, both industries will continue for some years to come, and will probably continue to generate nice profits. But the audience’s attention has turned elsewhere. It is not returning.

This future almost completely excludes “independent” production, a vague term which basically means any production that takes place outside the media megacorporations (News Corp, Disney, Sony, Universal and TimeWarner) which increasingly dominate the mass media landscape. Outside of their corporate embrace, finding an audience sufficient to cover production and marketing costs has become increasingly difficult. Film and television have long been losing economic propositions (except for the luckiest), but they are now becoming financially suicidal. National and regional funding bodies are growing increasingly intolerant of funding productions which cannot find an audience; soon enough that pipeline will be cut off, despite the damage to national cultures. Australia funds the Film Finance Corporation and the Australian Film Commission to the tune of a hundred million dollars a year, to ensure that Australian stories are told by Australian voices; but Australians don’t go to see them in the theatres, and don’t buy them on DVD.

The center cannot hold. Instead, YouTube – which founder Steve Chen insists has “no gold standard” of production values – is rapidly becoming the vehicle for independent productions; productions which cost not millions of euros but hundreds, and which make up for their low production values in salience and in overwhelming numbers. This tsunami of content cannot be stopped or even slowed down; it has little to do with piracy (only nine percent of the videos viewed on YouTube are violations of copyright) and everything to do with the natural accommodation of the audience to an era of media hyperabundance.

What, then, is to be done?

III. And The Penny Drops

It isn’t all bad news. But, like a good doctor, I want to give you the bad news right up front: there is no single, long-term solution for film or television production. No panacea. It’s not even clear that the massive Hollywood studios will be able to carry on business as usual for any length of time into the future. Just a decade ago the music recording industry seemed impregnable. Now it lies in ruins. To assume that history won’t repeat itself is more than willful ignorance of the facts; it’s bad business.

This means that the one-size-fits-all production-to-distribution model, which all of you have been taught as the orthodoxy of the media industries, is worse than useless; it is actively blocking your progress, because it keeps you from thinking outside the square. This is a wholly new world, one littered with golden opportunities for those able to avail themselves of them. We need to get you from where you are – bound to an obsolete production model – to where you need to be. Let me illustrate this transition with two examples.

In early 2005, producer Rhonda Byrne secured a production agreement with Channel Nine, then the number-one Australian television network, to make a feature-length television programme about the “law of attraction”, an idea she’d learned of when reading a book published in 1910, The Science of Getting Rich. The interviews and other footage were shot in July and August, and after a few months in the editing suite, she showed the finished production to executives at Channel Nine, who declined to broadcast it, believing it lacked mass appeal. Since Byrne wasn’t going to get broadcast fees from Nine to cover her production costs, she negotiated a new deal allowing her to sell DVDs of the completed film.

At this point Byrne began spreading news of the film virally, through the communities she thought would be most interested in viewing it – specifically, spiritual and “New Age” communities. People excited by Byrne’s teaser marketing could pay $20 for a DVD copy of the film (with extended features), or $5 to watch a streaming version directly on their computer. As the film made its way to its intended audience, word-of-mouth caused business to mushroom overnight. The Secret became a blockbuster, selling millions of copies on DVD. A companion book, also titled The Secret, has sold over two million copies. And that arbiter of American popular taste, Oprah, has featured the film and book on her talk show, praising both to the skies. The film has earned back many, many times its production costs, making Byrne a wealthy woman. She’s already deep into production of a sequel to The Secret – a film which already has an audience identified and targeted.

Chagrined, the television executives at Channel Nine finally did broadcast The Secret, in February 2007. It didn’t do that well. This sums up the paradox of distribution in the age of the microaudience. Clearly The Secret had a massive worldwide audience, but television wasn’t the most effective way to reach them, because this audience was actually a collection of microaudiences, rather than a single, aggregated audience. If The Secret had opened theatrically, it’s unlikely it would have done terribly well; it’s the kind of film that people want to watch more than once, being in equal parts a self-help handbook and a series of inspirational stories. It is well-suited to a direct-to-DVD release – a distribution vehicle that no longer carries the stigma of “failure” – and to cross-media projects such as books, conferences, streamed delivery, podcasts, and so forth. Having found her audience, Byrne has transformed The Secret into an exceptional money-making franchise, as lucrative, in its own way, and at its own scale, as any Hollywood franchise.

The second example is utterly different from The Secret, yet the fundamentals are strikingly similar. Just last month a production group calling themselves “The League of Peers” released a film titled Steal This Film, Part Two. The first part of this film, released in late 2006, dealt with the rise of file-sharing, and, specifically, with the legal troubles of the world’s largest BitTorrent site, Sweden’s The Pirate Bay. That film, although earnest and coherent, felt as though it had been produced by individuals still learning the craft of filmmaking. This latest film looks as professional as any documentary created for the BBC’s Horizon, PBS’s Frontline or the ABC’s Four Corners. It is slick, well-lit, well-edited, and has a very compelling story to tell about the history of copying – beginning with the invention of the printing press, five hundred years ago. Steal This Film is a political production, a bit of propaganda with a clear bias. This, in itself, is not uncommon in a documentary. The funding and distribution model for this film is what makes it nearly unique.

Individuals who saw Steal This Film, Part One – which was made freely available for download via BitTorrent – were invited to contribute to the making of the sequel. Nearly five million people downloaded Steal This Film, Part One, so there was a substantial base of contributors to draw from. (I myself donated five dollars after viewing the film. If every viewer had done likewise, that would cover the budget of a major Hollywood production!) The League of Peers also approached arts funding bodies, such as the British Documentary Council, with their completed film in hand, statistics showing that their work had reached a large audience, and a roadmap for the second film – and this got them additional funding. Now that Steal This Film, Part Two has been released, viewers are again invited to contribute (if they like the film), and are promised a “secret gift” for contributions of $15 or more. While the tip jar – literally, busking – may seem a very weird way to fund a film production, it’s likely that Steal This Film, Part Two will find an even wider audience than Part One, and that the coffers of the League of Peers will provide them with enough funds to embark on their next film, The Oil of the 21st Century, which will focus on the evolution of intellectual property into a traded commodity.

I have asked Screen Training Ireland to include a DVD of Steal This Film, Part Two with the materials you received this morning. You’ve been given the DVD version of the film, but I encourage you to download the other versions: the XviD version, for playback on a PC; the iPod version, for portable devices; and the high-definition version, for your visual enjoyment. It’s proof positive that a viable economic model exists for film, even when it is given away. It will not work for all productions, but there is a global community of individuals who are intensely interested in factual works about copyright and intellectual property in the 21st century, who find these works salient, and who are underserved by the media megacorporations, which would not consider it in their own economic best interest to produce or distribute such works. The League of Peers, as part of the community for whom this film is intended, knew how to get the word out about the film (particularly through Boing Boing, the most popular blog in the world, with two million readers a week), and, within a few weeks, nearly everyone who should have heard of the film had heard about it – through their social networks.

Both The Secret and Steal This Film, Part Two are factual works, and it’s clear that this emerging distribution model – which relies on targeting communities of interest – works best with factual productions. One reason there has been such an upsurge in the production of factual works over the past few years is that these works have been able to build their own funding models upon a deep knowledge of the communities they are talking to – made by microaudiences, for microaudiences. But microaudiences, scaled to global proportions, can easily number in the millions. Microaudiences are perfectly willing to pay for, or contribute to, something they consider of particular value and salience; it is a visible thank-you, a form of social reinforcement which comes very naturally within social networks.

What about drama, comedy and animation? Short-form comedy and animation probably have the easiest go of it, because they can be delivered online with an advertising payload of some sort. Happy Tree Friends is a great example of how this works – though it took producer Mondo Media nearly a decade to stumble into a successful economic model. Feature-length comedy and feature-length drama are more difficult nuts to crack, but they are not impossible. Again, the key is to find the communities which will be most interested in the production; this is not always obvious, but the filmmaker should have some idea of the target audience for their film. While in preproduction, these communities need to be wooed and seduced into believing that this film is meant just for them, that it is salient. Productions can then be released through complementary distribution channels: a limited, occasional run in rented exhibition spaces (which can be “events”, created to promote and showcase the film); direct DVD sales (which are highly lucrative if the producer sells directly); online distribution vehicles such as the iTunes Movie Store; and “community” viewing, where a DVD is given to a few key members of the community in the hope that word-of-mouth will spread, generating further DVD sales.

None of this guarantees success, but it is the way things work for independent productions in the 21st century. All of this is new territory. It isn’t a role that belongs neatly to the producer of the film, nor, in the absence of studio muscle, is it something that a film distributor would be competent at. It may not be the producer’s job, but it is someone’s job, and someone must do it. Starting at the earliest stages of pre-production, someone has to sit down with the creatives and the producer and ask the hard questions: “Who is this film intended for?” “Which audiences will want to see this film – or see it more than once?” “How do we reach those audiences?” From these first questions, it should be possible to construct a marketing campaign which leverages microaudiences and social networks into ticket receipts, DVD sales and online purchases.

So, as you sit down to do your planning today, and discuss how to move the Irish screen industries into the 21st century, ask yourselves who will fill this role. The producer is already overloaded, time-poor, and may not be particularly good at marketing. The director has a vision, but may be hopeless when it comes to working with communities. This is a new role, one that is utterly vital to the success of a production, but one which is not yet budgeted for, and one which we do not yet train people to fill. Individuals have succeeded in this new model through their own tireless efforts, but each of these efforts has been scattershot; there is a way to systematize this. While every production and every marketing plan will be unique – drawn from the fundamentals of the story being told – there are commonalities across productions which people will be able to absorb and apply, production after production.

One of my favorite quotes from science fiction writer William Gibson goes, “The future is already here, it’s just not evenly distributed.” This is so obviously true for film and television production that I need only close by noting that there are plenty of success stories out there: individuals who have taken the new laws of hyperdistribution and sharing and turned them to their own advantage. It is a challenge, and there will be failures; but we learn more from our failures than from our successes. Media production has always been a gamble; but the audiences of the 21st century make success easier to achieve than ever before.