Sharing Power (Global Edition)

My keynote for the Personal Democracy Forum, in New York.

Introduction: War is Over (if you want it)

Over the last year we have lived through a profound and perhaps epochal shift in the distribution of power. A year ago all the talk was about how to mobilize Facebook users to turn out on election day. Today we bear witness to a ‘green’ revolution, coordinated via Twitter, and participate as the Guardian UK crowdsources the engines of investigative journalism and democratic oversight to uncover the unpleasant little secrets buried in the MPs’ expenses scandal – secrets which the British government has done everything in its power to withhold.

We’ve turned a corner. We’re on the downward slope. It was a long, hard slog to the top – a point we obviously reached on 4 November 2008 – but now the journey is all about acceleration into a future that looks almost nothing like the past. The configuration of power has changed: its distribution, its creation, its application. The trouble with circumstances of acceleration is that they go hand-in-hand with a loss of control. At a certain point our entire global culture is liable to start hydroplaning, or worse, will go airborne. As the well-oiled wheels of culture leave the roadbed of civilization behind, we can spin the steering wheel all we want. Nothing will happen. Acceleration has its own rationale, and responds neither to reason nor desire. Force will meet force. Force is already meeting force.

What happens now, as things speed up, is a bit like what happens in the guts of CERN’s Large Hadron Collider. Different polities and institutions will smash and reveal their inner workings, like parts sprung from crashed cars. We can learn a lot – if we’re clever enough to watch these collisions as they happen. Some of these particles-in-collision will recognizably be governments or quasi-governmental organizations. Some will look nothing like them. But before we glory, Ballard-like, in the terrible beauty of the crash, we should remember that these institutions are, first and foremost, the domain of people, individuals ill-prepared for whiplash or a sudden impact with the windshield. No one is wearing a safety belt, even as things slip noticeably beyond control. Someone’s going to get hurt. That much is already clear.

What we urgently need, and do not yet have, is a political science for the 21st century. We need to understand the autopoietic formation of polities, which has been so accelerated and amplified in this era of hyperconnectivity. We need to understand the mechanisms of knowledge sharing among these polities, and how they lead to hyperintelligence. We need to understand how hyperintelligence transforms into action, and how this action spreads and replicates itself through hypermimesis. We have the words – or some of them – but we lack even an informal understanding of the ways and means. As long as this remains the case, we are subject to terrible accidents we can neither predict nor control. We can end the war between ourselves and our times. But first we must watch carefully. The collisions are mounting, and they have already revealed much. We have enough data to begin to draw a map of this wholly new territory.

I: The First Casualty of War

Last month saw an interesting and unexpected collision. Wikipedia, the encyclopedia created by and for the people, decreed that certain individuals and a certain range of IP addresses belonging to the Church of Scientology would hereafter be banned from editing Wikipedia. This directive came from the Arbitration Committee of Wikipedia, which sounds innocuous, but is in actuality the equivalent of the Supreme Court in the Wikipediaverse.

It seems that for some period of time – probably stretching into years – there have been any number of ‘edit wars’ (where edits are made and reverted, then un-reverted and re-reverted, ad infinitum) around articles concerning the Church of Scientology and certain personages in the Church. These pages have been subject to fierce edit wars between Church of Scientology members on one side, critics of the Church on the other, and, in the middle, Wikipedians, who attempted to referee the dispute, seeking, above all, to preserve the Neutral Point-of-View (NPOV) that the encyclopedia aspires to in every article. When this became impossible – when the Church of Scientology and its members refused to leave things alone – a consensus gradually formed within the tangled adhocracy of Wikipedia, finalized in last month’s ruling from the Arbitration Committee. For at least six months, several Church of Scientology members are banned by name, and all Church computers are banned from making edits to Wikipedia.

That would seem to be that. But it’s not. The Church of Scientology has been diligent in ensuring that the mainstream media (make no mistake, Wikipedia is now a mainstream medium) do not carry characterizations of Scientology which are unflattering to the Church. There’s no reason to believe that things will simply rest as they are now, that everyone will go off and skulk in their respective corners for six months, like children given a time-out. Indeed, the Chairman of Scientology, David Miscavige, quickly issued a press release comparing the Wikipedians to Nazis, asking, “What’s next, will Scientologists have to wear yellow, six-pointed stars on our clothing?”

How this skirmish plays out in the months and years to come will be driven by the structure and nature of these two wildly different organizations. The Church of Scientology is the very model of a modern religious hierarchy; all power and control flows down from Chairman David Miscavige through to the various levels of Scientology. With Wikipedia, no one can be said to be in charge. (Jimmy Wales is not in charge of Wikipedia.) The whole thing chugs along as an agreement, a social contract between the parties participating in the creation and maintenance of Wikipedia. Power flows in Wikipedia are driven by participation: the more you participate, the more power you’ll have. Power is distributed laterally: every individual who edits Wikipedia holds a share of its ultimate authority.

What happens when these two organizations, so fundamentally mismatched in their structures and power flows, attempt to interact? The Church of Scientology uses lawsuits and the threat of lawsuits as a coercive technique. But Wikipedia has thus far proven immune to lawsuits. Although there is a non-profit entity behind Wikipedia, running its servers and paying for its bandwidth, that is not Wikipedia. Wikipedia is not the machines, it is not the bandwidth, it is not even the full database of articles. Wikipedia is a social agreement. It is an agreement to share what we know, for the greater good of all. How does the Church of Scientology control that? This is the question that confronts every hierarchical organization when it collides with an adhocracy. Adhocracies present no control surfaces; they are at once both entirely transparent and completely smooth.

This could all get much worse. The Church of Scientology could ‘declare war’ on Wikipedia. A general in such a conflict might work to poison the social contract which powers Wikipedia, sowing mistrust, discontent and the presumption of malice within a community that thrives on trust, consensus-building and adherence to a common vision. Striking at the root of the social contract which is the whole of Wikipedia could possibly disrupt its internal networks and dissipate the human energy which drives the project.

Were we on the other side of the conflict, running a defensive strategy, we would seek to reinforce Wikipedia’s natural strength – the social agreement. The stronger the social agreement, the less effective any organized attack will be. A strong social agreement implies a depth of social resources which can be deployed to prevent or rapidly ameliorate damage.

Although this conflict between the Church of Scientology and Wikipedia may never explode into a full-blown war, at some point in the future, some other organization or institution will collide with Wikipedia, and battle lines will be drawn. The whole of this quarter of the 21st century looks like an accelerating series of run-ins between hierarchical organizations and adhocracies. What happens when the hierarchies find that their usual tools of war are entirely mismatched to their opponent?

II: War is Hell

Even the collision between friendly parties, when thus mismatched, can be devastating. Rasmus Kleis Nielsen, a PhD student in Columbia’s Communications program, wrote an interesting study a few months ago in which he looked at “communication overload”, which he identifies as a persistent feature of online activism. Nielsen specifically studied the 2008 Democratic Primary campaign in New York, and learned that some of the best-practices of the Obama campaign failed utterly when they encountered an energized and empowered public.

The Obama campaign encouraged voters to communicate through its website, both with one another and with the campaign’s New York staff. Although New York had been written off by the campaign (Hillary Clinton was sure to win her home state), the state still housed many very strong and vocal Obama supporters (apocryphally, all from Manhattan’s Upper West Side). These supporters flooded into the Obama campaign website for New York, drowning out the campaign itself. As election day loomed, campaign staffers retreated to “older” communication techniques – that is, mobile phones – while Obama’s supporters continued the conversation through the website. A complete disconnection between campaign and supporters occurred, even though the parties had the same goals.

Political campaigns may be chaotic, but they are also very hierarchically structured. There is an orderly flow of power from top (candidate) to bottom (voter). Each has an assigned role. When that structure is short-circuited and replaced by an adhocracy, the instrumentality of the hierarchy overloads. We haven’t yet seen the hybrid beast which can function hierarchically yet interact with an adhocracy. At this point, when the two touch, the hierarchy simply shorts out.

Another example from the Obama general election campaign illustrates this tendency for hierarchies to short out when interacting with friendly adhocracies. Project Houdini was touted as a vast, distributed get-out-the-vote (GOTV) program which would allow tens of thousands of field workers to keep track of who had voted and who hadn’t. Project Houdini was among the most ambitious of the online efforts of the Obama campaign, and was thoroughly tested in the days leading up to the general election. But, once election day came, Project Houdini went down almost immediately under the volley of information coming in from every quadrant of the nation, from fieldworkers thoroughly empowered to gather and report GOTV data to the campaign. A patchwork backup plan allowed the campaign to tame the torrent of data, channeling it through field offices. But the great vision of the Obama campaign, to empower individuals to gather and report GOTV data, came crashing down, because the system simply couldn’t handle the crush of empowered field workers.

Both of these collisions happened in ‘friendly fire’ situations, where everyone’s eyes were set on achieving the same goal. But these two systems of organization are so foreign to one another that we still haven’t seen any successful attempt to span the chasm that separates them. Instead, we see collisions and failures. The political campaigns of the future must learn how to cross that gulf. While some may wish to turn the clock back to an earlier time when campaigns respected carefully-wrought hierarchies, the electorates of the 21st century, empowered in their own right, have already come to expect that their candidates’ campaigns will meet them in that empowerment. The next decade is going to be completely hellish for politicians and campaign workers of every party as new rules and systems are worked out. There are no successful examples – yet. But circumstances are about to force a search for solutions.

III: War is Peace

As governments release the vast amounts of data held and generated by them, communities of interest are rising up to work with that data. As these communities become more knowledgeable, more intelligent – hyperintelligent – via this exposure, this hyperintelligence will translate into action: hyperempowerment. This is all well and good so long as the aims of the state are the same as the aims of the community. A community of hyperempowered citizens can achieve lofty goals in partnership with the state. But even here, the hyperempowered community faces a mismatch with the mechanisms of the state. The adhocracy by which the community thrives has no easy way to match its own mechanisms with those of the state. Even with the best intentions, every time the two touch there is the risk of catastrophic collapse. The failures of Project Houdini will be repeated, and this might lead some to argue that the opening up itself was a mistake. In fact, these catastrophes are the first sign of success. Connection is being made.

In order to avoid catastrophe, the state – and any institution which attempts to treat with a hyperintelligence – must radically reform its own mechanisms of communication. Top-down hierarchies which order power precisely can not share power with hyperintelligence. The hierarchy must open itself to a more chaotic and fundamentally less structured relationship with the hyperintelligence it has helped to foster. This is the crux of the problem, asking the leopard to change its spots. Only in transformation can hierarchy find its way into a successful relationship with hyperintelligence. But can any hierarchy change without losing its essence? Can the state – or any institution – become more flexible, fluid and dynamic while maintaining its essential qualities?

And this is the good case, the happy outcome, where everyone is pulling in the same direction. What happens when aims differ, when some hyperintelligence decides, for whatever reason, that its goals are antithetical to the interests of an institution or a state? We’ve seen the beginnings of this in the weird, slow war between the Church of Scientology and ANONYMOUS, a shadowy organization which coordinates its operations through a wiki. In recent weeks ANONYMOUS has also taken on the Basij paramilitaries in Iran, and China’s internet censors. ANONYMOUS pools its information, builds hyperintelligence, and translates that hyperintelligence into hyperempowerment. Of course, they don’t use these words. ANONYMOUS is simply a creature of its times, born in an era of hyperconnectivity.

It might be more profitable to ask what happens when some group, working the data supplied at Recovery.gov or Data.gov or you-name-it.gov, learns of something that they’re opposed to, then goes to work blocking the government’s activities. In some sense, this is good old-fashioned activism, but it is amplified by the technologies now at hand. That amplification could be seen as a threat by the state; such activism could even be labeled terrorism. Even when this activism is well-intentioned, the mismatch and collision between the power of the state and any hyperempowered polities means that such mistakes will be very easy to make.

We will need to engage in a close examination of the intersection between the state and the various hyperempowered actors rising up over the next few years. Fortunately, the Obama administration, in its drive to make government data more transparent and more accessible (and thereby more likely to generate hyperintelligence around it), has provided the perfect laboratory to watch these hyperintelligences as they emerge and spread their wings. Although communications PhD candidates will undoubtedly be watching and taking notes, public policy-makers should also closely observe everything that happens. Since the rules of the game are changing, observation is the first and most necessary step toward a rational future. Examining the pushback caused by these newly emerging communities will give us our first workable snapshot of a political science for the 21st century.

The 21st century will continue to see the emergence of powerful and hyperempowered communities. Sometimes these will challenge hierarchical organizations, such as with Wikipedia and the Church of Scientology; sometimes they will work with hierarchical organizations, as with Project Houdini; and sometimes it will be very hard to tell what the intended outcomes are. In each case the hierarchy – be it a state or an institution – will have to adapt itself into a new power role, a new sharing of power. In the past, like paired with like: states shared power with states, institutions with institutions, hierarchies with hierarchies. We are leaving this comfortable and familiar time behind, headed into a world where actors of every shape and description find themselves sufficiently hyperempowered to challenge any hierarchy. Even when they seek to work with a state or institution, they present challenges. Peace is war. In either direction, the same paradox confronts us: power must surrender power, or be overwhelmed by it. Sharing power is not an ideal of some utopian future; it’s the ground truth of our hyperconnected world.

Everywhere

I.

Sydney looks very little different from the city of Gough Whitlam’s day. Although almost forty years have passed, we see most of the same concrete monstrosities at the Big End of town, the same terrace houses in Surry Hills and Paddington, the same mile-after-mile of brick dwellings in the outer suburbs. Sydney has grown a bit around the edges, bumping up against the natural frontiers of our national parks, but, for a time-traveler, most things would appear nearly exactly the same.

That said, the life of the city is completely different. This is not because a different generation of Australians, from all corners of the world, inhabits the city. Rather, the city has acquired a rich inner life, an interiority which, though invisible to the eye, has become entirely pervasive, and completely dominates our perceptions. We walk the streets of the city, but we swim through an invisible ether of information. Just a decade ago we might have been said to have jumped through puddles of data, hopping from one to another as a five-year-old might in a summer rainstorm. But the levels have constantly risen, in a curious echo of global warming, until, today, we must swim hard to stay afloat.

The individuals in our present-day Sydney stride the streets with divided attention, one eye scanning the scene before them, the other almost invariably on a mobile phone: sending a text, returning a call, using the GPS satellites to locate an address. Where, four decades ago, we might have kept a wary eye on passers-by, today we focus our attentions into the palms of our hands, playing with our toys. The least significant of these toys are the stand-alone entertainment devices; the iPods and their ilk, which provide a continuous soundtrack for our lives, and which insulate us from the undesired interruptions of the city. These are pleasant, but unimportant.

The devices which allow us to peer into and sail the etheric sea of data which surrounds us, these are the important toys. It’s already become an accepted fact that a man leaves the house with three things in his possession: his wallet, his keys, and his mobile. I have a particular pat-down I practice as the door to my flat closes behind me, a ritual of reassurance that tells me that yes, I am truly ready for the world. This behavioral transformation was already well underway when I first visited Sydney in 1997, and learned, from my friends’ actions, that mobile phones acted as a social lubricant. Dates could be made, rescheduled, or broken on the fly, effortlessly, without the painful social costs associated with standing someone up.

This was not a unique moment; it was simply the first in an ever-increasing series of transformations of human behavior, as the social accelerator of continuous communication became a broadly-accepted feature of civilization. The transition to frictionless social intercourse was quickly followed by a series of innovations which removed much of the friction from business and government. As individuals we must work with institutions and bureaucracies, but we have more ways to reach into them – and they, into us – than ever before. Businesses, in particular, realized that they could achieve both productivity gains and cost savings by leveraging the new facilities of communication. This relationship between commerce and the consumer produced an accelerating set of feedbacks which translated the very physical world of commerce into an enormous virtual edifice, one which sought every possible advantage of virtualization, striving to reach its customers through every conceivable mechanism.

Now, as we head into the winter of 2008, we live in a world where a seemingly stable physical environment is entirely overlaid and overweighed by a virtual world of connection and communication. The physical world has, in large part, lost its significance. It’s not that we’ve turned away from the physical world, but rather, that the meaning of the physical world is now derived from our interactions within the virtual world. The conversations we have, between ourselves and with the institutions which serve us, frame the world around us. A bank is no longer an imposing edifice with marble columns, but an EFTPOS swipe or a statement displayed in a web browser. The city is no longer streets and buildings, but flows of people and information, each invisibly connected through pervasive wireless networks.

It is already a wireless world. That battle was fought and won years ago; truly, before anyone knew the battle had been joined, it was effectively over. We are as wedded to this world as to the physical world – perhaps even more so. The frontlines of development no longer concern themselves with the deployment of wireless communications, but rather with their increasing utility.

II.

Utility has a value. How much is it worth to me to be able to tell a mate that I’m delayed in traffic and can’t make dinner on time? Is it worth a fifty-cent voice call, or a twenty-five cent text (which may go through several iterations, and, in the end, cost me more)? Clearly it is; we are willing to pay a steep price to keep our social relationships on an even keel. What about our business relationships? How much is it worth to be able to take a look at the sales brochure for a store before we enter it? How much is it worth to find it on a map, or get directions from where we are? How much is it worth to send an absolutely vital email to a business client?

These are the economics that have ruled the tariff structures of wireless communications, both here in Australia and in the rest of the world. Bandwidth, commonly thought of as a limited resource, must be paid for. Infrastructure must be paid for. Shareholders must receive a fair return on their investments. All of these points, while valid, do not tell the whole story. The tariff structure acts as a barrier to communication, a barrier which can only be crossed if the perceived value is greater than the costs incurred. In the situations outlined above, this is often the case, and is thus the basis for the wireless telecomms industry. But there are other economics at work, and these economics dictate a revision to this monolithic ordering of business affairs.

Chris Anderson, the editor of WIRED magazine, has been writing a series of essays in preparation for the publication of his next book, Free: Why $0.00 is the Future of Business. In his first essay – published in WIRED magazine, of course – Anderson takes a look at Moore’s Law, which promises that the cost of a transistor will halve every eighteen months, a rule that’s proven continuously true since Intel co-founder Gordon Moore proposed it, back in 1965. Somewhere around 1973, Anderson notes, Carver Mead, the father of VLSI, realized that individual transistors were becoming so small and so cheap as to be essentially free. Yes, in aggregates of hundreds of millions, transistors cost a few tens of dollars. But at the level of single circuits, these transistors are free, and can be “wasted” to provide some additional functionality at essentially zero additional cost. When, toward the end of the 1970s, the semiconductor industry embraced Mead’s design methodology, the silicon revolution began in earnest, powered by ever-cheaper transistors that could, as far as the designer was concerned, be considered entirely expendable.

Google has followed a similar approach to profitability. Pouring hundreds of millions of dollars into a distributed, networked architecture which crawls and indexes the Web, Google provides its search engine for free, in the now-substantiated belief that something made freely available can still generate a very decent profit. Google designed its own, cheap computers, its own, cheap operating system, and fit these into its own, expensive data centers, linked together with relatively inexpensive bandwidth. Yahoo! and Microsoft – and Baidu and Facebook and MySpace – have followed similar paths to profitability. Make it free, and make money.

This seems counterintuitive, but herein is the difference between the physical and virtual worlds; the virtual world, insubstantial and pervasive, has its own economies of scale, which function very differently from the physical world. In the virtual world, the more a resource is shared, the more valuable it becomes, so ubiquity is the pathway to profitability.

We do not think of bandwidth as a virtual resource, one that can simply be burned. In Australia, we think of bandwidth as being an expensive and scarce resource. This is not true, and has never been particularly true. Over the time I’ve lived in this country (four and a half years) I’ve paid the same fixed amount for my internet bandwidth, yet today I have roughly six times the bandwidth, and seven times the download cap. Bandwidth is following the same curve as the transistor, because the cost of bandwidth is directly correlated to the cost of transistors.
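To put a rough number on that claim (my own back-of-the-envelope figures, not anything from a carrier’s rate card): if the same monthly price buys roughly six times the bandwidth after four and a half years, the implied halving time of the cost per megabit is about twenty months – the same order as the eighteen-month transistor curve.

```python
import math

# Back-of-the-envelope sketch (illustrative assumptions, not carrier data):
# if the same monthly price buys ~6x the bandwidth after 4.5 years,
# what halving time for the cost per megabit does that imply?

years = 4.5           # period over which the plan stayed at the same price
speed_multiple = 6.0  # bandwidth obtained today vs. then, at that price

# Cost per megabit fell by a factor of speed_multiple over `years`,
# so: halving_time = years * ln(2) / ln(speed_multiple)
halving_time = years * math.log(2) / math.log(speed_multiple)

print(f"Implied halving time of cost per megabit: {halving_time:.2f} years")
# ~1.7 years, or roughly 21 months - the same order as the transistor curve.
```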

Last year I upgraded to a 3G mobile handset, the Nokia N95, and immediately moved from GPRS speeds to HSDPA speeds – roughly 100x faster – but I am still spending the same amount for my mobile, on a monthly basis. I know that some Australian telcos see Vodafone’s tariff policy as sheer lunacy. But I reckon that Vodafone understands the economics of bandwidth. Vodafone understands that bandwidth is becoming free; the only way they can continue to benefit from my custom is if they continuously upgrade my service – just like my ISP.

Telco tariffs are predicated on the basic idea that spectrum is a limited resource. But spectrum is not a limited resource. Allocations are limited, yes, and licensed from the regulatory authorities for many millions of dollars a year. But spectrum itself is not in any wise limited. The 2.4 GHz band is proof positive of this. Just that tiny slice of spectrum is responsible for more revenue than any other slice of spectrum, outside of the GSM and 3G bands. Why is this? Because the 2.4 GHz band is unlicensed, engineers and designers have had to teach their varied devices to play well with one another, even in hostile environments. I can use a Bluetooth headset right next to my WiFi-enabled MacBook, and never experience any problems, because these devices use spread-spectrum and spectrum-hopping to behave politely. My N95 can use WiFi and Bluetooth networking simultaneously – yet there’s never interference.
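A toy simulation makes the point about politeness. The figures below – 79 one-megahertz hop channels for a Bluetooth-style device, a WiFi channel occupying roughly 22 MHz of the band – are standard numbers for 2.4 GHz, but the model itself is a deliberately crude sketch rather than radio engineering: a device hopping blindly lands on the WiFi-occupied slice only a little over a quarter of the time, and adaptive hopping, which drops channels observed to be busy, shrinks even that.

```python
import random

# Toy model of 2.4 GHz coexistence (illustrative only, not radio engineering):
# a Bluetooth-style device hops pseudo-randomly across 79 x 1 MHz channels,
# while a WiFi network sits on a fixed ~22 MHz slice of the same band.

BAND_CHANNELS = 79               # 1 MHz hop channels in the 2.4 GHz band
WIFI_START, WIFI_WIDTH = 10, 22  # assumed position and width of the WiFi channel
HOPS = 100_000

def overlaps_wifi(channel: int) -> bool:
    """A hop 'collides' if it lands inside the slice WiFi occupies."""
    return WIFI_START <= channel < WIFI_START + WIFI_WIDTH

# Naive hopping: pick any channel at random, busy or not.
naive = sum(overlaps_wifi(random.randrange(BAND_CHANNELS)) for _ in range(HOPS))

# Adaptive hopping: channels observed to be busy are dropped from the hop set,
# roughly the trick modern frequency-hopping devices use to stay polite.
clean = [c for c in range(BAND_CHANNELS) if not overlaps_wifi(c)]
adaptive = sum(overlaps_wifi(random.choice(clean)) for _ in range(HOPS))

print(f"naive hopping:    {naive / HOPS:.1%} of hops overlap the WiFi channel")
print(f"adaptive hopping: {adaptive / HOPS:.1%} of hops overlap the WiFi channel")
```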

Unlicensed spectrum is not anarchy. It is an invitation to innovate. It is an open door to the creative engines of the economy. It is the most vital part of the entire wireless world, because it is the corner of the wireless world where bandwidth already is free.

III.

And so back to the city outside the convention center walls, crowded with four million people, each eagerly engaged in their own acts of communication. Yet these moments are bounded by an awareness of the costs of this communication. These tariffs act as a fundamental brake on the productivity of the Australian economy. They fetter the means of production. And so they must go.

I do not mean that we should nationalize the telcos – we’ve already been there – but rather, that we must engage in creating a new generation of untariffed networks. The technology is already in place. We have cheap and durable mesh routers, such as the Open-Mesh and the Meraki, which can be dropped almost anywhere, powered by sun or by mains, and can create a network that covers nearly a quarter of a square kilometer. We can connect these access points to our wired networks, and share some small portion of our ever-increasing bandwidth wealth with the public at large, so that no matter where they are in this city – or in this nation – they can access the wireless world. And we can secure these networks to prevent fraud and abuse.

Such systems already exist. In the past eight months, Meraki has given their $50 WiFi mesh routers to any San Franciscan willing to donate some of their ever-cheaper bandwidth to a freely available municipal network. When I started tracking the network, it had barely five thousand users. Today, it has over seventy thousand – that’s about one-tenth of the city. San Francisco is a city of hills and low buildings – it’s hard to get real reach from a wireless signal. In Sydney, Melbourne, Adelaide, Brisbane and Perth – which are all built on flats – a little signal goes a long, long way. From my flat in Surry Hills I can cover my entire neighborhood. If another of my neighbors decides to contribute, we can create a mesh which reaches further into the neighborhood, where it can link up with another volunteer still further along, and so on, and so on, until the entirety of my suburb is bathed in freely available wireless connectivity.
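The arithmetic behind that “and so on” is simple enough to sketch. Assuming the quarter-kilometer effective radius mentioned above and a suburb of a couple of square kilometers (both assumptions – real coverage depends heavily on buildings and terrain), a few dozen volunteers would suffice, even allowing for generous overlap between nodes:

```python
import math

# Rough coverage estimate (illustrative assumptions throughout):
# how many volunteered mesh nodes would it take to blanket a small suburb?

node_radius_m = 250     # assumed effective outdoor reach of one mesh router
suburb_area_km2 = 2.0   # assumed size of an inner-city suburb
overlap_factor = 3.0    # extra nodes to allow for dead spots and mesh overlap

node_area_km2 = math.pi * (node_radius_m / 1000) ** 2   # ~0.2 km^2 per node
nodes_needed = math.ceil(overlap_factor * suburb_area_km2 / node_area_km2)

print(f"One node covers roughly {node_area_km2:.2f} square kilometers")
print(f"Nodes needed for a {suburb_area_km2:.0f} km^2 suburb, with overlap: {nodes_needed}")
# On these assumptions: on the order of thirty volunteers to cover a whole suburb.
```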

While this may sound like a noble idea, that is not the reason it is a good idea. Free wireless is a good idea because it enables an entirely new level of services – services which, because of tariffs, would not otherwise make economic sense. Much of the information those services would carry has value – perhaps great value, to some – but no direct economic value. This is where the true strength of free wireless shows itself: it enables a broad participation in the electronic life of the city by all participants – individuals, businesses, and institutions – without the restraint of economic trade-offs.

This unlicensed participation has no form as yet, because we haven’t deployed the free wireless network beyond a few select spots in Australia’s cities. But, once the network has been deployed, some enterprising person will develop the “killer app” for this network, something so unexpected, yet so useful, that it immediately becomes apparent that the network is an incredibly valuable resource, one which will improve human connectivity, business productivity, and the delivery of services. Something that, once established, will be seen as an absolutely necessary feature in the life of the city.

Businessmen hate to deal in intangibles, or wild-eyed “science projects.” So instead, let me present you with a fait accompli: This is happening. We’re reaching a critical mass of WiFi devices in our dense urban cores. Translating these devices into nodes within city-spanning mesh networks requires only a simple software upgrade. It doesn’t require a hardware build-out. The transformation, when it comes, will happen suddenly and completely, and it will change the way we view the city.

The question then, is simple: are you going to wait for this day, or are you going to help it along? It could be slowed down, fettered by lawsuits and regulation. Or it could be accelerated into inevitability. We’re at a transition point now, between the tariffed networks we have lived with for the last decade, and the new, free networks, which are organically popping up in Australia and throughout the world. Both networks will co-exist; a free network actually increases the utility of a tariffed mobile network.

So, do you want to fight it? Or do you want to switch it on?

Understanding Gilmore’s Law: Telecoms Edition

OR,
How I Quit Worrying and Learned to be a Commodity

Introduction


“The net interprets censorship as damage and routes around it.”
– John Gilmore

I read a very interesting article last week. It turns out that, despite its best efforts, the Communist government of the People’s Republic of China has failed to insulate its prodigious population from the outrageous truths to be found online. In the article from the Times, Wang Guoqing, a vice-minister in the information office of the Chinese cabinet, was quoted as saying, “It has been repeatedly proved that information blocking is like walking into a dead end.” If China, with all of the resources of a one-party state, and thus able to “lock down” its internet service providers, directing their IP traffic through a “great firewall of China”, can not block the free flow of information, how can any government, anywhere – or any organization, or institution – hope to try?

Of course, we all chuckle a little bit when we see the Chinese attempt the Sisyphean task of damming the torrent of information which characterizes life in the 21st century. We, in the democratic West, know better, and pat ourselves on the back. But we are in no position to throw stones. Gilmore’s Law is not specifically tuned for political censorship; censorship simply means the willful withholding of information – for any reason. China does it for political reasons; in the West our reasons for censorship are primarily economic. Take, for example, the hullabaloo associated with the online release of Harry Potter and the Deathly Hallows, three days before its simultaneous, world-wide publication. It turns out that someone, somewhere, got a copy of the book, and laboriously photographed every single page of the 784-page text, bound these images together into a single PDF file, and then uploaded it to the global peer-to-peer filesharing networks. Everyone with a vested financial interest in the book – author J.K. Rowling, Bloomsbury and Scholastic publishing houses, film studio Warner Brothers – had been feeding the hype for the impending release, all focused around the 21st of July. An enormous pressure had been built up to “peek at the present” before it was formally unwrapped, and all it took was one single gap in the $20 million security system Bloomsbury had constructed to keep the text safely secure. Then it became a globally distributed media artifact. Curiously, Bloomsbury was reported as saying they thought it would only add to sales – if many people are reading the book now, even illegally, then even more people will want to be reading the book right now. Piracy, in this case, might be a good thing.

These examples represent two data points which show the breadth and reach of Gilmore’s Law. Censorship, broadly defined, is anything which restricts the free flow of information. The barriers could be political, or they could be economic, or they could – as in the case immediately relevant today – be a nexus of the two. Broadband in Australia is neither purely an economic nor purely a political issue. In this, broadband reflects the Janus-like nature of Telstra, with one face turned outward, toward the markets, and another turned inward, toward the Federal Government. Even though Telstra is now (more or less) wholly privatized, the institutional memory of all those years as an arm of the Federal Government hasn’t yet faded. Telstra still behaves as though it has a political mandate, and is more than willing to use its near-monopoly economic strength to reinforce that impression.

Although seemingly unavoidable, given the established patterns of the organization, Telstra’s behavior has consequences. Telstra has engendered enormous resentment – both from its competitors and its customers – for its actions and attitude. It has recently pushed the Government too far (at least, publicly), and has been told to back off. What may not be as clear – and what I want to warn you of today – is how Telstra has sown the seeds of its own failure. What’s more, this may not be anything that Telstra can now avoid, because this is neither a regulatory nor an economic failure. It can not be remedied by any mechanism that Telstra has access to. Instead, it may require a top-down rethinking of the entire business.

I: Network Effects

For the past several thousand years, the fishermen of Kerala, on the southern coast of India, have sailed their dhows out into the Indian Ocean, lowered their nets, and hoped for the best. When the fishing is good, they come back to shore fully laden, and ready to sell their catch in the little fish markets that dot the coastline. A fisherman might have a favorite market, docking there only to find that half a dozen other dhows have had the same idea. In that market there are too many fish for sale that day, and the fisherman might not even earn enough from his catch to cover costs. Meanwhile, in a market just a few kilometers away, no fishing boats have docked, and there’s no fish available at any price. This fundamental chaos of the fish trade in Kerala has been a fact of life for a very long time.

Just a few years ago, several of India’s rapidly-growing wireless carriers strung GSM towers along the Kerala coast. This gives those carriers a signal reach of up to about 25km offshore – enough to be very useful for a fisherman. While mobile service in India is almost ridiculously cheap by Australian standards – many carriers charge a penny for an SMS, and a penny or two per minute for voice calls – a handset is still relatively expensive, even one such as the Nokia 1100, which was marketed specifically at emerging mobile markets, designed to be cheap and durable. Such a handset might cost a month’s profits for a fisherman – which makes it a serious investment. But, at some point in the last few years, one fisherman – probably a more prosperous one – bought a handset, and took it to sea. Then, perhaps quite accidentally, he learned, through a call ashore, of a market wanting for fish that day, brought his dhow to dock there, and made a handsome profit. After that, the word got around rapidly, and soon all of Kerala’s fishermen were sporting their own GSM handsets, calling into shore, making deals with fishmongers, acting as their own arbitrageurs, creating a true market where none had existed before. Today in Kerala the markets are almost always stocked with just enough fish; the fishmongers make a good price for their fish, and the fishermen themselves earn enough to fully recoup the cost of their handsets in just two months. Mobile service in Kerala has dramatically altered the economic prospects for these people.

This is not the only example: in Kenya farmers call ahead to the markets to learn which ones will have the best prices for their onions and maize; spice traders, again in Kerala, use SMS to create their own, far-flung bourse. Although we in the West generally associate mobile communications with affluent lifestyles, a significant number of microfinance loans made by Grameen Bank in Bangladesh, and others in Pakistan, India, Africa and South America are used to purchase mobile handsets – precisely because the correlation between access to mobile communications and earning potential has become so visible in the developing world. Grameen Bank has even started its own carrier, GrameenPhone, to service its microfinance clientele.

Although economists are beginning to recognize and document this curious relationship between economics and access to communication, it needs to be noted that this relationship was not predicted – by anyone. It happened all by itself, emerging from the interaction of individuals and the network. People – who are always the intelligent actors in the network – simply recognized the capabilities of the network, and put them to work. As we approach the watershed month of October 2007, when three billion people will be using mobile handsets, when half of humanity will be interconnected, we can expect more of the unexpected.

All of this means that none of us – even the most foresighted futurist – can know in advance what will happen when people are connected together in an electronic network. People themselves are too resourceful, and too intelligent, for their behavior to be modeled in any realistic way. We might be able to model their network usage – though even that has confounded the experts – but we can’t know why they’re using the network, nor what kind of second-order effects that usage will have on culture. Nor can we realistically provision for service offerings; people are more intelligent, and more useful, than any other service the carriers could hope to offer. The only truly successful service offering in mobile communications is SMS – because it provides an asynchronous communications channel between people. The essential feature of the network is simply that it connects people together, not that it connects them to services.

This strikes at the heart of the most avaricious aspects of the carriers’ long-term plans, which center around increasing the levels of services on offer, by the carrier, to the users of the network. Although this strategy has consistently proven to be a complete failure – consider CompuServe, Prodigy and AOL – it nevertheless has become the idée fixe of shareholder reports, corporate plans, and press releases. The network, we are told, will become increasingly more intelligent, more useful, and more valuable. But all of the history of the network argues directly against this. Nearly 40 years after its invention, the most successful service on the Internet is still electronic mail, the Internet’s own version of SMS. Although the Web has become an important service in its own right, it will never be as important as electronic mail, because electronic mail connects individuals.

Although the network in Kerala was brought into being by the technology of GSM transponders and mobile handsets, the intelligence of the network truly does lie in the individuals who are connected by the network. Let’s run a little thought experiment, and imagine a world where all of India’s telecoms firms suffered a simultaneous catastrophic and long-lasting failure. (Perhaps they all went bankrupt.) Do you suppose that the fishermen would simply shrug their shoulders and go back to their old, chaotic market-making strategies? Hardly. Whether they used smoke signals, or semaphores, or mirrors on the seashore, they’d find some way to maintain those networks of communication – even in the absence of the technology of the network. The benefits of the network so outweigh the costs of implementing it that, once created, networks can not be destroyed. The network will be rebuilt from whatever technology comes to hand – because the network is not the technology, but the individuals connected through it.

This is the kind of bold assertion that could get me into a lot of trouble; after all, everyone knows that the network is the towers, the routers, and the handsets which comprise its physical and logical layers. But if that were true, then we could deterministically predict the qualities and uses of networks well in advance of their deployment. The quintessence of the network is not a physical property; it is an emergent property of the interaction of the network’s users. And while people do persistently believe that there is some “magic” in the network, the source of that magic is the endlessly inventive intellects of the network’s users. When someone – anywhere in the network – invents a new use for the network, it propagates widely, and almost instantaneously, transmitted throughout the length and breadth of the network. The network amplifies the reach of its users, but it does not goad them into being inventive. The real service providers are the users of the network.

I hope this gives everyone here some pause; after all, it is widely known that the promise to bring a high-speed broadband network to Australia is paired with the desire to provide services on that network, including – most importantly – IPTV. It’s time to take a look at that promise with our new understanding of the real power of networks. It is under threat from two directions: the emergence of peer-produced content, and the dramatic, disruptive collapse in the price of high-speed wide-area networking, which will fully empower individuals to create their own network infrastructure.

II: DIYnet

Although nearly all high-speed broadband providers – which are, by and large, monopoly or formerly monopoly telcos – have bet the house on the sale of high-priced services to finance the build-out of high-speed (ADSL2/FTTN/FTTH) network infrastructure, it is not at all clear that these service offerings will be successful. Mobile carriers earn some revenue from ringtone and game sales, but this is a trivial income stream when compared to the fees they earn from carriage. Despite almost a decade of efforts to milk more ARPU (average revenue per user) from their customers, those same customers have proven stubbornly resistant to a continuous fleecing. The only thing that customers seem obviously willing to pay for is more connectivity – whether that’s more voice calls, more SMS, or more data.

What is most interesting is what these customers have done with this ever-increasing level of interconnectivity. These formerly passive consumers of entertainment have become their own media producers, and – perhaps more ominously, in this context – their own broadcasters. Anyone with a cheap webcam (or mobile handset), a cheap computer, and a broadband link can make and share their own videos. This trend had been growing for several years, but since the launch of YouTube, in 2005, it has rocketed into prominence. YouTube is now the 4th busiest website, world-wide, and perhaps 65% of all video downloads on the web take place through Google-owned properties. Amateur productions regularly garner tens of thousands of viewers – and sometimes millions.

We need to be very careful about how we judge the meaning of the word “amateur” in the context of peer-produced media. An amateur production may be produced with little or no funding, but that does not automatically mean it will appear clumsy to the audience. The rough edges of an amateur production are balanced out by a corresponding increase in salience – that is, the importance which the viewer attaches to the subject of the media. If something is compelling because it is important to us – something which we care passionately about – high production values do not enter into our assessment. Chad Hurley, one of the founders of YouTube, has remarked that the site has no “gold-standard” for production; in fact, YouTube’s gold-standard is salience – if the YouTube audience feels the work is important, audience members will share it within their own communities of interest. Sharing is the proof of salience.

After two years of media sharing, the audience for YouTube (which is now coincident with the global television audience in the developed world) has grown accustomed to being able to share salient media freely. This is another of the unexpected and unpredicted emergent effects of the intelligence of humans using the network. We now have an expectation that when we encounter some media we find highly salient, we should be able to forward it along within our social networks, sharing it within our communities of salience. But this is not the desire of many copyright holders, who collect their revenues by placing barriers to the access of media. This fundamental conflict, between the desire to share, as engendered by our own interactions with the network, and the desire of copyright holders to restrain media consumption to economic channels has, thus far, been consistently resolved in favor of sharing. The copyright holders have tried to use the legal system as a bludgeon to change the behavior of the audience; this has not worked, nor will it ever. But, as the copyright holders resort to ever-more-draconian techniques to maintain control over the distribution of their works, the audience is presented with an ever-growing world of works that are meant to be shared. The danger here is that the audience is beginning to ignore works which they can not share freely, seeing them as “broken” in some fundamental way. Since sharing has now become an essential quality of media, the audience is simply reacting to a perceived defect in those works. In this sense, the media multinationals have been their own worst enemies; by restricting the ability of the audiences to share the works they control, they have helped to turn audiences toward works which audiences can distribute through their own “do-it-yourself” networks.

These DIYnets are now a permanent fixture of the media landscape, even as their forms evolve through YouTube playlists, RSS feeds, and sharing sites such as Facebook and Pownce. These networks exist entirely outside the regular and licensed channels of distribution; they are not suitable – legally or economically – for distribution via a commercial IPTV network. Telstra can not provide these DIYnets to its customers through its IPTV service – nor can any other broadband carrier. IPTV, to a carrier, means the distribution of a few hundred highly regularized television channels. While there will doubtless be a continuing market for mass entertainment, that audience is continually being eroded by an expanding range of peer-produced programming which is growing in salience. In the long-term this, like so much in the world, will probably obey an 80/20 rule, with about 80 percent of the audience’s attention absorbed in peer-produced, highly-salient media, while 20 percent will come from mass-market, high-production-value works. It doesn’t make a lot of sense to bet the house on a service offering which will command such a small portion of the audience’s attention. Yes, Telstra will offer it. But it will never be able to compete with the productions created by the audience.

Because of this tension between the desires of the carrier and the interests of the audience, the carrier will seek to manipulate the capabilities of the broadband offering, to weight it in favor of a highly regularized IPTV offering. In the United States this has become known as the “net neutrality” argument, and centers on the question of whether a carrier has the right to shape traffic within its own IP network to advantage its own traffic over that of others. In Australia, the argument has focused on tariff rates: Telstra believes that if it builds the network, it should be able to set the tariff. The ACCC argues otherwise. This has been characterized as the central stumbling block which has prevented the deployment of a high-speed broadband network across the nation, and, in some sense, that is entirely true – Telstra has chosen not to move forward until it feels assured that both economic and regulatory conditions prove favorable. But this does not mean that the consumer demand for a high-speed network was simply put on pause over the last few years. More significantly, the world beyond Telstra has not stopped advancing. While it now costs roughly USD $750 per household to provide a high-speed fiber-optic connection to the carrier network, other technologies are coming on-line, right now, which promise to reduce those costs by an order of magnitude, and, furthermore, which don’t require any infrastructure build-out on the part of the carrier. This disruptive innovation could change the game completely.

III: Check, Mate

All parties to the high-speed broadband dispute – government, Telstra, the Group of Nine, and the public – share the belief that this network must be built by a large organization, able to command the billions of dollars in capital required to dig up the streets, lay the fiber, and run the enormous data centers. This model of a network is a reflection, in copper, plastic and silicon, of the hierarchical forms of organization which characterize large institutions – such as governments and carriers. However, if we have learned anything about the emergent qualities of networks, it is that they quickly replace hierarchies with “netocracies”: horizontal meritocracies, which use the connective power of the network to out-compete slower, more rigid hierarchies. It is odd that, while the network has transformed nearly everything it has touched, the purveyors of those networks – the carriers – somehow seem immune to those transformative qualities. Telecommunications firms are – and have ever been – the very definition of hierarchical organizations. During the era of plain-old telephone service, the organizational form of the carrier was isomorphic to the form of the network. However, over the last decade, as the internal network has transitioned from circuit-switched to packet-switched, the institution lost synchronization with the form of the network it provided to consumers. As each day passes, carriers move even further out of sync: this helps to explain the current disconnect between Telstra and Australians.

We are about to see an adjustment. First, the data on the network was broken into packets; now, the hardware of the network has followed. Telephone networks were centralized because they required explicit wiring from point-to-point; cellular networks are decentralized, but use licensed spectrum – which requires enormous capital resources. Both of these conditions created significant barriers to entry. But there is no need to use wires, nor is there any need to use licensed spectrum. The 2.4 GHz radio band is freely available for anyone to use, so long as that use stays below certain power values. We now see a plethora of devices using that spectrum: cordless handsets, Bluetooth devices, and the all-but-ubiquitous 802.11 “WiFi” data networks. The chaos which broadcasters and governments had always claimed would be the by-product of unlicensed spectrum has, instead, become a wonderfully rich marketplace of products and services. The first generation of these products made connection to the centralized network even easier: cordless handsets liberated the telephone from the twisted-pair connection to the central office, while WiFi freed computers from heavy and clumsy RJ-45 jacks and CAT-5 cabling. While these devices had some intelligence, that intelligence centered on making and maintaining a connection to the centralized network.

Recently, advances in software have produced a new class of devices which create their own networks. Devices connected to these ad-hoc “mesh” networks act as peers in a swarm (similar to the participants in peer-to-peer filesharing), rather than clients within a hierarchical distribution system. These network peers share information about their evolving topology, forming a highly-resilient fabric of connections. Devices maintain multiple connections to multiple nodes throughout the network, and a packet travels through the mesh along a non-deterministic path. While this was always the promise of TCP/IP networks, static routes through the network cloud are now the rule, because they provide greater efficiency, make it easier to maintain routers and diagnose network problems, and keep maintenance costs down. But mesh networks are decentralized; there is no controlling authority, no central router providing an interconnection with a peer network. And – most significantly – mesh networks are now incredibly inexpensive to implement.
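To make the contrast with static routing concrete, here is a minimal sketch of the property that matters – a toy graph and a breadth-first search, not Meraki’s actual routing code (which I haven’t seen): every node knows only its neighbours, and when one node disappears the remaining peers simply find a path around the hole.

```python
from collections import deque

# Toy mesh: each node knows only its immediate neighbours.
# An illustrative sketch of mesh resilience, not any vendor's routing algorithm.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "E"},
    "D": {"B", "E", "F"},
    "E": {"C", "D", "F"},
    "F": {"D", "E"},
}

def find_path(graph, src, dst):
    """Breadth-first search: return some path from src to dst, or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in sorted(graph[path[-1]] - seen):
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

def drop_node(graph, dead):
    """Simulate a node going offline; its neighbours simply forget about it."""
    return {n: nbrs - {dead} for n, nbrs in graph.items() if n != dead}

print("A -> F with all nodes up: ", find_path(mesh, "A", "F"))
print("A -> F after node D dies: ", find_path(drop_node(mesh, "D"), "A", "F"))
# The second path routes around the failure: no central router had to intervene.
```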

Earlier this year, the US-based firm Meraki launched their long-awaited Meraki Mini wireless mesh router. For about AUD $60, plus the cost of electricity, anyone can become a peer within a wireless mesh network providing speeds of up to 50 megabits per second. The device is deceptively simple; it’s just an 802.11 transceiver paired with a single-chip computer running LINUX and Meraki’s mesh routing software – which was developed by Meraki’s founders while Ph.D. students at the Massachusetts Institute of Technology. The 802.11 radio within the Meraki Mini has been highly optimized for long-distance communication. Instead of the normal 50 meter radius associated with WiFi, the Meraki Mini provides coverage over at least 250 meters – and, depending upon topography, can reach 750 meters. Let me put that in context, by showing you the coverage I’ll get when I install a Meraki Mini on my sixth-floor balcony in Surry Hills:

[Coverage map: estimated Meraki Mini range from a sixth-floor balcony in Surry Hills]

From my flat, I will be able to reach all the way from Central Station to Riley Street, from Belvoir Street over to Albion Street. Thousands of people will be within range of my network access point. Of course, if all of them chose to use my single point of access, my Meraki Mini would be swamped with traffic. It simply wouldn’t be able to cope. But – given that the Meraki Mini is cheaper than most WiFi access points available at Harvey Norman – it’s likely that many people within that radius would install their own access points. These access points would detect each other’s presence, forming a self-organizing mesh network. If every WiFi access point visible from my flat (I can sense between 10 and 20 of them at any given time) were replaced with a Meraki Mini – or, perhaps more significantly, if these WiFi access points were given firmware upgrades which allowed them to interoperate with the mesh networks created by the Meraki Mini – my Surry Hills neighborhood would suddenly be blanketed in a highly resilient and wholly pervasive wireless high-speed network, at nearly no cost to the users of that network. In other words, this could all be done in software. The infrastructure is already deployed.
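
A back-of-envelope sketch puts those numbers in perspective. The radii are the ones quoted above; the per-user figure simply divides the nominal 50 Mbps among simultaneous users of a single access point, ignoring real-world radio overheads, so treat it as an illustration rather than a measurement:

```python
import math

def coverage_area_km2(radius_m):
    """Area of a circular coverage footprint, in square kilometres."""
    return math.pi * radius_m ** 2 / 1_000_000

# Ordinary WiFi (~50 m) versus the Meraki Mini's claimed 250-750 m reach.
for radius in (50, 250, 750):
    print(f"{radius:>4} m radius -> {coverage_area_km2(radius):.3f} km^2")

# Why a single 50 Mbps access point would be swamped if everyone used it:
for users in (10, 100, 1000):
    print(f"{users:>5} simultaneous users -> {50 / users:.2f} Mbps each")
```

A 250 meter radius covers roughly twenty-five times the area of a 50 meter radius – which is exactly why one node can reach thousands of people, and why those people need to become nodes themselves rather than clients of mine.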

As some of you have no doubt noted, this network is highly local; while there are high-speed connections within the wireless cloud, the mesh doesn’t necessarily have connections to the global Internet. Meraki Minis can act as gateways, routing packets through their Ethernet interfaces to the broader Internet, and Meraki recommends that at least every tenth device in a mesh be so equipped. But an Internet uplink isn’t strictly necessary, and – if the mesh is dedicated to a particular task – it can be dispensed with entirely. Let us say, for example, that I wanted to provide a low-cost IPTV service to the residents of Surry Hills. I could create a “head-end” in my own flat, and provide my “subscribers” with Meraki Minis and an inexpensive set-top box to interface with their televisions. For an install cost of perhaps $300 per subscriber, I could give everyone in Surry Hills a full IPTV service (though it’s unlikely I could provide HD quality). No wiring required, no high-speed broadband buildout, no billions of dollars, no regulatory relaxation. I could just do it. And collect both subscriber fees and advertiser revenues. No Telstra. No Group of Nine. No blessing from Senator Coonan. No go-over by the ACCC. The technology is all in place, today.
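
As a sanity check on that thought experiment, here is a rough bandwidth budget. The stream bitrates are assumptions (roughly 2 Mbps for a standard-definition stream, 8 Mbps for HD), and the halving of usable throughput per wireless hop is a common rule of thumb for single-radio mesh nodes rather than a measured figure for the Meraki Mini:

```python
LINK_MBPS = 50          # nominal Meraki Mini link speed
SD_STREAM_MBPS = 2.0    # assumed bitrate for a standard-definition stream
HD_STREAM_MBPS = 8.0    # assumed bitrate for an HD stream

def usable_mbps(link_mbps, hops):
    """Rule of thumb: each extra wireless hop roughly halves throughput on a
    single-radio mesh, since a node cannot transmit and receive at once."""
    return link_mbps / (2 ** (hops - 1))

for hops in (1, 2, 3):
    capacity = usable_mbps(LINK_MBPS, hops)
    print(f"{hops} hop(s): {capacity:5.1f} Mbps -> "
          f"{int(capacity // SD_STREAM_MBPS)} SD streams, "
          f"{int(capacity // HD_STREAM_MBPS)} HD streams")
```

Even three hops out from the head-end there is room for a handful of standard-definition streams, while HD is marginal almost immediately – which is why the service sketched above would be plausible, but probably not in high definition.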

Here’s a news report – almost a year old – which makes the point quite well:

I bring up this thought experiment to drive home my final point: Telstra isn’t needed. It might not even be wanted. We have so many other avenues open to us to create and deploy high-speed broadband services that it’s likely Telstra has just missed the boat. You’ve waited too long, dilly-dallying while the audience and the technology have made you obsolete. The audience doesn’t want the same few hundred channels they can get on FoxTel: they want the nearly endless stream of salience they can get from YouTube. The technology is no longer about centralized distribution networks: it favors light, flexible, inexpensive mesh networks. Both of these are long-term trends, and both will only grow more pronounced as the years pass. In the years it takes Telstra – or whoever gets the blessing of the regulators – to build out this high-speed broadband network, you will be fighting a rearguard action, as both the audience and the technology of the network race on past you. They have already passed you by, and it’s been my task this morning to point this out. You simply do not matter.

This doesn’t mean it’s game over. I don’t want you to report to Sol Trujillo that it’s time to have a quick fire-sale of Telstra’s assets. But it does mean you need to radically rethink your business – right now. In the age of pervasive peer-production, paired with the advent of cheap wireless mesh networks, your best option is to become a high-quality connection to the global Internet – in short, a commodity. All of this pervasive wireless networking will engender an incredible demand for bandwidth; the more people are connected together, the more they want to be connected together. That’s the one inarguable truth we can glean from 160 years of electric communication. Telstra has the infrastructure to leverage itself into becoming the most reliable data carrier connecting Australians to the global Internet. It isn’t glamorous, but it is a business with high barriers to entry, and it promises a steadily growing (if unexciting) continuing revenue stream. But if you continue to base your plans around selling Australians services we don’t want, you are building your castles on the sand. And the tide is rising.

Going into Syndication

I.

Content. Everyone makes it. Everyone consumes it. If content is king, we are each the means of production. Every email, every blog post, every text message, all of these constitute production of content. In the earliest days of the web this was recognized explicitly; without a lot of people producing a lot of content, the web simply wouldn’t have come into being. Somewhere toward the end of 1995, this production formalized, and we saw the emergence of a professional class of web producers. This professional class asserts its authority over the audience on the basis of two undeniable strengths: first, it cultivates expertise; second, it maintains control over the mechanisms of distribution. In the early years of the Web, both of these strengths presented formidable barriers to entry. As we emerged from the amateur era of “pages about kitty-cats” into the branded web era of CNN.com, NYT.com, and AOL.com, the swarm of Internet users naturally gravitated to the high-quality information delivered through professional web sites. The more elite (and snobbish) of the early netizens decried this colonization of the electronic space by the mainstream media; they preferred the anarchic, imprecise and democratic community of newsgroups to the imperial aegis of Big Media.

In retrospect, both sides got it wrong. Order did not simply replace anarchy; nor did attention permanently centralize around a suite of “portal” sites, though for at least a decade it seemed precisely this was happening. Nevertheless, the swarm has a way of consistently surprising us, of finding its way out of any box drawn up around it. If, for a period of time, it suited the swarm to cozy up to the old and familiar, this was probably due more to habit than to any deep desire. When thrust into the hyper-connected realm of the Web, our natural first reaction is to seek signposts, handholds against the onrush of so much that clamors about its own significance. In cyberspace you can implicitly trust the BBC, but when it comes to The Smoking Gun or Disinformation, that trust must be earned. Still, once that trust has been won, there is no going back. This is the essence of the process of media fragmentation. The engine that drives fragmentation is not increasing competition; it is increasing familiarization with the opportunities on offer.

We become familiar with online resources through “the Three Fs”: we find things, we filter them, we forward them along. Social networks evolve the media consumption patterns which suit them best; these patterns are often not highly correlated with the content available from mainstream outlets. Over time, social networks tend to favor the obscure over the quotidian, as the obscure is the realm of the cognoscenti. This means that fragmentation is both inevitable and bound to accelerate.

Fragmentation spreads the burden of expertise onto a swarm of nanoexperts. Anyone who is passionate, intelligent, and willing to make the attention investment to master the arcana of a particular area of inquiry can transform themselves into a nanoexpert. When a nanoexpert plugs into a social network that values this expertise (or is driven toward nanoexpertise in order to raise their standing within an existing social network), this investment is rewarded, and the connection between nanoexpert and network is strongly reinforced. The nanoexpert becomes “structurally coupled” with the social network – for as long as they maintain that expertise against all competitors. This transformation is happening countless times each day, across the entire taxonomy of human expertise. This is the engine which has deprived the mainstream media of their position of authority.

While the net gave every appearance of centralization, it never allowed for a monopoly on distribution. That house was always built on sand. The bastion of expertise took longer to disintegrate. Yet it has, buffeted by wave after wave of nanoexperts. With the rise of the nanoexpert, mainstream media have lost all of their “natural” advantages, yet they still have considerable economic, political and popular clout. We must examine how they could translate this evanescent power into something which can survive the transition to a world of nanoexperts.

II.

While expertise has become a diffuse quality, located throughout the cloud of networked intelligence, the search for information has remained essentially unchanged for the past decade. Nearly everyone goes to Google (or a Google equivalent) as a first stop on a search for information. Google uses swarm intelligence to determine the “trust” value of an information source: the most “trusted” sites show up as the top hits, courtesy of Google’s PageRank algorithm. Thus, even though knowledge and understanding have become more widespread, the path toward them grows ever more concentrated. I still go to the New York Times for international news reporting, and the Sydney Morning Herald for local news. Why? These sources are familiar to me. I know what I’m going to get. That means a lot, because as the number of possible sources reaches toward infinity, I haven’t the time or the inclination to search out every possible source for news. I have come to trust the brand. In an era of infinite choice, a brand commands attention. Yet brands are being constantly eroded by the rise of the nanoexpert; the nanoexpert is persuaded by their own sensibility, not subject to the lure of a well-known brand. Although the brand may represent a powerful presence in the contemporary media environment, there is very little reason to believe this will be true a decade or even five years hence.
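
The mechanism is worth seeing in miniature. Here is a minimal sketch of the idea behind PageRank – trust accrues to a page in proportion to the trust of the pages linking to it – computed by simple power iteration over a toy link graph. The sites and links are invented for illustration, and real search ranking layers far more on top of this:

```python
# Toy link graph: who links to whom.
links = {
    "bbc.co.uk":        ["smokinggun.com", "nyt.com"],
    "nyt.com":          ["bbc.co.uk"],
    "smokinggun.com":   ["nyt.com", "bbc.co.uk"],
    "tinyblog.example": ["bbc.co.uk"],
}

DAMPING = 0.85
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):                        # power iteration until ranks settle
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = DAMPING * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share      # each page passes its trust along its links
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page:18} {score:.3f}")
```

The heavily linked sites end up on top, the unlinked blog at the bottom – which is how diffuse, swarm-generated judgments get funnelled back through a single, concentrated gateway.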

For this reason, branded media entities need to make an accommodation with the army of nanoexperts. They have no choice but to sue for peace. If these warring parties had nothing to offer one another, this would be a pointless enterprise. But each side has something impressive to offer up in a truce: the branded entities have readers, and the nanoexperts are constantly finding, filtering and forwarding things to be read. This would seem to be a perfect match, but for one paramount issue: editorial control. A branded media outlet asserts (with reason) that the editorial controls developed over a period of years (or, in the case of the Sydney Morning Herald, centuries) form the basis of a trust relationship with its audience. To disrupt or abandon those controls might do more than dilute the brand – it could quickly destroy it. No matter how authoritative a nanoexpert might be, all nanoexpert contributions represent an assault upon editorial control, because these works have been created outside of the systems of creative production which ensure a consistent, branded product. This is the major obstacle that must be overcome before nanoexperts and branded media can work together harmoniously.

If branded media refuse to accept the ascendancy of nanoexperts, they will find themselves entirely eroded by them. This argument represents the “nuclear option”, the put-the-fear-of-God-in-you representation of facts. It might seem completely reasonable to a nanoexpert, but it appears entirely suspect to the branded media, who see only increasing commercial concentration, not disintegration. For the most part, nanoexperts function outside systems of commerce; their currency is social standing. Nanoexpert economies of value are invisible to commercial entities; but that does not mean they don’t exist. If we convert to a currency of attention – again, considered highly suspect by branded media – we can represent the situation even more clearly: more and more of the audience’s attention is absorbed by nanoexpert content. (This is particularly true of audiences under 25 years old, who have grown to maturity in the era of the Web.)

The point cannot be made more plainly, nor would it do any good to soften the blow: this transition to nanoexpertise is inexorable – it is the ad-hoc behavior of the swarm of Internet users. There’s only one question of any relevance: can this ad-hoc behavior be formalized? Can the systems of production of the branded media adapt themselves to an era of “peer production” by an army of nanoexperts? If branded media refuse to formalize these systems of peer production, the peer production communities will do so – and, in fact, many already have. Sites such as Slashdot, Boing Boing, and Federated Media Publishing have grown up around the idea that the nanoexpert community has more to offer microaudiences than any branded media outlet. Each of these sites gets millions of visitors, and while they may not match the hundreds of millions of visitors to the major media portals, what they lack in volume they make up for in multiplicity; these are successful models, and they are being copied. The systems which support them are being replicated. The means of fragmentation are multiplying beyond any possibility of control.

III.

A branded media outlet can be thought of as a network of contributors, editors and publishers, organized around the goal of gaining and maintaining audience attention. The first step toward an incorporation of peer production into this network is simply to open the gates of contribution to the army of nanoexperts. However, just because the gates to the city are open does not mean the army will wander in. They must be drawn in, seduced by something on offer. As commercial entities, branded media can offer to translate the coin of attention into real currency. This is already their function, so they will need to make no change to their business models to accommodate this new set of production relationships.

In the era of networks, joining one network to another is as simple as establishing the appropriate connections and reinforcing them through an exchange of value which weights each connection appropriately. Content flows into the brand, while currency flows toward the nanoexperts. This transition is simple enough, once editorial concerns have been satisfied. The issues of editorial control are not trivial, nor should they be sublimated in the search for business opportunities; businesses have built their brands around an editorial voice, and should seek only to associate with those nanoexperts who understand and are responsive to that voice. Both sides will need to be flexible; the editorial voice must become broader without disintegrating into a common yowl, while the nanoexperts must put aside the preciousness which they have cultivated in pursuit of their expertise. Both parties surrender something they consider innate in order to benefit from the new arrangement: that’s the real nature of this truce. It may be that some are unwilling to accommodate this new state of affairs: for the branded media, it means death by a thousand cuts; for the nanoexpert, it means remaining confined to communities where they have immense status, but little else to show for it. In both cases, they will face competition from these hybrid entities, and against them neither group can hope to triumph. After a settling-out period, these hybrid beasts, drawing their DNA from the best of both worlds, will own the day.

What does this hybrid organization deliver? At the moment, branded media deliver a broad range of content to a broad audience, while nanoexperts deliver highly focused content to millions of microaudiences. How do these two pieces fit together? One of the “natural” advantages of branded media organizations springs from a decades-long investment in IT infrastructure, which has historically been used to distribute information to mass audiences. Yet, surprisingly, branded media organizations know very little about the individual members of their audience. This is precisely the inverse of the situation with the nanoexpert, who knows an enormous amount about the needs and tastes of the microaudience – that is, the social networks served by their expertise. Thus, there needs to be another form of information exchange between the branded media and the nanoexpert; it isn’t just the content which needs to be syndicated through the branded outlet, but the microaudiences themselves. This is not audience aggregation, but rather, an exploration in depth of the needs of each particular audience member. From this, working in concert, the army of nanoexperts and the branded media outlet can develop tools to deliver depth content to each audience member.

This methodology favors process over product; the relationship between nanoexpert, branded media, and audience must necessarily co-evolve, working toward a harmony in which each provides depth information in order to improve the capabilities of the whole. (This is the essence of a network.) Audience members will assume an active role in the creation of a “feed” which serves them alone, and, in this sense, each audience member is a nanoexpert – expert in their own tastes.
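
A sketch of what “expert in their own tastes” might look like in practice: each audience member carries an interest profile, each nanoexpert contribution carries topic tags, and the feed is simply the best-scoring overlap between the two. Every name, tag, and weight here is hypothetical – a minimal illustration of the matching, not anyone’s actual system:

```python
# Hypothetical nanoexpert contributions, each tagged by topic.
items = [
    {"title": "Mesh networking on a budget",  "tags": {"wireless", "diy", "networks"}},
    {"title": "MPs' expenses, line by line",  "tags": {"politics", "data", "journalism"}},
    {"title": "Surry Hills cafe guide",       "tags": {"food", "sydney"}},
]

# A single audience member's interest profile, weighted by attention.
profile = {"networks": 3.0, "sydney": 2.0, "politics": 1.0}

def score(item, profile):
    """Sum the member's weights for every tag the item carries."""
    return sum(profile.get(tag, 0.0) for tag in item["tags"])

def personal_feed(items, profile):
    """Order contributions by their fit with this one member's tastes."""
    ranked = sorted(items, key=lambda item: score(item, profile), reverse=True)
    return [item["title"] for item in ranked if score(item, profile) > 0]

print(personal_feed(items, profile))
# ['Mesh networking on a budget', 'Surry Hills cafe guide', "MPs' expenses, line by line"]
```

The interesting part is where the profile comes from: in the arrangement described above, it is assembled jointly – the nanoexpert supplies the tags, the branded outlet supplies the delivery, and the audience member supplies, knowingly or not, the weights.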

Such a system, when put into operation, makes it both possible and relatively easy to deliver commercial information of such a highly meaningful nature that it can no longer be called “advertising” in any classic sense of the word; rather, it becomes a stream of “opportunities.” These might include job offers, or investment opportunities, or experiences (travel & education), or the presentation of products. This is Google’s AdWords refined to the utmost degree, and it can only exist if all three parties to this venture – nanoexpert, branded media, and audience member – have fully invested the network with information that helps the network refine and deliver just what’s needed, just when it’s wanted. The revenue generated by a successful integration of commerce with this new model of syndication will more than fuel its efforts.

When successfully implemented, such a methodology would produce an enviable, and likely unassailable, financial model, because we’re no longer talking about “reaching an audience”; instead, this hybrid media business is involved in millions of individual conversations, each of which evolves toward its own perfection. Individuals embedded in this network – at any point in this network – would find it difficult to leave it, or even to resist it. This is more than the daily news, better than the best newspaper or magazine ever published; it is individual, and personal, yet networked and global. This is the emerging model for factual publishing.