Understanding Gilmore’s Law: Telecoms Edition

OR,
How I Quit Worrying and Learned to be a Commodity

Introduction


“The net interprets censorship as damage and routes around it.”
– John Gilmore

I read a very interesting article last week. It turns out that, despite its best efforts, the Communist government of the People’s Republic of China has failed to insulate its prodigious population from the outrageous truths to be found online. In the article from the Times, Wang Guoqing, a vice-minister in the information office of the Chinese cabinet, was quoted as saying, “It has been repeatedly proved that information blocking is like walking into a dead end.” If China, with all of the resources of a one-party state – able to “lock down” its internet service providers and direct their IP traffic through a “great firewall of China” – cannot block the free flow of information, how can any government, anywhere – or any organization, or institution – hope to?

Of course, we all chuckle a little bit when we see the Chinese attempt the Sisyphean task of damming the torrent of information which characterizes life in the 21st century. We, in the democratic West, know better, and pat ourselves on the back. But we are in no position to throw stones. Gilmore’s Law is not specifically tuned for political censorship; censorship simply means the willful withholding of information – for any reason. China does it for political reasons; in the West our reasons for censorship are primarily economic. Take, for example, the hullabaloo associated with the online release of Harry Potter and the Deathly Hallows, three days before its simultaneous, world-wide publication. It turns out that someone, somewhere, got a copy of the book, laboriously photographed every single page of the 784-page text, bound these images together into a single PDF file, and then uploaded it to the global peer-to-peer filesharing networks. Everyone with a vested financial interest in the book – author J.K. Rowling, Bloomsbury and Scholastic publishing houses, film studio Warner Brothers – had been feeding the hype for the impending release, all focused around the 21st of July. An enormous pressure had been built up to “peek at the present” before it was formally unwrapped, and all it took was one single gap in the $20 million security system Bloomsbury had constructed to keep the text secure. Then it became a globally distributed media artifact. Curiously, Bloomsbury was reported as saying they thought it would only add to sales – if many people are reading the book now, even illegally, then even more people will want to be reading the book right now. Piracy, in this case, might be a good thing.

These two examples represent two data points which show the breadth and reach of Gilmore’s Law. Censorship, broadly defined, is anything which restricts the free flow of information. The barriers could be political, or they could be economic, or – as in the case immediately relevant today – they could be a nexus of the two. Broadband in Australia is neither purely an economic nor purely a political issue. In this, broadband reflects the Janus-like nature of Telstra, with one face turned outward, toward the markets, and another turned inward, toward the Federal Government. Even though Telstra is now (more or less) wholly privatized, the institutional memory of all those years as an arm of the Federal Government hasn’t yet faded. Telstra still behaves as though it has a political mandate, and is more than willing to use its near-monopoly economic strength to reinforce that impression.

Although seemingly unavoidable, given the established patterns of the organization, Telstra’s behavior has consequences. Telstra has engendered enormous resentment – both from its competitors and its customers – for its actions and attitude. They’ve recently pushed the Government too far (at least, publicly), and have been told to back off. What may not be as clear – and what I want to warn you of today – is how Telstra has sown the seeds of its own failure. What’s more, this may not be anything that Telstra can now avoid, because this is neither a regulatory nor an economic failure. It cannot be remedied by any mechanism that Telstra has access to. Instead, it may require a top-down rethinking of the entire business.

I: Network Effects

For the past several thousand years, the fishermen of Kerala, on the southern coast of India, have sailed their dhows out into the Indian Ocean, lowered their nets, and hoped for the best. When the fishing is good, they come back to shore fully laden, and ready to sell their catch in the little fish markets that dot the coastline. A fisherman might have a favorite market, docking there only to find that half a dozen other dhows have had the same idea. In that market there are too many fish for sale that day, and the fisherman might not even earn enough from his catch to cover costs. Meanwhile, in a market just a few kilometers away, no fishing boats have docked, and there’s no fish available at any price. This fundamental chaos of the fish trade in Kerala has been a fact of life for a very long time.

Just a few years ago, several of India’s rapidly-growing wireless carriers strung GSM towers along the Kerala coast. This gives those carriers a signal reach of up to about 25km offshore – enough to be very useful for a fisherman. While mobile service in India is almost ridiculously cheap by Australian standards – many carriers charge a penny for an SMS, and a penny or two per minute for voice calls – a handset is still relatively expensive, even one such as the Nokia 1100, which was marketed specifically at emerging mobile markets, designed to be cheap and durable. Such a handset might cost a month’s profits for a fisherman – which makes it a serious investment. But, at some point in the last few years, one fisherman – probably a more prosperous one – bought a handset, and took it to sea. Then, perhaps quite accidentally, he learned, through a call ashore, of a market wanting for fish that day, brought his dhow to dock there, and made a handsome profit. After that, the word got around rapidly, and soon all of Kerala’s fishermen were sporting their own GSM handsets, calling into shore, making deals with fishmongers, acting as their own arbitrageurs, creating a true market where none had existed before. Today in Kerala the markets are almost always stocked with just enough fish; the fishmongers make a good price for their fish, and the fishermen themselves earn enough to fully recoup the cost of their handsets in just two months. Mobile service in Kerala has dramatically altered the economic prospects for these people.

This is not the only example: in Kenya farmers call ahead to the markets to learn which ones will have the best prices for their onions and maize; spice traders, again in Kerala, use SMS to create their own, far-flung bourse. Although we in the West generally associate mobile communications with affluent lifestyles, a significant number of microfinance loans made by Grameen Bank in Bangladesh, and others in Pakistan, India, Africa and South America are used to purchase mobile handsets – precisely because the correlation between access to mobile communications and earning potential has become so visible in the developing world. Grameen Bank has even started its own carrier, GrameenPhone, to service its microfinance clientele.

Although economists are beginning to recognize and document this curious relationship between economics and access to communication, it needs to be noted that this relationship was not predicted – by anyone. It happened all by itself, emerging from the interaction of individuals and the network. People – who are always the intelligent actors in the network – simply recognized the capabilities of the network, and put them to work. As we approach the watershed month of October 2007, when three billion people will be using mobile handsets, when half of humanity will be interconnected, we can expect more of the unexpected.

All of this means that none of us – even the most foresighted futurist – can know in advance what will happen when people are connected together in an electronic network. People themselves are too resourceful, and too intelligent, to model their behavior in any realistic way. We might be able to model their network usage – though even that has confounded the experts – but we can’t know why they’re using the network, nor what kind of second-order effects that usage will have on culture. Nor can we realistically provision for service offerings; people are more intelligent, and more useful, than any other service the carriers could hope to offer. The only truly successful service offering in mobile communications is SMS – because it provides an asynchronous communications channel between people. The essential feature of the network is simply that it connects people together, not that it connects them to services.

This strikes at the heart of the most avaricious aspects of the carriers’ long-term plans, which center around increasing the levels of services on offer, by the carrier, to the users of the network. Although this strategy has consistently proven to be a complete failure – consider Compuserve, Prodigy and AOL – it nevertheless has become the idée fixe of shareholder reports, corporate plans, and press releases. The network, we are told, will become increasingly more intelligent, more useful, and more valuable. But all of the history of the network argues directly against this. Nearly 40 years after its invention, the most successful service on the Internet is still electronic mail, the Internet’s own version of SMS. Although the Web has become an important service in its own right, it will never be as important as electronic mail, because electronic mail connects individuals to one another.

Although the network in Kerala was brought into being by the technology of GSM transponders and mobile handsets, the intelligence of the network truly does lie in the individuals who are connected by the network. Let’s run a little thought experiment, and imagine a world where all of India’s telecoms firms suffered a simultaneous catastrophic and long-lasting failure. (Perhaps they all went bankrupt.) Do you suppose that the fishermen would simply shrug their shoulders and go back to their old, chaotic market-making strategies? Hardly. Whether they used smoke signals, or semaphores, or mirrors on the seashore, they’d find some way to maintain those networks of communication – even in the absence of the technology of the network. The benefits of the network so far outweigh the cost of implementing it that, once created, networks cannot be destroyed. The network will be rebuilt from whatever technology comes to hand – because the network is not the technology, but the individuals connected through it.

This is the kind of bold assertion that could get me into a lot of trouble; after all, everyone knows that the network is the towers, the routers, and the handsets which comprise its physical and logical layers. But if that were true, then we could deterministically predict the qualities and uses of networks well in advance of their deployment. The quintessence of the network is not a physical property; it is an emergent property of the interaction of the network’s users. And while people do persistently believe that there is some “magic” in the network, the source of that magic is the endlessly inventive intellects of the network’s users. When someone – anywhere in the network – invents a new use for the network, it propagates widely, and almost instantaneously, transmitted throughout the length and breadth of the network. The network amplifies the reach of its users, but it does not goad them into being inventive. The real service providers are the users of the network.

I hope this gives everyone here some pause; after all, it is widely known that the promise to bring a high-speed broadband network to Australia is paired with the desire to provide services on that network, including – most importantly – IPTV. It’s time to take a look at that promise with our new understanding of the real power of networks. It is under threat from two directions: the emergence of peer-produced content; and the dramatic, disruptive collapse in the price of high-speed wide-area networking, which will fully empower individuals to create their own network infrastructure.

II: DIYnet

Although nearly all high-speed broadband providers – which are, by and large, monopoly or formerly monopoly telcos – have bet the house on the sale of high-priced services to finance the build-out of high-speed (ADSL2/FTTN/FTTH) network infrastructure, it is not at all clear that these service offerings will be successful. Mobile carriers earn some revenue from ringtone and game sales, but this is a trivial income stream when compared to the fees they earn from carriage. Despite almost a decade of efforts to milk more ARPU from their customers, those same customers have proven stubbornly resistant to a continuous fleecing. The only thing that customers seem obviously willing to pay for is more connectivity – whether that’s more voice calls, more SMS, or more data.

What is most interesting is what these customers have done with this ever-increasing level of interconnectivity. These formerly passive consumers of entertainment have become their own media producers, and – perhaps more ominously, in this context – their own broadcasters. Anyone with a cheap webcam (or mobile handset), a cheap computer, and a broadband link can make and share their own videos. This trend had been growing for several years, but since the launch of YouTube, in 2005, it has rocketed into prominence. YouTube is now the 4th busiest website, world-wide, and perhaps 65% of all video downloads on the web take place through Google-owned properties. Amateur productions regularly garner tens of thousands of viewers – and sometimes millions.

We need to be very careful about how we judge the meaning of the word “amateur” in the context of peer-produced media. An amateur production may be produced with little or no funding, but that does not automatically mean it will appear clumsy to the audience. The rough edges of an amateur production are balanced out by a corresponding increase in salience – that is, the importance which the viewer attaches to the subject of the media. If something is compelling because it is important to us – something which we care passionately about – high production values do not enter into our assessment. Chad Hurley, one of the founders of YouTube, has remarked that the site has no “gold-standard” for production; in fact, YouTube’s gold-standard is salience – if the YouTube audience feels the work is important, audience members will share it within their own communities of interest. Sharing is the proof of salience.

After two years of media sharing, the audience for YouTube (which is now coincident with the global television audience in the developed world) has grown accustomed to being able to share salient media freely. This is another of the unexpected and unpredicted emergent effects of the intelligence of humans using the network. We now have an expectation that when we encounter some media we find highly salient, we should be able to forward it along within our social networks, sharing it within our communities of salience. But this is not the desire of many copyright holders, who collect their revenues by placing barriers to the access of media. This fundamental conflict, between the desire to share, as engendered by our own interactions with the network, and the desire of copyright holders to restrain media consumption to economic channels has, thus far, been consistently resolved in favor of sharing. The copyright holders have tried to use the legal system as a bludgeon to change the behavior of the audience; this has not worked, nor will it ever. But, as the copyright holders resort to ever-more-draconian techniques to maintain control over the distribution of their works, the audience is presented with an ever-growing world of works that are meant to be shared. The danger here is that the audience is beginning to ignore works which they cannot share freely, seeing them as “broken” in some fundamental way. Since sharing has now become an essential quality of media, the audience is simply reacting to a perceived defect in those works. In this sense, the media multinationals have been their own worst enemies; by restricting the ability of audiences to share the works they control, they have helped to turn audiences toward works which audiences can distribute through their own “do-it-yourself” networks.

These DIYnets are now a permanent fixture of the media landscape, even as their forms evolve through YouTube playlists, RSS feeds, and sharing sites such as Facebook and Pownce. These networks exist entirely outside the regular and licensed channels of distribution; they are not suitable – legally or economically – for distribution via a commercial IPTV network. Telstra cannot provide these DIYnets to their customers through its IPTV service – nor can any other broadband carrier. IPTV, to a carrier, means the distribution of a few hundred highly regularized television channels. While there will doubtless be a continuing market for mass entertainment, that audience is continuously being eroded by a growing range of peer-produced programming of ever-increasing salience. In the long-term this, like so much in the world, will probably obey an 80/20 rule, with about 80 percent of the audience’s attention absorbed in peer-produced, highly-salient media, while 20 percent will come from mass-market, high-production-value works. It doesn’t make a lot of sense to bet the house on a service offering which will command such a small portion of the audience’s attention. Yes, Telstra will offer it. But it will never be able to compete with the productions created by the audience.

Because of this tension between the desires of the carrier and the interests of the audience, the carrier will seek to manipulate the capabilities of the broadband offering, to weight it in favor of a highly regularized IPTV offering. In the United States this has become known as the “net neutrality” argument, and centers on the question of whether a carrier has the right to shape traffic within its own IP network to advantage its own traffic over that of others. In Australia, the argument has focused on tariff rates: Telstra believes that if they build the network, they should be able to set the tariff. The ACCC argues otherwise. This has been characterized as the central stumbling block which has prevented the deployment of a high-speed broadband network across the nation, and, in some sense, that is entirely true – Telstra has chosen not to move forward until it feels assured that both economic and regulatory conditions prove favorable. But this does not mean that the consumer demand for a high-speed network was simply put on pause over the last few years. More significantly, the world beyond Telstra has not stopped advancing. While it now costs roughly USD $750 per household to provide a high-speed fiber-optic connection to the carrier network, other technologies are coming on-line, right now, which promise to reduce those costs by an order of magnitude, and furthermore, which don’t require any infrastructure build-out on the part of the carrier. This disruptive innovation could change the game completely.

III: Check, Mate

All parties to the high-speed broadband dispute – government, Telstra, the Group of Nine, and the public – share the belief that this network must be built by a large organization, able to command the billions of dollars in capital required to dig up the streets, lay the fiber, and run the enormous data centers. This model of a network is a reflection, in copper, plastic and silicon, of the hierarchical forms of organization which characterize large institutions – such as governments and carriers. However, if we have learned anything about the emergent qualities of networks, it is that they quickly replace hierarchies with “netocracies”: horizontal meritocracies, which use the connective power of the network to out-compete slower, more rigid hierarchies. It is odd that, while the network has transformed nearly everything it has touched, the purveyors of those networks – the carriers – somehow seem immune from those transformative qualities. Telecommunications firms are – and have ever been – the very definition of hierarchical organizations. During the era of plain-old telephone service, the organizational form of the carrier was isomorphic to the form of the network. However, over the last decade, as the internal network has transitioned from circuit-switched to packet-switched, the institution lost synchronization with the form of the network it provided to consumers. As each day passes, carriers move even further out of sync: this helps to explain the current disconnect between Telstra and Australians.

We are about to see an adjustment. First, the data on the network was broken into packets; now, the hardware of the network has followed. Telephone networks were centralized because they required explicit wiring from point-to-point; cellular networks are decentralized, but use licensed spectrum – which requires enormous capital resources. Both of these conditions created significant barriers to entry. But there is no need to use wires, nor is there any need to use licensed spectrum. The 2.4 GHz radio band is freely available for anyone to use, so long as that use stays below certain power limits. We now see a plethora of devices using that spectrum: cordless handsets, Bluetooth devices, and the all-but-ubiquitous 802.11 “WiFi” data networks. The chaos which broadcasters and governments had always claimed would be the by-product of unlicensed spectrum has, instead, become a wonderfully rich marketplace of products and services. The first generation of these products made connection to the centralized network even easier: cordless handsets liberated the telephone from the twisted-pair connection to the central office, while WiFi freed computers from heavy and clumsy RJ-45 jacks and CAT-5 cabling. While these devices had some intelligence, that intelligence centered on making and maintaining a connection to the centralized network.

Recently, advances in software have produced a new class of devices which create their own networks. Devices connected to these ad-hoc “mesh” networks act as peers in a swarm (similar to the participants in peer-to-peer filesharing), rather than clients within a hierarchical distribution system. These network peers share information about their evolving topology, forming a highly resilient fabric of connections. Devices maintain multiple connections to multiple nodes throughout the network, and a packet travels through the mesh along a non-deterministic path. While this was always the promise of TCP/IP networks, static routes through the network cloud are now the rule, because they provide greater efficiency, make it easier to maintain the routers and diagnose network problems, and keep maintenance costs down. But mesh networks are decentralized; there is no controlling authority, no central router providing an interconnection with a peer network. And – most significantly – mesh networks are now incredibly inexpensive to implement.
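The routing principle behind such a mesh can be sketched in a few lines of Python. This is a toy illustration, not Meraki’s actual protocol: the node names and topology are invented, and real mesh routing weighs link quality rather than just hop count. The point is simply that when every peer keeps redundant links, a path can always be found around any single failure.

```python
from collections import deque

def find_path(links, src, dst):
    """Breadth-first search over the current mesh topology.
    Returns one shortest path, or None if the mesh is partitioned."""
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for peer in links.get(path[-1], ()):
            if peer not in seen:
                seen.add(peer)
                frontier.append(path + [peer])
    return None

# A toy five-node mesh: every node keeps several redundant links.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "E"},
    "D": {"B", "E"},
    "E": {"C", "D"},
}

print(find_path(mesh, "A", "E"))  # prints ['A', 'C', 'E']

# Knock out node C; the mesh routes around the damage.
dead = "C"
degraded = {n: peers - {dead} for n, peers in mesh.items() if n != dead}
print(find_path(degraded, "A", "E"))  # prints ['A', 'B', 'D', 'E']
```

Remove any single node and the swarm still delivers the packet – which is exactly the property that lets a neighborhood mesh survive without a central router.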

Earlier this year, the US-based firm Meraki launched their long-awaited Meraki Mini wireless mesh router. For about AUD $60, plus the cost of electricity, anyone can become a peer within a wireless mesh network providing speeds of up to 50 megabits per second. The device is deceptively simple; it’s just an 802.11 transceiver paired with a single-chip computer running Linux and Meraki’s mesh routing software – which was developed by Meraki’s founders while Ph.D. students at the Massachusetts Institute of Technology. The 802.11 radio within the Meraki Mini has been highly optimized for long-distance communication. Instead of the normal 50 meter radius associated with WiFi, the Meraki Mini provides coverage over at least 250 meters – and, depending upon topography, can reach 750 meters. Let me put that in context by showing you the coverage I’ll get when I install a Meraki Mini on my sixth-floor balcony in Surry Hills:



From my flat, I will be able to reach all the way from Central Station to Riley Street, from Belvoir Street over to Albion Street. Thousands of people will be within range of my network access point. Of course, if all of them chose to use my single point of access, my Meraki Mini would be swamped with traffic. It simply wouldn’t be able to cope. But – given that the Meraki Mini is cheaper than most WiFi access points available at Harvey Norman – it’s likely that many people within that radius would install their own access points. These access points would detect each other’s presence, forming a self-organizing mesh network. If every WiFi access point visible from my flat (I can sense between 10 and 20 of them at any given time) were replaced with a Meraki Mini – or, perhaps more significantly, if these WiFi access points were given firmware upgrades which allowed them to interoperate with the mesh networks created by the Meraki Mini – my Surry Hills neighborhood would suddenly be blanketed in a highly resilient and wholly pervasive wireless high-speed network, at nearly no cost to the users of that network. In other words, this could all be done in software. The infrastructure is already deployed.
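To make the “self-organizing” claim concrete, here is another toy model. The coordinates are invented, and the 250-metre figure is the conservative end of the range quoted above: scatter a handful of access points across a neighborhood, link every pair within radio range of each other, and check whether the result knits itself into a single connected mesh.

```python
import math

RANGE_M = 250  # assumed long-range 802.11 reach, per the Meraki figures above

# Hypothetical access-point positions in metres (not real Surry Hills data).
points = {
    "mine":    (0, 0),
    "corner":  (200, 100),
    "pub":     (400, 150),
    "school":  (420, 380),
    "station": (650, 400),
}

def in_range(a, b):
    """Two access points can peer if they are within radio range."""
    return math.dist(points[a], points[b]) <= RANGE_M

# Link every pair of access points that can hear each other.
links = {n: {m for m in points if m != n and in_range(n, m)} for n in points}

def connected(links, start):
    """True if every access point is reachable from `start`."""
    seen, stack = {start}, [start]
    while stack:
        for peer in links[stack.pop()]:
            if peer not in seen:
                seen.add(peer)
                stack.append(peer)
    return len(seen) == len(links)

print(connected(links, "mine"))  # prints True
```

No node here can hear more than its immediate neighbors – my flat is far out of range of the station – yet the chain of overlapping 250-metre circles joins the whole neighborhood into one network, which is all “self-organizing” needs to mean.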

As some of you have no doubt noted, this network is highly local; while there are high-speed connections within the wireless cloud, the mesh doesn’t necessarily have connections to the global Internet. In fact, Meraki Minis can act as routers to the Internet, routing packets through their Ethernet interfaces to the broader Internet, and Meraki recommends that at least every tenth device in a mesh be so equipped. But it’s not strictly necessary, and – if dedicated to a particular task – completely unnecessary. Let us say, for example, that I wanted to provide a low-cost IPTV service to the residents of Surry Hills. I could create a “head-end” in my own flat, and provide my “subscribers” with Meraki Minis and an inexpensive set-top-box to interface with their televisions. For a total install cost of perhaps $300, I could give everyone in Surry Hills a full IPTV service (though it’s unlikely I could provide HD-quality). No wiring required, no high-speed broadband buildout, no billions of dollars, no regulatory relaxation. I could just do it. And collect both subscriber fees and advertiser revenues. No Telstra. No Group of Nine. No blessing from Senator Coonan. No go-over by the ACCC. The technology is all in place, today.


I bring up this thought experiment to drive home my final point: Telstra isn’t needed. It might not even be wanted. We have so many other avenues open to us to create and deploy high-speed broadband services that it’s likely Telstra has just missed the boat. You’ve waited too long, dilly-dallying while the audience and the technology have made you obsolete. The audience doesn’t want the same few hundred channels they can get on FoxTel: they want the nearly endless stream of salience they can get from YouTube. The technology is no longer about centralized distribution networks: it favors light, flexible, inexpensive mesh networks. Both of these are long-term trends, and both will only grow more pronounced as the years pass. In the years it takes Telstra – or whoever gets the blessing of the regulators – to build out this high-speed broadband network, you will be fighting a rearguard action, as both the audience and the technology of the network race on past you. They have already passed you by, and it’s been my task this morning to point this out. You simply do not matter.

This doesn’t mean it’s game over. I don’t want you to report to Sol Trujillo that it’s time to have a quick fire-sale of Telstra’s assets. But it does mean you need to radically rethink your business – right now. In the age of pervasive peer-production, paired with the advent of cheap wireless mesh networks, your best option is to become a high-quality connection to the global Internet – in short, a commodity. All of this pervasive wireless networking will engender an incredible demand for bandwidth; the more people are connected together, the more they want to be connected together. That’s the one inarguable truth we can glean from the 160 years of electric communication. Telstra has the infrastructure to leverage itself into becoming the most reliable data carrier connecting Australians to the global Internet. It isn’t glamorous, but it is a business with high barriers to entry, and it promises a steadily growing (if unexciting) continuing revenue stream. But, if you continue to base your plans around selling Australians services we don’t want, you are building your castles on the sand. And the tide is rising.

YouTube’s New Interface: Appetite for Destruction?

For those of you who may have noticed the new embedded player that YouTube has been gradually rolling out over the last two weeks — have you noticed anything missing? Something so vital, so essential to what YouTube is, that you’ve searched around their new interface looking for it?

I’m talking about the “Share” button. The one feature that made YouTube a star. The one, absolutely indispensable, must-have interface affordance.

And now, as near as I can tell, it seems to be gone. Sure, you can embed, and you can get the URL and copy it to the clipboard, but none of that – none of it – has any of the power of just tapping on the “Share” button, selecting a few friends, and forwarding the video along.

It’s as if – I can only imagine – they have purposely set out to destroy their business. The entire reason that YouTube exists is because they made it so incredibly easy to find, filter and forward videos along from friend to friend. Anything, however small – and this isn’t small, it’s huge – that gets in the way of sharing will slow YouTube’s growth. Perhaps even ruin it.

This can’t be happening. They can’t have missed this. It’s too important, too significant, too central to what YouTube is.

And yet, it’s gone. No more sharing.

Rearranging the Deck Chairs

I. Everything Must Go!

It’s merger season in Australia. Everything must go! Just moments after the new media ownership rules received the Governor-General’s royal assent, James Packer sold off his family’s crown jewel, the NINE NETWORK – consistently Australia’s highest-rated television broadcaster since its inception, fifty years ago – along with a basket of other media properties. This sale effectively doubled his already sizeable fortune (now hovering at close to 8 billion Australian dollars) and gave him plenty of cash to pursue the 21st-century’s real cash cow: gambling. In an era when all media is more-or-less instantaneously accessible, anywhere, from anyone, the value of a media distribution empire is rapidly approaching zero, built as it is on the toppling pillars of government regulation of the airwaves and a cheap stream of high-quality American television programming. Yes, audiences might still tune in to watch the footy – live broadcasting being uniquely exempt from the pressures of the economics of the network – but even there the number of distribution choices is growing, with cable, satellite and IPTV all demanding a slice of the audience. Television isn’t dying, but it no longer guarantees returns. Time for Packer to turn his attention to the emerging commodity of the third millennium: experience. You can’t download experience: you can only live through it. For those who find the dopamine hit of a well-placed wager the experiential sine qua non, there Packer will be, Asia’s croupier, ready to collect his winnings. Who can blame him? He (and, undoubtedly, his well-paid advisors) has read the trend lines correctly: the mainstream media is dying, slowly starved of attention.

The transformation which led to the sale of NINE NETWORK is epochal, yet almost entirely subterranean. It isn’t as though everyone suddenly switched off the telly in favor of YouTube. It looks more like death from a thousand cuts: DVDs, video games, iPods, and YouTube have all steered eyeballs away from the broadcast spectrum toward something both entirely digital and (for that reason) ultimately pervasive. Chip away at a monolith long enough and you’re left with a pile of rubble and dust.

On a somewhat more modest scale, other media moguls in Australia have begun to hedge their bets. Kerry Stokes, the owner of Channel 7, made a strategic investment in Western Australia Publishing. NEWS Corporation, the original Australian media empire, purchased a minority stake in Fairfax, the nation’s largest newspaper publisher (and is eyeing a takeover of Canadian-owned Channel TEN). To see these broadcasters buying into newspapers, four decades after broadcast news effectively delivered death-blows to newspaper publishing, highlights the sense of desperation: they’re hoping that something, somewhere in the mainstream media will remain profitable. Yet there are substantial reasons to expect that these long-shot bets will fail to pay out.

II. The Vanilla Republic

It’s election season in America. Everyone must go! The mood of the electorate in the darkening days of 2006 could best be described as surly. An undercurrent of rage and exasperation afflicts the body politic. This may result in a left-wing shift in the American political landscape, but we’re still two weeks away from knowing. Whatever the outcome, this electoral cycle signifies another epochal change: the mainstream media have lost their lead as the reporters of political news. The public at large views the mainstream media skeptically – these were, after all, the same organizations which whipped the republic into a frenzied war-fever – and, with the regret typical of a very disgruntled buyer, Americans are refusing to return to the dealership for this year’s model. In previous years, this would have left voters in the dark: it was either the mainstream media or ignorance. But, in the two years since the Presidential election, the “netroots” movement has flowered into a vital and flexible apparatus for news reportage, commentary and strategic thinking. Although the netroots movement is most often associated with left-wing politics, both sides of the political spectrum have learned to harness blogs, wikis, feeds and hyperdistribution services such as YouTube for their own political ends. There is nothing quintessentially new about this; modern political parties, emerging in Restoration-era London, used printing presses, broadsheets and daily newspapers – freely deposited in the city’s thousands of coffeehouses – as the blogs of their era. Political news moved very quickly in 17th-century England, to the endless consternation of King Charles II and his censors.

When broadcast media monopolized all forms of reportage – including political reporting – the mass mind of the 20th century slotted into a middle-of-the-road political persuasion. Neither too liberal, nor too conservative, the mainstream media fostered a “Vanilla Republic,” where centrist values came to dominate political discourse. Of course, the definition of “centrist” values is itself highly contentious: who defines the center? The right-wing decries the excesses of “liberal bias” in the media, while the left-wing points to the “agenda of the owners,” the multi-billionaire stakeholders in these broadcast empires. This struggle for control over the definition of the center characterized political debate at the dawn of the 21st century – a debate which has now been eclipsed, or, more precisely, overrun by events.

In April 2004, Markos Moulitsas Zúniga, a US army veteran who had been raised in civil-war-torn El Salvador, founded dKosopedia, a wiki designed to be a clearing-house for all sorts of information relating to left-wing netroots activities. (The name is a nod to Wikipedia.) While the first-order effect of the network is to gather individuals together into a community, once the community has formed, it begins to explore the bounds of its collective intelligence. Political junkies are the kind of passionate amateurs who defy the neat equation of amateur with amateurish. While they are not professional – meaning that they are not in the employ of politicians or political parties – political junkies are intensely well-informed, regarding this as both a civic virtue and a moral imperative. Political junkies work not for power, but for the greater good. (That opposing parties in political debate demonize their opponents as evil is only to be expected given this frame of mind.) The greater good has two dimensions: to those outside the community, it is represented as us vs. them; internally, it is articulated through the community’s social network: those with particular areas of expertise are recognized for their contributions, and their standing in the community rises appropriately.

The same process animates Daily Kos (dKos), the political blog from which dKosopedia sprang. There, any member can freely write entries – known as “diaries” – on any subject of interest: political, cultural or (more rarely) nearly anything else. The very best of these contributors became the “front page” authors of Daily Kos, their entries presented to the entire community; but part of the responsibility of a front-page contributor is that they must constantly scan the ever-growing set of diaries, looking for the best posts among them to “bump” to front-page status. (This article will be cross-posted to my dKos diary, and we’ll see what happens to it.) Any dKos member can make a comment on any post, so any community member – whether a regular diarist or regular reader – can add their input to the conversation. The strongly self-reinforcing behavior of participation encourages “Kossacks” (as they style themselves) to share, pool, and disseminate the wealth of information gathered by over two million readers. Daily Kos has grown nearly exponentially since its founding, and looks set to reach its highest traffic levels ever as the mid-term elections approach.

III. My Left Eyeball

Salience is the singular quality of information: how much does this matter to me? In a world of restricted media choices, salience is a best-fit affair; something simply needs to be relevant enough to garner attention. In the era of hyperdistribution, salience is a laser-like quality: when there are a million sites to read, a million videos to watch, a million songs to listen to, individuals tailor their choices according to the specifics of their passions. Just a few years ago – as the number of media choices began to grow explosively – this took considerable effort. Today, with the rise of “viral” distribution techniques, it’s a much more straightforward affair. Although most of us still rely on ad-hoc methods – polling our friends and colleagues in search of the salient – it’s become so easy to find, filter, and forward media through our social networks that we have each become our own broadcasters, transmitting our own passions through the network. Where systems have been organized around this principle – for instance, YouTube, or Daily Kos – this information flow is greatly accelerated, and the consequential outcomes amplified. A Sick Puppies video posted to YouTube gets four million views in a month, and ends up on NINE NETWORK’s 60 Minutes broadcast. A Democratic senatorial primary in Connecticut becomes the focus of national interest – a referendum on the Iraq war – because millions of Kossacks focus attention on the contest.

Attention engenders salience, just as salience engenders attention. Salience satisfied reinforces relationship; to have received something of interest makes it more likely that I will receive something of interest in the future. This is the psychological engine which powers YouTube and Daily Kos, and, as this relationship deepens, it tends to have a zero-sum effect on its participants’ attention. Minutes watching YouTube videos are advertising dollars lost to NINE NETWORK. Hours spent reading Daily Kos are eyeballs and click-throughs lost to The New York Times. Furthermore, salience drives out the non-salient. It isn’t simply that a Kossack will read less of the Times; eventually they’ll read it rarely, if at all. Salience has been satisfied, so the search is over.

While this process seems inexorable, given the trends in media, only very recently has it become a ground-truth reality. Just this week I quipped to one of my friends – equally a devotee of Daily Kos – that I wanted “an IV drip of dKos into my left eyeball.” I keep the RSS feed of Daily Kos open all the time, waiting for the steady drip of new posts. I am, to some degree, addicted. But, while I always hunger for more, I am also satisfied. When I articulated the passion I now had for Daily Kos, I also realized that I hadn’t been checking the Times as frequently as before – perhaps once a day – and that I’d completely abandoned CNN. Neither website possessed the salience needed to hold my attention.

I am certainly more technically adept than the average user of the network; my media usage patterns tend to lead broader trends in the culture. Yet there is strong evidence that I am hardly alone in this new era of salience. How do I know? I recently received a link – through two blogs, Daily Kos and The Left Coaster – to a political campaign advertisement for Missouri senatorial candidate Claire McCaskill. The ad, featuring Michael J. Fox, diagnosed with an early-onset form of Parkinson’s Disease, clearly shows him suffering the worst effects of the disorder. Within a few hours of the ad going up on the McCaskill website, it had already been viewed hundreds of thousands – probably millions – of times. People are emailing the link to the ad (conveniently provided below the video window, to spur on viral distribution) all around the country, and likely throughout the world. “All politics is local,” Fox says. “But it’s not always the case.” This, in a nutshell, describes both the political and the media landscapes of the 21st century. Nothing can be kept in a box. Everything escapes.

Twenty-five years ago, in The Third Wave, Alvin Toffler predicted the “demassification of media.” Looking at the ever-multiplying number of magazines and television channels, he foresaw a time when the mass market would fragment utterly, into an atomic polity entirely composed of individuals. Writing before the Web (and before the era of the personal computer), he offered no technological explanation for how demassification would come to pass. Yet the trend lines seemed obvious.

The network has grown to cover every corner of the planet in the quarter-century since the publication of The Third Wave – over two billion mobile phones, and nearly a billion networked computers. A third of the world can be reached, and – more significantly – can reach out. Photographs of bombings in the London Underground, captured on mobile phone cameras, reach Flickr before they’re broadcast on the BBC. Islamic insurgents in Iraq videotape, encode and upload their IED attacks to filesharing networks. China fights a losing battle to restrict the free flow of information – while its citizens buy more mobile phones, every year, than the total number ever purchased in the United States. Give individuals a network, and – sooner, rather than later – they’ll become broadcasters.

One final, and crucial technological element completes the transition into the era of demassification – the release of Microsoft’s Internet Explorer version 7.0. Long delayed, this most important of all web browsers finally includes support for RSS – the technology behind “feeds.” Suddenly, half a billion PC users can access the enormous wealth of individually-produced and individually-tailored news resources which have grown up over the last five years. But they can also create their own feeds, either by aggregating resources they’ve found elsewhere, or by creating new ones. The revolution that began with Gutenberg is now nearly complete; while the Web turned the network into a printing press, RSS gives us the ability to hyperdistribute publications so that anyone, anywhere, can reach everyone, everywhere.
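For the technically curious, the mechanics behind a feed are simple enough to sketch in a few lines of Python using only the standard library. The feed below is an invented sample, not a real publication; a genuine reader would fetch the XML over HTTP and poll it periodically for new items:

```python
import xml.etree.ElementTree as ET

# A tiny inline RSS 2.0 document standing in for a real feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Diary</title>
    <link>http://example.org/</link>
    <item>
      <title>First post</title>
      <link>http://example.org/1</link>
    </item>
    <item>
      <title>Second post</title>
      <link>http://example.org/2</link>
    </item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Parse an RSS 2.0 document; return (channel title, [(item title, link), ...])."""
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

title, items = read_feed(SAMPLE_FEED)
print(title)  # prints: Example Diary
for item_title, item_link in items:
    print(f"- {item_title}: {item_link}")
```

That is the whole trick: because a feed is just structured XML at a stable address, any program – not only a browser – can aggregate, republish, or remix it, which is what makes hyperdistribution a property of the format rather than of any one site.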

Now all is dissolution. The mainstream media will remain potent for some time, centers for the creation of content, but they must now face the rise of the amateurs: a battle of hundreds versus billions. To compete, the media must atomize, delivering discrete chunks of content through every available feed. They will be forced to move from distribution to seduction: distribution has been democratized, so only the seduction of salience will carry their messages around the network. But the amateurs are already masters of this game, having grown up in an environment where salience forms the only selection pressure. This is the time of the amateur, and this is their chosen battlefield. The outcome is inevitable. Deck chairs, meet Titanic.

Why Copyright Doesn’t Matter


“If you overvalue possessions, people begin to steal.”
– Lao Tzu, Tao Te Ching
I

Although the New York Times found the new film Alternative Freedom a sloppy, disjointed, jingoistic mess, the movie does break new ground, highlighting the growing threat to public expression posed by restrictive copyright laws and digital rights management technologies. Supporting the “copyfight” thesis – that copyright law is slowly strangling the public’s ability to sample, remix and redistribute the ideas sold to them by entertainment companies – Alternative Freedom ventures beyond these familiar tropes: as video game systems, mobile phones and even printer toner cartridges become ever-more restrictive in the way they operate, we’re being sold devices which dictate their own terms of use. Any deviation from that usage is, in effect, a violation of copyright law. With appropriate legal penalties.

Coincidentally, this week the US Congress began to deliberate strong, almost draconian extensions to the nation’s copyright laws, adding odious criminal penalties for what have – until now – been civil violations. Large-scale, commercial violators of copyright have always been criminals; now even the casual user could become a felon for any redistribution of content under copyright. As peer-to-peer filesharing networks grow ever broader in scope, ever more difficult to detect, and ever harder to disrupt and destroy, the pressure builds. In essence, this is the entertainment industry’s last legal gasp in its effort to maintain control over the distribution of its productions.

I have previously discussed the futility of “economic censorship” – which is what this proposed law amounts to – and I can see nothing in these new laws which will slow the inexorable slide into an era where any media distributed anywhere on the planet becomes instantly available everywhere on the planet, to everyone. This is the essence of “hyperdistribution,” a newly-emergent quality of our communications networks.
You can’t make a network that won’t hyperdistribute content throughout its span – or rather, if you did, it wouldn’t look anything like the networks we use today. It seems unlikely that we would suddenly replace our entire global network infrastructure with something significantly less capable. Yet this must happen if the long march to hyperdistribution is to be stopped.

II

This is a war for eyeballs and audiences. An entertainment producer spends significant time and money carefully crafting content for a mass audience, expecting that audience to pay for the privilege of enjoying the production. This is possible only insofar as access to the content can be absolutely restricted. If the producer only makes physical prints of a film, and only shows it in a theatre where everyone has been thoroughly searched for any sort of recording device (these days, that list would include both mobile phones and iPods), they might be able to restrict piracy. But only if there are no digital intermediates of the film, no screeners for reviewers mailed out on DVDs, no digital print for projection in the latest whiz-bang movie theatres. As soon as there is any digital representation of the production, copies of it will begin to multiply. It’s in the nature of bits to generate more and more copies of themselves. These bits eventually make their way onto the network, and hyperdistribution begins.

There is, in this evaluation, an assumption that the content has value to an audience. Many films are made each year – in Hollywood, Bollywood, Hong Kong, and throughout the world – yet, most of the time, people don’t care to see them. Films are big, complex, and frequently flawed; there is no such thing as a perfect film, and, more often than not, a film’s flaws outweigh its strengths, so the film fails. This wasn’t an issue before the advent of television – before 1947, film was the only way to enjoy the moving image.
Over the last sixty years, the film industry has learned to accommodate television – with cable and free-to-air broadcasts of its films and, most profitably, with the huge industry created by the VCR and the DVD. Even so, in the era of the VCR viewers had perhaps five or six channels of broadcast television to choose from. When the DVD was introduced, viewers had perhaps fifty or sixty channels to watch – more substantial, but still nothing to be entirely worried about. Now the number of potential viewing choices is essentially infinite. In a burst of exponential growth, the video sharing site YouTube is about to surpass CNN in web traffic, and in just one week went from 35 million videos viewed to over 40 million. That kind of growth is clearly unsustainable, but it’s just as clearly indicative that YouTube is becoming a foundational web service, as significant as Google or Wikipedia. And this is why copyright doesn’t matter.

III

It’s frequently noted that much of the content on YouTube is presented in violation of someone else’s copyright. It might be little snippets from South Park, The Daily Show, or Saturday Night Live. The media megacorporations who control those copyrights are constantly in contact with YouTube, asking them to remove this content as quickly as it appears – and YouTube is happy to oblige. But YouTube is subject to “swarm effects”: as soon as something is removed, someone else, from somewhere else, posts it again. Anything that is popular has a life of its own – outside of its creator’s control – and YouTube has become the focal point for this vitality.

At the moment, many of the popular videos on YouTube fall into this category of content-in-violation-of-copyright. But not all of them. There’s plenty on YouTube which has been posted by people who want to share their work with others. A lot of this is instructional, informational, or just plain odd.
It’s outside the mainstream, was never meant to be mainstream, and yet, because it’s up there, and because so many people are looking to YouTube for a moment’s diversion or enlightenment, it tends, over time, to find its audience. Once something has found just one member of its audience, it’s quickly shared throughout that person’s social network, and rapidly reaches nearly the entirety of that audience. That’s the find-filter-forward operation of a social network in an era of hyperdistribution and microaudiences. YouTube is enabling it. That’s why YouTube has become so popular, so quickly: it’s filling an essential need for the microaudience.

Is there a place for professionally-produced content in an age of social networks and microaudiences? This is the big question, the question no one can answer, because the answer can only emerge over time. Attention is zero-sum: if I’m watching this video on YouTube, I’m not watching that TV show or movie. If I’m thoroughly caught up in the five YouTube links I get sent each day – which will quickly become fifty, then five hundred – how can I find any time to watch the next Hollywood special-effects extravaganza? And why would I want to? It’s not what my friends are watching: they’ve sent me links to what they’re watching – and that’s on YouTube.

So go ahead, Congress: kill the entertainment industry by doing its bidding. Let it lock its content up so completely that its utility – with respect to the network – approaches zero. If people can’t find-filter-forward content, it won’t exist for them. Lock something up, and it becomes less and less important, until no one cares about it at all. People are increasingly concerned with the media they can share freely, and this points to a future where the amateur trumps the professional, because the amateur understands the first economic principle of hyperdistribution: the more something is shared, the more valuable it becomes.

In Medium Rez

I

Although Apple introduced its Video iPod at the end of 2005, this is the year when video begins to take off. Everywhere. The sheer profusion of devices which can play video – from iPods to desktop and laptop computers to Sony’s Playstation Portable, the Nintendo DS, and nearly all current-generation mobile phones – means that people will be watching more video, in more places, than ever before. You may not want to watch that episode of “Desperate Housewives” on your iPod – unless you happened to be tied up last Monday evening, and forgot to program your VCR. Then you’ll be glad you can. Sure, the picture is small and grainy, the sound’s a bit tinny, and your arms will get tired holding that screen in front of your face for an hour, but these drawbacks mean nothing to a true fan. And the true fans will lead this revolution.

We’re growing comfortable with the idea that screens are everywhere, that we can – in the time it takes to ride the train to work – get caught up on our favorite stories, the last World Cup match, and the news of the world. A generation ago it seemed odd to see someone in public wearing earphones; today it’s a matter of course. This afternoon it might seem odd to see someone staring into their mobile phone; tomorrow it will seem perfectly normal.

II

Now that video is everywhere, it won’t be long until the business of television moves online. Already, Apple has sold close to ten million episodes of television series like “Lost” and “The Office”. Google wants to sell you episodes of the original “Star Trek”, “The Brady Bunch” and “CSI”. For television producers it’s a win-win; they’ve already sold the episodes to broadcast networks – generally for a bit less than they cost to make – so the online sales are the vital extra dollars that cover the gap between loss and profit.

Today only a few of the hundreds of series shown in the US, UK and Australia are available for sale online. By the end of this year, most of them will be. Will the broadcast networks like this? Yes and no. It deprives them of some of the power they hold over the audience – to gather them at one place and time, eyeballs for advertisers – but it also creates new audiences: people see an episode online, and decide to tune in for the next one. That’s something we’ve already seen – “The Office”, for example, spiked upward in broadcast ratings after it was offered online. This year, there’s likely to be another breakout television hit – a new “Lost” – which starts its life online.

III

Once video is everywhere, once all our favorite television shows are available online for download, we’ll learn something else: there’s a lot more out there than just those shows produced for broadcast. On sites like Google Video and YouTube, you can already download tens of thousands of short- and full-length television programs. Some of them are woefully amateur productions, the kind that make you cringe in horror, but others – and there are more and more of these – are as funny and dramatic as anything you might see on broadcast television. Think TropFest – but a thousand times bigger.

Once we get used to the idea that television is something we can download, we’ll find ourselves drawn to these other, more unusual offerings. Most of this fare isn’t ready for prime-time. Much of it is only meant for a tight circle of friends and aficionados. But some of it will break through, and find audiences in the millions. It’s already happened a few times in the last year; this year it will become so common that, by the end of 2006, we’ll think nothing of it at all. This thought scares both the broadcast networks and the commercial TV producers. After all, if we’re spending our time watching something created by four kids in Goulburn, that’s time we’re not spending on commercially-produced entertainment. And how do the networks compete with that?

IV

This fundamental transformation in how we find and watch entertainment isn’t confined to video. It’s happening to all other media, simultaneously. More people listen to the podcasts of Radio National than listen to the live broadcast; more people read the Sydney Morning Herald online than read the print edition. And these are just the professional offerings. As with television, each of these media is facing a rising sea of competition – from amateurs. Apple offers tens of thousands of podcasts through its iTunes Music Store – including Radio National – on just about any topic under the sun, from the mundane to the truly bizarre. You can get “feeds” of news from Fairfax – headlines and links to online versions of the stories – but you can also get the same from any of several thousand news-oriented blogs. Click a few buttons and the news is automatically downloaded to your computer, every half hour.

As it gets easier and easier for us to choose exactly what we want to watch, hear and read, the commercial and national broadcasters find themselves facing the “death of a thousand cuts.” Every pair of ears listening to a podcast is an audience member who won’t show up in the ratings. Every subscriber to an “amateur” news feed is a subscriber lost to a newspaper. And this trend is just beginning. In another decade, we’ll wonder how we lived without all this choice.

V

Choice is a beautiful thing. We define ourselves by the choices we make: what we do, who we know, what we fill our leisure time with. Now that our media is everywhere, available from everyone, any hour of the day or night, we’re going to find ourselves confronted by an unexpected problem: rather than trying to decide what to watch on five terrestrial broadcast channels – or fifty cable channels – we’ll have to pick from an ocean of a million different programs; even if most of them aren’t all that appealing, at least a few thousand will be, at any point in time.

That kind of choice will make us all a little bit crazy, because we’ll always be wondering if, just now, something better isn’t out there, waiting for us to download it. Like the channel surfer who sits, remote in hand, flipping through the channels, hoping for something to catch his eye, we’re going to be flipping through hundreds of thousands and then millions of choices of things to watch, hear and read. We’re going to be drowning in possibilities. And the pressure – to keep up, to be informed, to be on the tip – is about to create the most savvy generation of media consumers the world has ever seen.

We’re drowning in choice, but, because of that, we’ll figure out how to share what we know about what’s good. We already receive lots of email from friends and family with links to the best things they’ve found online. That’s going to continue, and accelerate; our circles of friends are becoming our TV programmers, our radio DJs, our newspaper editors, and we’ll return the favor. The media of the 21st century are created by us, edited by us, and broadcast by us. That’s a deep change, and a permanent one.

Going into Syndication

I.

Content. Everyone makes it. Everyone consumes it. If content is king, we are each the means of production. Every email, every blog post, every text message constitutes production of content. In the earliest days of the web this was recognized explicitly; without a lot of people producing a lot of content, the web simply wouldn’t have come into being. Somewhere toward the end of 1995, this production formalized, and we saw the emergence of a professional class of web producers. This professional class asserted its authority over the audience on the basis of two undeniable strengths: first, it cultivated expertise; second, it maintained control over the mechanisms of distribution. In the early years of the Web, both of these strengths presented formidable barriers to entry. As we emerged from the amateur era of “pages about kitty-cats” into the branded web era of CNN.com, NYT.com, and AOL.com, the swarm of Internet users naturally gravitated to the high-quality information delivered through professional web sites. The more elite (and snobbish) of the early netizens decried this colonization of the electronic space by the mainstream media; they preferred the anarchic, imprecise and democratic community of newsgroups to the imperial aegis of Big Media.

In retrospect, both sides got it wrong. Anarchy did not sweep order aside; nor did attention centralize around a suite of “portal” sites, though, for at least a decade, it seemed precisely this was happening. Nevertheless, the swarm has a way of consistently surprising us, of finding its way out of any box drawn up around it. If, for a period of time, it suited the swarm to cozy up to the old and familiar, this was probably due more to habit than to any deep desire. When thrust into the hyper-connected realm of the Web, our natural first reaction is to seek signposts, handholds against the onrush of so much that clamors about its own significance. In cyberspace you can implicitly trust the BBC, but when it comes to The Smoking Gun or Disinformation, that trust must be earned. Still, once that trust has been won, there is no going back. This is the essence of the process of media fragmentation. The engine that drives fragmentation is not increasing competition; it is increasing familiarization with the opportunities on offer.

We become familiar with online resources through “the Three Fs”: we find things, we filter them, we forward them along. Social networks evolve the media consumption patterns which suit them best; these patterns are often poorly correlated with the content available from mainstream outlets. Over time, social networks tend to favor the obscure over the quotidian, as the obscure is the realm of the cognoscenti. This means fragmentation is both inevitable and bound to accelerate.

Fragmentation spreads the burden of expertise onto a swarm of nanoexperts. Anyone who is passionate, intelligent, and willing to make the attention investment to master the arcana of a particular area of inquiry can transform themselves into a nanoexpert. When a nanoexpert plugs into a social network that values this expertise (or is driven toward nanoexpertise in order to raise their standing within an existing social network), this investment is rewarded, and the connection between nanoexpert and network is strongly reinforced. The nanoexpert becomes “structurally coupled” with the social network – for as long as they maintain that expertise against all competitors. This transformation is happening countless times each day, across the entire taxonomy of human expertise. This is the engine which has deprived the mainstream media of their position of authority.

While the net gave every appearance of centralization, it never allowed for a monopoly on distribution. That house was always built on sand. The bastion of expertise took longer to disintegrate – yet it has, buffeted by wave after wave of nanoexperts. With the rise of the nanoexpert, mainstream media have lost all of their “natural” advantages, though they still command considerable economic, political and popular clout. We must examine how they could translate this evanescent power into something which can survive the transition to a world of nanoexperts.

II.

While expertise has become a diffuse quality, located throughout the cloud of networked intelligence, the search for information has remained essentially unchanged for the past decade. Nearly everyone goes to Google (or a Google equivalent) as a first stop on a search for information. Google uses swarm intelligence to determine the “trust” value of an information source: the most “trusted” sites, as scored by its PageRank algorithm, appear at the top of its search results. Thus, even though knowledge and understanding have become more widespread, the path toward them grows ever more concentrated. I still go to the New York Times for international news reporting, and the Sydney Morning Herald for local news. Why? These sources are familiar to me. I know what I’m going to get. That means a lot, because as the number of possible sources reaches toward infinity, I haven’t the time or the inclination to search out every possible source for news. I have come to trust the brand. In an era of infinite choice, a brand commands attention. Yet brands are being constantly eroded by the rise of the nanoexpert; the nanoexpert is persuaded by their own sensibility, not subject to the lure of a well-known brand. Although the brand may represent a powerful presence in the contemporary media environment, there is very little reason to believe this will be true a decade or even five years hence.
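The “swarm intelligence” behind this kind of ranking can be made concrete. The sketch below is a deliberately minimal illustration of the PageRank idea – each page repeatedly passes a share of its score along its outbound links until the scores settle – and not Google’s actual, far more elaborate implementation; the three-site link graph at the bottom is entirely hypothetical.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score (the "random surfer" jump)...
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outbound in links.items():
            if outbound:
                # ...and passes the rest of its score along its outbound links.
                share = damping * rank[page] / len(outbound)
                for target in outbound:
                    new[target] += share
            else:
                # A page with no outbound links spreads its score evenly.
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
        rank = new
    return rank

# Hypothetical graph: two sites link to "bbc", which links only to "blog".
graph = {"bbc": ["blog"], "blog": ["bbc"], "forum": ["bbc"]}
scores = pagerank(graph)
```

Run on this toy graph, the most-linked-to page ends up with the highest score: links function as votes of trust, which is the sense in which the ranking is a product of the swarm rather than of any editor.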

For this reason, branded media entities need to make an accommodation with the army of nanoexperts. They have no choice but to sue for peace. If these warring parties had nothing to offer one another, this would be a pointless enterprise. But each side has something impressive to offer up in a truce: the branded entities have readers, and the nanoexperts are constantly finding, filtering and forwarding things to be read. This would seem to be a perfect match, but for one paramount issue: editorial control. A branded media outlet asserts (with reason) that the editorial controls developed over a period of years (or, in the case of the Sydney Morning Herald, centuries) form the basis of a trust relationship with its audience. To disrupt or abandon those controls might do more than dilute the brand – it could quickly destroy it. No matter how authoritative a nanoexpert might be, all nanoexpert contributions represent an assault upon editorial control, because these works have been created outside of the systems of creative production which ensure a consistent, branded product. This is the major obstacle that must be overcome before nanoexperts and branded media can work together harmoniously.

If branded media refuse to accept the ascendancy of nanoexperts, they will find themselves entirely eroded by them. This argument represents the “nuclear option”, the put-the-fear-of-God-in-you representation of facts. It might seem completely reasonable to a nanoexpert, but appears entirely suspect to the branded media, who see only increasing commercial concentration, not disintegration. For the most part, nanoexperts function outside systems of commerce; their currency is social standing. Nanoexpert economies of value are invisible to commercial entities; but that does not mean they don’t exist. If we convert to a currency of attention – again, considered highly suspect by branded media – we can represent the situation even more clearly: more and more of the audience’s attention is absorbed by nanoexpert content. (This is particularly true of audiences under 25 years old, who have grown to maturity in the era of the Web.)

The point cannot be made more plainly, nor would it do any good to soften the blow: this transition to nanoexpertise is inexorable – it is the ad-hoc behavior of the swarm of internet users. There’s only one question of any relevance: can this ad-hoc behavior be formalized? Can the systems of production of the branded media adapt themselves to an era of “peer production” by an army of nanoexperts? If branded media refuse to formalize these systems of peer production, the peer production communities will do so – and, in fact, many already have. Sites such as Slashdot, Boing Boing, and Federated Media Publishing have grown up around the idea that the nanoexpert community has more to offer microaudiences than any branded media outlet. Each of these sites draws millions of visitors, and while they may not match the hundreds of millions of visitors to the major media portals, what they lack in volume they make up for in multiplicity; these are successful models, and they are being copied. The systems which support them are being replicated. The means of fragmentation are multiplying beyond any possibility of control.

III.

A branded media outlet can be thought of as a network of contributors, editors and publishers, organized around the goal of gaining and maintaining audience attention. The first step toward an incorporation of peer production into this network is simply to open the gates of contribution to the army of nanoexperts. However, just because the gates to the city are open does not mean the army will wander in. They must be drawn in, seduced by something on offer. As commercial entities, branded media can offer to translate the coin of attention into real currency. This is already their function, so they will need to make no change to their business models to accommodate this new set of production relationships.

In the era of networks, joining one network to another is as simple as establishing the appropriate connections and reinforcing them through an exchange of value which weights them appropriately. Content flows into the brand, while currency flows toward the nanoexperts. This transition is simple enough, once editorial concerns have been satisfied. The issues of editorial control are not trivial, nor should they be sublimated in the search for business opportunities; businesses have built their brands around an editorial voice, and should seek only to associate with those nanoexperts who understand and are responsive to that voice. Both sides will need to be flexible; the editorial voice must become broader without disintegrating into a common yowl, while the nanoexperts must put aside the preciousness which they have cultivated in pursuit of their expertise. Both parties surrender something they consider innate in order to benefit from the new arrangement: that’s the real nature of this truce. It may be that some are unwilling to accommodate this new state of affairs: for the branded media, it means the death of a thousand cuts; for the nanoexpert it means they will remain confined to communities where they have immense status, but little else to show for it. In both cases, they will face the competition of these hybrid entities, and against them neither group can hope to triumph. After a settling-out period, these hybrid beasts, drawing their DNA from the best of both worlds, will own the day.

What does this hybrid organization deliver? At the moment, branded media deliver a broad range of content to a broad audience, while nanoexperts deliver highly focused content to millions of microaudiences. How do these two pieces fit together? One of the “natural” advantages of branded media organizations springs from a decades-long investment in IT infrastructure, which has historically been used to distribute information to mass audiences. Yet, surprisingly, branded media organizations know very little about the individual members of their audience. This is precisely the inverse of the situation with the nanoexpert, who knows an enormous amount about the needs and tastes of the microaudience – that is, the social networks served by their expertise. Thus, there needs to be another form of information exchange between the branded media and the nanoexpert; it isn’t just the content which needs to be syndicated through the branded outlet, but the microaudiences themselves. This is not audience aggregation, but rather, an exploration in depth of the needs of each particular audience member. From this, working in concert, the army of nanoexperts and the branded media outlet can develop tools to deliver depth content to each audience member.

This methodology favors process over product; the relation between nanoexpert, branded media, and audience must necessarily co-evolve, working toward a harmony where each is providing depth information in order to improve the capabilities of the whole. (This is the essence of a network.) Audience members will take a creative role in shaping a “feed” which serves just themselves, and, in this sense, each audience member is a nanoexpert – expert in their own tastes.

The advantages of such a system, when put into operation, make it both possible and relatively easy to deliver commercial information of such a highly meaningful nature that it can no longer be called “advertising” in any classic sense of the word, but rather, will be considered a string of “opportunities.” These might include job offers, or investment opportunities, or experiences (travel & education), or the presentation of products. This is Google’s AdWords refined to the utmost degree, and it can only exist if all three parties to this venture – nanoexpert, branded media, and audience members – have fully invested the network with information that helps the network refine and deliver just what’s needed, just when it’s wanted. The revenue generated by a successful integration of commerce with this new model of syndication will more than fuel its efforts.

When successfully implemented, such a methodology would produce an enviable, and likely unassailable, financial model, because we’re no longer talking about “reaching an audience”; instead, this hybrid media business is involved in millions of individual conversations, each of which evolves toward its own perfection. Individuals embedded in this network – at any point in this network – would find it difficult to leave it, or even resist it. This is more than the daily news, better than the best newspaper or magazine ever published; it is individual, and personal, yet networked and global. This is the emerging model for factual publishing.