Understanding Gilmore’s Law: Telecoms Edition

OR,
How I Quit Worrying and Learned to be a Commodity

Introduction


“The net interprets censorship as damage and routes around it.”
– John Gilmore

I read a very interesting article last week. It turns out that, despite its best efforts, the Communist government of the People's Republic of China has failed to insulate its prodigious population from the outrageous truths to be found online. In the article from the Times, Wang Guoqing, a vice-minister in the information office of the Chinese cabinet, was quoted as saying, "It has been repeatedly proved that information blocking is like walking into a dead end." If China, with all of the resources of a one-party state – able to "lock down" its internet service providers and direct their IP traffic through a "great firewall of China" – cannot block the free flow of information, how can any government, organization, or institution, anywhere, hope to succeed?

Of course, we all chuckle a little when we see the Chinese attempt the Sisyphean task of damming the torrent of information which characterizes life in the 21st century. We in the democratic West know better, and pat ourselves on the back. But we are in no position to throw stones. Gilmore's Law is not specifically tuned for political censorship; censorship simply means the willful withholding of information – for any reason. China does it for political reasons; in the West our reasons for censorship are primarily economic. Take, for example, the hullabaloo associated with the online release of Harry Potter and the Deathly Hallows, three days before its simultaneous, world-wide publication. It turns out that someone, somewhere, got a copy of the book, laboriously photographed every single page of the 784-page text, bound these images together into a single PDF file, and then uploaded it to the global peer-to-peer filesharing networks. Everyone with a vested financial interest in the book – author J.K. Rowling, the Bloomsbury and Scholastic publishing houses, film studio Warner Brothers – had been feeding the hype for the impending release, all focused on the 21st of July. An enormous pressure had built up to "peek at the present" before it was formally unwrapped, and all it took was one single gap in the $20 million security system Bloomsbury had constructed to keep the text under wraps. Then it became a globally distributed media artifact. Curiously, Bloomsbury was reported as saying the leak would only add to sales – if many people are reading the book now, even illegally, still more will want to read it right now. Piracy, in this case, might be a good thing.

These two examples are data points which show the breadth and reach of Gilmore's Law. Censorship, broadly defined, is anything which restricts the free flow of information. The barriers could be political, or economic, or – as in the case immediately relevant today – a nexus of the two. Broadband in Australia is neither purely an economic nor purely a political issue. In this, broadband reflects the Janus-like nature of Telstra, with one face turned outward, toward the markets, and another turned inward, toward the Federal Government. Even though Telstra is now (more or less) wholly privatized, the institutional memory of all those years as an arm of the Federal Government persists. Telstra still behaves as though it has a political mandate, and is more than willing to use its near-monopoly economic strength to reinforce that impression.

Although seemingly unavoidable, given the established patterns of the organization, Telstra's behavior has consequences. Telstra has engendered enormous resentment – from both its competitors and its customers – for its actions and attitude. It has recently pushed the Government too far (at least, publicly), and has been told to back off. What may not be as clear – and what I want to warn you of today – is how Telstra has sown the seeds of its own failure. What's more, this may not be anything that Telstra can now avoid, because this is neither a regulatory nor an economic failure. It cannot be remedied by any mechanism that Telstra has access to. Instead, it may require a top-down rethinking of the entire business.

I: Network Effects

For the past several thousand years, the fishermen of Kerala, on the southern coast of India, have sailed their dhows out into the Indian Ocean, lowered their nets, and hoped for the best. When the fishing is good, they come back to shore fully laden, and ready to sell their catch in the little fish markets that dot the coastline. A fisherman might have a favorite market, docking there only to find that half a dozen other dhows have had the same idea. In that market there are too many fish for sale that day, and the fisherman might not even earn enough from his catch to cover costs. Meanwhile, in a market just a few kilometers away, no fishing boats have docked, and there’s no fish available at any price. This fundamental chaos of the fish trade in Kerala has been a fact of life for a very long time.

Just a few years ago, several of India's rapidly-growing wireless carriers strung GSM towers along the Kerala coast. This gives those carriers a signal reach of up to about 25km offshore – enough to be very useful for a fisherman. While mobile service in India is almost ridiculously cheap by Australian standards – many carriers charge a penny for an SMS, and a penny or two per minute for voice calls – a handset is still relatively expensive, even one such as the Nokia 1100, which was marketed specifically at emerging mobile markets, designed to be cheap and durable. Such a handset might cost a month's profits for a fisherman – which makes it a serious investment. But, at some point in the last few years, one fisherman – probably a more prosperous one – bought a handset, and took it to sea. Then, perhaps quite accidentally, he learned, through a call ashore, of a market wanting for fish that day, brought his dhow to dock there, and made a handsome profit. After that, the word got around rapidly, and soon all of Kerala's fishermen were sporting their own GSM handsets, calling into shore, making deals with fishmongers, acting as their own arbitrageurs, creating a true market where none had existed before. Today in Kerala the markets are almost always stocked with just enough fish; the fishmongers make a good price for their fish, and the fishermen themselves earn enough to fully recoup the cost of their handsets in just two months. Mobile service in Kerala has dramatically altered the economic prospects for these people.

This is not the only example: in Kenya farmers call ahead to the markets to learn which ones will have the best prices for their onions and maize; spice traders, again in Kerala, use SMS to create their own, far-flung bourse. Although we in the West generally associate mobile communications with affluent lifestyles, a significant number of microfinance loans made by Grameen Bank in Bangladesh, and others in Pakistan, India, Africa and South America are used to purchase mobile handsets – precisely because the correlation between access to mobile communications and earning potential has become so visible in the developing world. Grameen Bank has even started its own carrier, GrameenPhone, to service its microfinance clientele.

Although economists are beginning to recognize and document this curious relationship between economics and access to communication, it needs to be noted that this relationship was not predicted – by anyone. It happened all by itself, emerging from the interaction of individuals and the network. People – who are always the intelligent actors in the network – simply recognized the capabilities of the network, and put them to work. As we approach the watershed month of October 2007, when three billion people will be using mobile handsets, when half of humanity will be interconnected, we can expect more of the unexpected.

All of this means that none of us – even the most foresighted futurist – can know in advance what will happen when people are connected together in an electronic network. People are too resourceful, and too intelligent, for their behavior to be modeled in any realistic way. We might be able to model their network usage – though even that has confounded the experts – but we can't know why they're using the network, nor what kind of second-order effects that usage will have on culture. Nor can we realistically provision for service offerings; people are more intelligent, and more useful, than any service the carriers could hope to offer. The only truly successful service offering in mobile communications is SMS – because it provides an asynchronous communications channel between people. The essential feature of the network is simply that it connects people together, not that it connects them to services.

This strikes at the heart of the most avaricious aspects of the carriers' long-term plans, which center on increasing the level of services offered, by the carrier, to the users of the network. Although this strategy has consistently proven a complete failure – consider CompuServe, Prodigy and AOL – it has nevertheless become the idée fixe of shareholder reports, corporate plans, and press releases. The network, we are told, will become ever more intelligent, more useful, and more valuable. But all of the history of the network argues directly against this. Nearly 40 years after its invention, the most successful service on the Internet is still electronic mail, the Internet's own version of SMS. Although the Web has become an important service in its own right, it will never be as important as electronic mail, because electronic mail connects individuals.

Although the network in Kerala was brought into being by the technology of GSM transponders and mobile handsets, the intelligence of the network truly does lie in the individuals who are connected by it. Let's run a little thought experiment, and imagine a world where all of India's telecoms firms suffered a simultaneous, catastrophic, and long-lasting failure. (Perhaps they all went bankrupt.) Do you suppose that the fishermen would simply shrug their shoulders and go back to their old, chaotic market-making strategies? Hardly. Whether they used smoke signals, or semaphores, or mirrors on the seashore, they'd find some way to maintain those networks of communication – even in the absence of the technology of the network. The benefits of the network so outweigh the costs of implementing it that, once created, networks cannot be destroyed. The network will be rebuilt from whatever technology comes to hand – because the network is not the technology, but the individuals connected through it.

This is the kind of bold assertion that could get me into a lot of trouble; after all, everyone knows that the network is the towers, the routers, and the handsets which comprise its physical and logical layers. But if that were true, then we could deterministically predict the qualities and uses of networks well in advance of their deployment. The quintessence of the network is not a physical property; it is an emergent property of the interaction of the network's users. And while people do persistently believe that there is some "magic" in the network, the source of that magic is the endlessly inventive intellects of the network's users. When someone – anywhere in the network – invents a new use for the network, it propagates widely, and almost instantaneously, transmitted throughout the length and breadth of the network. The network amplifies the reach of its users, but it does not goad them into being inventive. The users are the network's real service providers.

I hope this gives everyone here some pause; after all, it is widely known that the promise to bring a high-speed broadband network to Australia is paired with the desire to provide services on that network, including – most importantly – IPTV. It's time to take a look at that promise with our new understanding of the real power of networks. It is under threat from two directions: the emergence of peer-produced content, and the dramatic, disruptive collapse in the price of high-speed wide-area networking, which will fully empower individuals to create their own network infrastructure.

II: DIYnet

Although nearly all high-speed broadband providers – which are, by and large, monopoly or formerly monopoly telcos – have bet the house on the sale of high-priced services to finance the build-out of high-speed (ADSL2/FTTN/FTTH) network infrastructure, it is not at all clear that these service offerings will be successful. Mobile carriers earn some revenue from ringtone and game sales, but this is a trivial income stream when compared to the fees they earn from carriage. Despite almost a decade of efforts to milk more ARPU (average revenue per user) from their customers, those same customers have proven stubbornly resistant to a continuous fleecing. The only thing that customers seem obviously willing to pay for is more connectivity – whether that's more voice calls, more SMS, or more data.

What is most interesting is what these customers have done with this ever-increasing level of interconnectivity. These formerly passive consumers of entertainment have become their own media producers, and – perhaps more ominously, in this context – their own broadcasters. Anyone with a cheap webcam (or mobile handset), a cheap computer, and a broadband link can make and share their own videos. This trend had been growing for several years, but since the launch of YouTube, in 2005, it has rocketed into prominence. YouTube is now the 4th busiest website, world-wide, and perhaps 65% of all video downloads on the web take place through Google-owned properties. Amateur productions regularly garner tens of thousands of viewers – and sometimes millions.

We need to be very careful about how we judge the meaning of the word "amateur" in the context of peer-produced media. An amateur production may be produced with little or no funding, but that does not automatically mean it will appear clumsy to the audience. The rough edges of an amateur production are balanced out by a corresponding increase in salience – that is, the importance which the viewer attaches to the subject of the media. If something is compelling because it is important to us – something which we care passionately about – high production values do not enter into our assessment. Chad Hurley, one of the founders of YouTube, has remarked that the site has no "gold standard" for production; in fact, YouTube's gold standard is salience – if the YouTube audience feels the work is important, audience members will share it within their own communities of interest. Sharing is the proof of salience.

After two years of media sharing, the audience for YouTube (which is now coincident with the global television audience in the developed world) has grown accustomed to being able to share salient media freely. This is another of the unexpected and unpredicted emergent effects of intelligent humans using the network. We now have an expectation that when we encounter some media we find highly salient, we should be able to forward it along within our social networks, sharing it within our communities of salience. But this is not the desire of many copyright holders, who collect their revenues by placing barriers around access to media. This fundamental conflict – between the desire to share, engendered by our own interactions with the network, and the desire of copyright holders to restrain media consumption to economic channels – has, thus far, been consistently resolved in favor of sharing. The copyright holders have tried to use the legal system as a bludgeon to change the behavior of the audience; this has not worked, nor will it ever. But, as the copyright holders resort to ever-more-draconian techniques to maintain control over the distribution of their works, the audience is presented with an ever-growing world of works that are meant to be shared. The danger here is that the audience is beginning to ignore works which they cannot share freely, seeing them as "broken" in some fundamental way. Since sharing has now become an essential quality of media, the audience is simply reacting to a perceived defect in those works. In this sense, the media multinationals have been their own worst enemies; by restricting the ability of audiences to share the works they control, they have helped to turn audiences toward works which audiences can distribute through their own "do-it-yourself" networks.

These DIYnets are now a permanent fixture of the media landscape, even as their forms evolve through YouTube playlists, RSS feeds, and sharing sites such as Facebook and Pownce. These networks exist entirely outside the regular and licensed channels of distribution; they are not suitable – legally or economically – for distribution via a commercial IPTV network. Telstra cannot provide these DIYnets to its customers through its IPTV service – nor can any other broadband carrier. IPTV, to a carrier, means the distribution of a few hundred highly regularized television channels. While there will doubtless be a continuing market for mass entertainment, that audience is continuously being eroded by a range of peer-produced programming that is steadily gaining salience. In the long term this, like so much in the world, will probably obey an 80/20 rule, with about 80 percent of the audience's attention absorbed by peer-produced, highly salient media, while 20 percent will go to mass-market, high-production-value works. It doesn't make a lot of sense to bet the house on a service offering which will command such a small portion of the audience's attention. Yes, Telstra will offer it. But it will never be able to compete with the productions created by the audience.

Because of this tension between the desires of the carrier and the interests of the audience, the carrier will seek to manipulate the capabilities of the broadband offering, to weight it in favor of a highly regularized IPTV offering. In the United States this has become known as the "net neutrality" argument, and centers on the question of whether a carrier has the right to shape traffic within its own IP network to advantage its own traffic over that of others. In Australia, the argument has focused on tariff rates: Telstra believes that if it builds the network, it should be able to set the tariff. The ACCC argues otherwise. This has been characterized as the central stumbling block which has prevented the deployment of a high-speed broadband network across the nation, and, in some sense, that is entirely true – Telstra has chosen not to move forward until it feels assured that both economic and regulatory conditions are favorable. But this does not mean that consumer demand for a high-speed network has simply been put on pause these last few years. More significantly, the world beyond Telstra has not stopped advancing. While it now costs roughly USD $750 per household to provide a high-speed fiber-optic connection to the carrier network, other technologies are coming online, right now, which promise to reduce those costs by an order of magnitude – and which, furthermore, don't require any infrastructure build-out on the part of the carrier. This disruptive innovation could change the game completely.

III: Check, Mate

All parties to the high-speed broadband dispute – government, Telstra, the Group of Nine, and the public – share the belief that this network must be built by a large organization, able to command the billions of dollars in capital required to dig up the streets, lay the fiber, and run the enormous data centers. This model of a network is a reflection, in copper, plastic and silicon, of the hierarchical forms of organization which characterize large institutions – such as governments and carriers. However, if we have learned anything about the emergent qualities of networks, it is that they quickly replace hierarchies with "netocracies": horizontal meritocracies which use the connective power of the network to out-compete slower, more rigid hierarchies. It is odd that, while the network has transformed nearly everything it has touched, the purveyors of those networks – the carriers – somehow seem immune to those transformative qualities. Telecommunications firms are – and have ever been – the very definition of hierarchical organizations. During the era of plain-old telephone service, the organizational form of the carrier was isomorphic to the form of the network. However, over the last decade, as the internal network has transitioned from circuit-switched to packet-switched, the institution has lost synchronization with the form of the network it provides to consumers. As each day passes, carriers move even further out of sync: this helps to explain the current disconnect between Telstra and Australians.

We are about to see an adjustment. First, the data on the network was broken into packets; now, the hardware of the network has followed. Telephone networks were centralized because they required explicit wiring from point to point; cellular networks are decentralized, but use licensed spectrum – which requires enormous capital resources. Both of these conditions created significant barriers to entry. But there is no need to use wires, nor is there any need to use licensed spectrum. The 2.4 GHz radio band is freely available for anyone to use, so long as that use stays below certain power limits. We now see a plethora of devices using that spectrum: cordless handsets, Bluetooth devices, and the all-but-ubiquitous 802.11 "WiFi" data networks. The chaos which broadcasters and governments had always claimed would be the by-product of unlicensed spectrum has, instead, become a wonderfully rich marketplace of products and services. The first generation of these products made connection to the centralized network even easier: cordless handsets liberated the telephone from the twisted-pair connection to the central office, while WiFi freed computers from heavy and clumsy RJ-45 jacks and CAT-5 cabling. While these devices had some intelligence, that intelligence centered on making and maintaining a connection to the centralized network.

Recently, advances in software have produced a new class of devices which create their own networks. Devices connected to these ad-hoc "mesh" networks act as peers in a swarm (similar to the participants in peer-to-peer filesharing), rather than clients within a hierarchical distribution system. These network peers share information about their evolving topology, forming a highly resilient fabric of connections. Devices maintain multiple connections to multiple nodes throughout the network, and a packet travels through the mesh along a non-deterministic path. While this was always the promise of TCP/IP networks, static routes through the network cloud are now the rule, because they provide greater efficiency, make it easier to maintain the routers and diagnose network problems, and keep maintenance costs down. But mesh networks are decentralized; there is no controlling authority, no central router providing an interconnection with a peer network. And – most significantly – mesh networks are now incredibly inexpensive to implement.
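To make this concrete, here is a toy sketch, in Python, of packets moving through a mesh. It is emphatically not Meraki's actual routing protocol – real mesh software weighs link quality, congestion and retransmission – but it shows the essential shape of the idea: every node knows only its immediate neighbors, yet a packet finds its own, non-deterministic way across the swarm. All of the names are invented for illustration.

    import random

    class MeshNode:
        """A toy mesh peer: it knows only the nodes within radio range."""

        def __init__(self, name):
            self.name = name
            self.neighbors = []            # peers currently in radio range

        def link(self, other):
            # Radio links are symmetric: each node learns of the other.
            self.neighbors.append(other)
            other.neighbors.append(self)

        def route(self, dest, seen=None):
            """Flood toward dest, avoiding loops; return the path taken."""
            seen = seen or set()
            seen.add(self.name)
            if self is dest:
                return [self.name]
            # Neighbors are tried in random order, so the path through
            # the mesh is non-deterministic, exactly as described above.
            for hop in random.sample(self.neighbors, len(self.neighbors)):
                if hop.name not in seen:
                    path = hop.route(dest, seen)
                    if path:
                        return [self.name] + path
            return None                    # this branch is a dead end

    # Five rooftops, two possible routes from a to e.
    a, b, c, d, e = (MeshNode(n) for n in "abcde")
    a.link(b); b.link(c); c.link(e)        # route one: a-b-c-e
    a.link(d); d.link(e)                   # route two: a-d-e
    print(a.route(e))                      # ['a', 'd', 'e'] or ['a', 'b', 'c', 'e']

Delete any single node from the example and the packet still finds a path: the resilience lives in the topology, not in any central router.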

Earlier this year, the US-based firm Meraki launched their long-awaited Meraki Mini wireless mesh router. For about AUD $60, plus the cost of electricity, anyone can become a peer within a wireless mesh network providing speeds of up to 50 megabits per second. The device is deceptively simple; it's just an 802.11 transceiver paired with a single-chip computer running Linux and Meraki's mesh routing software – which was developed by Meraki's founders while they were Ph.D. students at the Massachusetts Institute of Technology. The 802.11 radio within the Meraki Mini has been highly optimized for long-distance communication. Instead of the normal 50 meter radius associated with WiFi, the Meraki Mini provides coverage over at least 250 meters – and, depending upon topography, can reach 750 meters. Let me put that in context, by showing you the coverage I'll get when I install a Meraki Mini on my sixth-floor balcony in Surry Hills:

[Image: projected Meraki Mini coverage from my balcony in Surry Hills]

From my flat, I will be able to reach all the way from Central Station to Riley Street, from Belvoir Street over to Albion Street. Thousands of people will be within range of my network access point. Of course, if all of them chose to use my single point of access, my Meraki Mini would be swamped with traffic. It simply wouldn't be able to cope. But – given that the Meraki Mini is cheaper than most WiFi access points available at Harvey Norman – it's likely that many people within that radius would install their own access points. These access points would detect each other's presence, forming a self-organizing mesh network. If every WiFi access point visible from my flat (I can sense between 10 and 20 of them at any given time) were replaced with a Meraki Mini – or, perhaps more significantly, if these WiFi access points were given firmware upgrades which allowed them to interoperate with the mesh networks created by the Meraki Mini – my Surry Hills neighborhood would suddenly be blanketed in a highly resilient and wholly pervasive wireless high-speed network, at nearly no cost to the users of that network. In other words, this could all be done in software. The infrastructure is already deployed.
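The arithmetic behind "thousands of people" is worth spelling out: coverage area grows with the square of the radius, so a five-fold increase in range means a twenty-five-fold increase in area. A quick back-of-the-envelope check in Python – the radii are the figures quoted above, while the population density is my own rough assumption for inner Sydney:

    import math

    wifi_radius, mini_radius = 50, 250          # coverage radii in meters
    print((mini_radius / wifi_radius) ** 2)     # 25.0: 25x the covered area

    # Assume roughly 10,000 residents per square kilometer in Surry Hills.
    coverage_km2 = math.pi * (mini_radius / 1000.0) ** 2
    print(round(coverage_km2 * 10000))          # ~1963 people within range

At the 750-meter range, the same sums put close to twenty thousand people inside a single node's footprint.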

As some of you have no doubt noted, this network is highly local; while there are high-speed connections within the wireless cloud, the mesh doesn't necessarily have connections to the global Internet. Meraki Minis can act as gateways, routing packets through their Ethernet interfaces to the broader Internet, and Meraki recommends that at least every tenth device in a mesh be so equipped. But it's not strictly necessary – and, if the mesh is dedicated to a particular task, completely unnecessary. Let us say, for example, that I wanted to provide a low-cost IPTV service to the residents of Surry Hills. I could create a "head-end" in my own flat, and provide my "subscribers" with Meraki Minis and an inexpensive set-top box to interface with their televisions. For a total install cost of perhaps $300 per subscriber, I could give everyone in Surry Hills a full IPTV service (though it's unlikely I could provide HD quality). No wiring required, no high-speed broadband buildout, no billions of dollars, no regulatory relaxation. I could just do it. And collect both subscriber fees and advertiser revenues. No Telstra. No Group of Nine. No blessing from Senator Coonan. No go-over by the ACCC. The technology is all in place, today.

Here’s a news report – almost a year old – which makes the point quite well:

I bring up this thought experiment to drive home my final point: Telstra isn't needed. It might not even be wanted. We have so many other avenues open to us to create and deploy high-speed broadband services that it's likely Telstra has simply missed the boat. You've waited too long, dilly-dallying while the audience and the technology have made you obsolete. The audience doesn't want the same few hundred channels they can get on FoxTel: they want the nearly endless stream of salience they can get from YouTube. The technology is no longer about centralized distribution networks: it favors light, flexible, inexpensive mesh networks. Both of these are long-term trends, and both will only grow more pronounced as the years pass. In the years it takes Telstra – or whoever gets the blessing of the regulators – to build out this high-speed broadband network, you will be fighting a rearguard action, as both the audience and the technology of the network race on past you. They have already passed you by, and it's been my task this morning to point this out. You simply do not matter.

This doesn't mean it's game over. I don't want you to report to Sol Trujillo that it's time to have a quick fire-sale of Telstra's assets. But it does mean you need to radically rethink your business – right now. In the age of pervasive peer-production, paired with the advent of cheap wireless mesh networks, your best option is to become a high-quality connection to the global Internet – in short, a commodity. All of this pervasive wireless networking will engender an incredible demand for bandwidth; the more people are connected together, the more they want to be connected together. That's the one inarguable truth we can glean from 160 years of electric communication. Telstra has the infrastructure to leverage itself into becoming the most reliable data carrier connecting Australians to the global Internet. It isn't glamorous, but it is a business with high barriers to entry, and it promises a steadily growing (if unexciting) continuing revenue stream. But if you continue to base your plans around selling Australians services we don't want, you are building your castles on the sand. And the tide is rising.

Rearranging the Deck Chairs

I. Everything Must Go!

It's merger season in Australia. Everything must go! Just moments after the new media ownership rules received the Governor-General's royal assent, James Packer sold off his family's crown jewel, the NINE NETWORK – consistently Australia's highest-rated television broadcaster since its inception, fifty years ago – along with a basket of other media properties. This sale effectively doubled his already sizeable fortune (now hovering at close to 8 billion Australian dollars) and gave him plenty of cash to pursue the 21st century's real cash cow: gambling. In an era when all media is more-or-less instantaneously accessible, anywhere, from anyone, the value of a media distribution empire – built on the toppling pillars of government regulation of the airwaves and a cheap stream of high-quality American television programming – is rapidly approaching zero. Yes, audiences might still tune in to watch the footy – live broadcasting being uniquely exempt from the pressures of the economics of the network – but even there the number of distribution choices is growing, with cable, satellite and IPTV all demanding a slice of the audience. Television isn't dying, but it no longer guarantees returns. Time for Packer to turn his attention to the emerging commodity of the third millennium: experience. You can't download experience: you can only live through it. For those who find the dopamine hit of a well-placed wager the experiential sine qua non, there Packer will be, Asia's croupier, ready to collect his winnings. Who can blame him? He (and, undoubtedly, his well-paid advisors) has read the trend lines correctly: the mainstream media is dying, slowly starved of attention.

The transformation which led to the sale of NINE NETWORK is epochal, yet almost entirely subterranean. It isn’t as though everyone suddenly switched off the telly in favor of YouTube. It looks more like death from a thousand cuts: DVDs, video games, iPods, and YouTube have all steered eyeballs away from the broadcast spectrum toward something both entirely digital and (for that reason) ultimately pervasive. Chip away at a monolith long enough and you’re left with a pile of rubble and dust.

On a somewhat more modest scale, other media moguls in Australia have begun to hedge their bets. Kerry Stokes, the owner of Channel 7, made a strategic investment in Western Australia Publishing. NEWS Corporation, the original Australian media empire, purchased a minority stake in Fairfax, the nation's largest newspaper publisher (and is eyeing a takeover of Canadian-owned Channel TEN). Watching these broadcasters buy into newspapers, four decades after broadcast news effectively delivered the death-blow to newspaper publishing, highlights the sense of desperation: they're hoping that something, somewhere in the mainstream media will remain profitable. Yet there are substantial reasons to expect that these long-shot bets will fail to pay out.

II. The Vanilla Republic

It’s election season in America. Everyone must go! The mood of the electorate in the darkening days of 2006 could best be described as surly. An undercurrent of rage and exasperation afflicts the body politic. This may result in a left-wing shift in the American political landscape, but we’re still two weeks away from knowing. Whatever the outcome, this electoral cycle signifies another epochal change: the mainstream media have lost their lead as the reporters of political news. The public at large views the mainstream media skeptically – these were, after all, the same organizations which whipped the republic into a frenzied war-fever – and, with the regret typical of a very disgruntled buyer, Americans are refusing to return to the dealership for this year’s model. In previous years, this would have left voters in the dark: it was either the mainstream media or ignorance. But, in the two years since the Presidential election, the “netroots” movement has flowered into a vital and flexible apparatus for news reportage, commentary and strategic thinking. Although the netroots movement is most often associated with left-wing politics, both sides of the political spectrum have learned to harness blogs, wikis, feeds and hyperdistribution services such as YouTube for their own political ends. There is nothing quintessentially new about this; modern political parties, emerging in Restoration-era London, used printing presses, broadsheets and daily newspapers – freely deposited in the city’s thousands of coffeehouses – as the blogs of their era. Political news moved very quickly in 17th-century England, to the endless consternation of King Charles II and his censors.

When broadcast media monopolized all forms of reportage – including political reporting – the mass mind of the 20th century slotted into a middle-of-the-road political persuasion. Neither too liberal nor too conservative, the mainstream media fostered a "Vanilla Republic," where centrist values came to dominate political discourse. Of course, the definition of "centrist" values is itself highly contentious: who defines the center? The right wing decries the excesses of "liberal bias" in the media, while the left wing points to the "agenda of the owners," the multi-billionaire stakeholders in these broadcast empires. This struggle for control over the definition of the center characterized political debate at the dawn of the 21st century – a debate which has now been eclipsed, or, more precisely, overrun by events.

In April 2004, Markos Moulitsas Zúniga, a US army veteran who had been raised in civil-war-torn El Salvador, founded dKosopedia, a wiki designed to be a clearing-house for all sorts of information relating to left-wing netroots activities. (The name is a nod to Wikipedia.) While the first-order effect of the network is to gather individuals together into a community, once the community has formed, it begins to explore the bounds of its collective intelligence. Political junkies are the kind of passionate amateurs who defy the neat equation of amateur with amateurish. While they are not professional – meaning that they are not in the employ of politicians or political parties – political junkies are intensely well-informed, regarding this as both a civic virtue and a moral imperative. Political junkies work not for power, but for the greater good. (That opposing parties in political debate demonize their opponents as evil is only to be expected, given this frame of mind.) The greater good has two dimensions: to those outside the community, it is represented as us vs. them; internally, it is articulated through the community's social network: those with particular areas of expertise are recognized for their contributions, and their standing in the community rises appropriately.

This same process animates dKosopedia's parent site, Daily Kos (dKos), a political blog where any member can freely write entries – known as "diaries" – on any subject of interest: political, cultural or (more rarely) nearly anything else. The very best of these contributors became the "front page" authors of Daily Kos, their entries presented to the entire community; but part of the responsibility of a front-page contributor is to constantly scan the ever-growing set of diaries, looking for the best posts among them to "bump" to front-page status. (This article will be cross-posted to my dKos diary, and we'll see what happens to it.) Any dKos member can comment on any post, so any community member – whether a regular diarist or a regular reader – can add their input to the conversation. The strongly self-reinforcing behavior of participation encourages "Kossacks" (as they style themselves) to share, pool, and disseminate the wealth of information gathered by over two million readers. Daily Kos has grown nearly exponentially since its founding, and looks set to reach its highest traffic levels ever as the mid-term elections approach.

III. My Left Eyeball

Salience is the singular quality of information: how much does this matter to me? In a world of restricted media choices, salience is a best-fit affair; something simply needs to be relevant enough to garner attention. In the era of hyperdistribution, salience is a laser-like quality; when there are a million sites to read, a million videos to watch, a million songs to listen to, individuals tailor their choices according to the specifics of their passions. Just a few years ago – as the number of media choices began to grow explosively – this took considerable effort. Today, with the rise of "viral" distribution techniques, it's much more straightforward. Although most of us still rely on ad-hoc methods – polling our friends and colleagues in search of the salient – it's become so easy to find, filter, and forward media through our social networks that we have each become our own broadcasters, transmitting our own passions through the network. Where systems have been organized around this principle – for instance, YouTube, or Daily Kos – this information flow is greatly accelerated, and the consequential outcomes amplified. A Sick Puppies video posted to YouTube gets four million views in a month, and ends up on NINE NETWORK's 60 Minutes broadcast. A Democratic senatorial primary in Connecticut becomes the focus of national interest – a referendum on the Iraq war – because millions of Kossacks focus attention on the contest.

Attention engenders salience, just as salience engenders attention. Salience satisfied reinforces relationship; to have received something of interest makes it more likely that I will receive something of interest in the future. This is the psychological engine which powers YouTube and Daily Kos, and, as this relationship deepens, it tends to have a zero-sum effect on its participants' attention. Minutes spent watching YouTube videos are advertising dollars lost to NINE NETWORK. Hours spent reading Daily Kos are eyeballs and click-throughs lost to The New York Times. Furthermore, salience drives out the non-salient. It isn't simply that a Kossack will read less of the Times; eventually they'll read it rarely, if at all. Salience has been satisfied, so the search is over.

While this process seems inexorable, given the trends in media, only very recently has it become a ground-truth reality. Just this week I quipped to one of my friends – equally a devotee of Daily Kos – that I wanted "an IV drip of dKos into my left eyeball." I keep the RSS feed of Daily Kos open all the time, waiting for the steady drip of new posts. I am, to some degree, addicted. But, while I always hunger for more, I am also satisfied. When I articulated the passion I now had for Daily Kos, I also realized that I hadn't been checking the Times as frequently as before – perhaps once a day – and that I'd completely abandoned CNN. Neither website possessed the salience needed to hold my attention.

I am certainly more technically adept than the average user of the network; my media usage patterns tend to lead broader trends in the culture. Yet there is strong evidence that I am hardly alone in this new era of salience. How do I know this? I recently received a link – through two blogs, Daily Kos and The Left Coaster – to a political campaign advertisement for Missouri senatorial candidate Claire McCaskill. The ad, featuring Michael J. Fox, diagnosed with an early-onset form of Parkinson's Disease, clearly shows him suffering the worst effects of the disorder. Within a few hours of the ad going up on the McCaskill website, it had already been viewed hundreds of thousands of times – and probably millions. People are emailing the link to the ad (conveniently provided below the video window, to spur on viral distribution) all around the country, and likely throughout the world. "All politics is local," Fox says. "But it's not always the case." This, in a nutshell, describes both the political and the media landscapes of the 21st century. Nothing can be kept in a box. Everything escapes.

Twenty-five years ago, in The Third Wave, Alvin Toffler predicted the "demassification of media." Looking at the ever-multiplying number of magazines and television channels, he foresaw a time when the mass market would fragment utterly, into an atomic polity composed entirely of individuals. Writing before the Web (and before the era of the personal computer), he offered no technological explanation for how demassification would come to pass. Yet the trend lines seemed obvious.

The network has grown to cover every corner of the planet in the quarter-century since the publication of The Third Wave – over two billion mobile phones, and nearly a billion networked computers. A third of the world can be reached, and – more significantly – can reach out. Photographs of bombings in the London Underground, captured on mobile phone cameras, reach Flickr before they're broadcast on the BBC. Islamic insurgents in Iraq videotape, encode and upload their IED attacks to filesharing networks. China fights a losing battle to restrict the free flow of information – while its citizens buy more mobile phones, every year, than the total number ever purchased in the United States. Give individuals a network, and – sooner rather than later – they'll become broadcasters.

One final, crucial technological element completes the transition into the era of demassification: the release of Microsoft's Internet Explorer version 7.0. Long delayed, this most widely used of all web browsers finally includes support for RSS – the technology behind "feeds." Suddenly, half a billion PC users can access the enormous wealth of individually-produced and individually-tailored news resources which have grown up over the last five years. But they can also create their own feeds, either by aggregating resources they've found elsewhere, or by creating new ones. The revolution that began with Gutenberg is now nearly complete; while the Web turned the network into a printing press, RSS gives us the ability to hyperdistribute publications so that anyone, anywhere, can reach everyone, everywhere.
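For readers who haven't yet worked with feeds, the entire find-filter-forward loop fits in a few lines. Here is a minimal sketch using Python's feedparser library; the feed URL and the keyword are placeholders, invented for illustration:

    import feedparser

    # Find: poll a feed (nearly every blog and news site publishes one).
    feed = feedparser.parse("https://example.com/feed.rss")

    # Filter: keep only the entries that are salient to me.
    salient = [e for e in feed.entries if "election" in e.title.lower()]

    # Forward: pass the links along my social network (here, just print them).
    for entry in salient:
        print(entry.title, "->", entry.link)

Every RSS aggregator, from the humblest script to Internet Explorer 7.0, is an elaboration of this loop.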

Now all is dissolution. The mainstream media will remain potent for some time, centers for the creation of content, but they must now face the rise of the amateurs: a battle of hundreds versus billions. To compete, the media must atomize, delivering discrete chunks of content through every available feed. They will be forced to move from distribution to seduction: distribution has been democratized, so only the seduction of salience will carry their messages around the network. But the amateurs are already masters of this game, having grown up in an environment where salience forms the only selection pressure. This is the time of the amateur, and this is their chosen battlefield. The outcome is inevitable. Deck chairs, meet Titanic.

Why Copyright Doesn’t Matter

 

"If you overvalue possessions, people begin to steal."
– Lao Tzu, Tao Te Ching

I

Although the New York Times found the new film Alternative Freedom a sloppy, disjointed, jingoistic mess, the movie does break new ground, highlighting the growing threat to public expression posed by restrictive copyright laws and digital rights management technologies. Supporting the "copyfight" thesis – that copyright law is slowly strangling the public's ability to sample, remix and redistribute the ideas sold to them by entertainment companies – Alternative Freedom ventures beyond these familiar tropes: as video game systems, mobile phones and even printer toner cartridges become ever more restrictive in the way they operate, we're being sold devices which dictate their own terms of use. Any deviation from that usage is, in effect, a violation of copyright law. With appropriate legal penalties.

Coincidentally, this week the US Congress began to deliberate strong, almost draconian extensions to the nation's copyright laws, adding odious criminal penalties for what have – until now – been civil violations. Large-scale, commercial violators of copyright have always been criminals; now even the casual user could become a felon for any redistribution of content under copyright. As peer-to-peer filesharing networks grow ever broader in scope, ever more difficult to detect, and ever harder to disrupt and destroy, the pressure builds. In essence, this is the entertainment industry's last legal gasp at maintaining control over the distribution of its productions.

I have previously discussed the futility of "economic censorship" – which is what this proposed law amounts to – and I can see nothing in these new laws which will slow the inexorable slide toward an era where any media distributed anywhere on the planet becomes instantly available everywhere on the planet, to everyone. This is the essence of "hyperdistribution," a recently-discovered, emergent quality of our communications networks. You can't make a network that won't hyperdistribute content throughout its span – or rather, if you did, it wouldn't look anything like the networks we use today. It seems unlikely that we would suddenly replace our entire global network infrastructure with something that would give us significantly less capability. Yet this must happen, if the long march to hyperdistribution is to be stopped.

II

This is a war for eyeballs and audiences. An entertainment producer spends significant time and money carefully crafting content for a mass audience, expecting that audience to pay for the privilege of enjoying the production. This is possible only insofar as access to the content can be absolutely restricted. If the producer only makes physical prints of a film, and only shows it in a theatre where everyone has been thoroughly searched for any sort of recording device (these days, that list would include both mobile phones and iPods), they might be able to restrict piracy. But only if there are no digital intermediates of the film, no screeners for reviewers mailed out on DVDs, no digital print for projection in the latest whiz-bang movie theatres. As soon as there is any digital representation of the production, copies of it will begin to multiply. It's in the nature of the bits to generate more and more copies of themselves. These bits eventually make their way onto the network, and hyperdistribution begins.

There is, in this evaluation, an assumption that this content has value to an audience. Many films are made each year – in Hollywood, Bollywood, Hong Kong, and throughout the world – yet, most of the time, people don't care to see them. Films are big, complex, and frequently flawed; there is no such thing as a perfect film, and, more often than not, a film's flaws outweigh its strengths, so the film fails. This wasn't an issue before the advent of television – before 1947, film was the only way to enjoy the moving image. Over the last sixty years, the film industry has learned how to accommodate television – with cable and free-to-air broadcasts of its films, and, most profitably, with the huge industry created by the VCR and the DVD. Even so, in the era of the VCR viewers had perhaps five or six channels of broadcast television to choose from. When the DVD was introduced, viewers had perhaps fifty or sixty channels to watch – more substantial, but still nothing to be entirely worried about. Now the number of potential viewing choices is essentially infinite. In a burst of exponential growth, the video sharing site YouTube is about to surpass CNN in web traffic, and in just one week went from 35 million videos viewed to over 40 million. That kind of growth is clearly unsustainable, but it is just as clearly an indication that YouTube is becoming a foundational web service, as significant as Google or Wikipedia. And this is why copyright doesn't matter.

III

It's frequently noted that much of the content up on YouTube is presented in violation of someone else's copyright. It might be little snippets from South Park, The Daily Show, or Saturday Night Live. The media megacorporations who control those copyrights are constantly in contact with YouTube, asking them to remove this content as quickly as it appears – and YouTube is happy to oblige. But YouTube is subject to "swarm effects," so as soon as something is removed, someone else, from somewhere else, posts it again. Anything that is popular has a life of its own – outside of its creator's control – and YouTube has become the focal point of this vitality.

At the moment, many of the popular videos on YouTube fall into this category of content-in-violation-of-copyright. But not all of them. There's plenty on YouTube which has been posted by people who want to share their work with others. A lot of this is instructional, informational, or just plain odd. It's outside the mainstream, was never meant to be mainstream, and yet, because it's up there, and because so many people are looking to YouTube for a moment's diversion or enlightenment, it tends, over time, to find its audience. Once something has found just one member of its audience, it's quickly shared throughout that person's social network, and rapidly reaches nearly the entirety of that audience. That's the find-filter-forward operation of a social network in an era of hyperdistribution and microaudiences. YouTube is enabling it. That's why YouTube has gotten so popular, so quickly: it's filling an essential need for the microaudience.

Is there a place for professionally-produced content in an age of social networks and microaudiences? This is the big question, the question that no one can answer, because the answer can only emerge over time. Attention is a zero-sum game: if I'm watching this video on YouTube, I'm not watching that TV show or movie. If I'm thoroughly caught up in the five YouTube links I get sent each day – which will quickly become fifty, then five hundred – how can I find any time to watch the next Hollywood special effects extravaganza? And why would I want to? It's not what my friends are watching: they've sent me links to what they're watching – and that's on YouTube.

So go ahead, Congress: kill the entertainment industry by doing its bidding. Let the entertainment companies lock their content up so completely that its utility – with respect to the network – approaches zero. If people can't find-filter-forward content, it won't exist for them. Lock something up, and it becomes less and less important, until no one cares about it at all. People are increasingly concerned with the media they can share freely, and this points to a future where the amateur trumps the professional, because the amateur understands the first economic principle of hyperdistribution: the more something is shared, the more valuable it becomes.

The Ecologies of Wikipedia

I

Last week, another bombshell exploded in the slow, cold war between Encyclopedia Britannica and Wikipedia. Toward the end of 2005, the prestigious British science journal Nature conducted an analysis of a number of articles in both Britannica and Wikipedia, and found that, on the whole, the two publications were roughly equally accurate – and equally inaccurate. Now Britannica claims that Nature cooked its data – there's been a fair bit of that going on in the scientific community lately – and that, in fact, Britannica is far more accurate than upstart Wikipedia.

Does anybody care?

For a brief shining moment in 1999, Encyclopedia Britannica was freely available online in its full glory, and it was good. So good, in fact, that the site crashed shortly after its launch – many, many people wanted the high-quality facts in Britannica. These millions of visitors quickly overloaded its servers. Britannica upgraded its web infrastructure, relaunched itself, and quickly became one of the hundred-or-so most popular sites on the web.

Somehow Britannica managed to lose money. The reasons for this colossal case of business mismanagement remain partly shrouded in mystery; true, web-based advertising paid more in 1999 than it did in 2001, but with a steady stream of millions of visitors a day, it shouldn't have been hard to earn a fair bit of money. Yahoo! does it. Google does it. Somehow, Britannica couldn't, or wouldn't, and this failure was the death of Britannica. For, late in 2000, Britannica locked itself behind a "walled garden," charging subscribers $6.95 a month for access, and promptly lost nearly 100% of its audience. Even worse, it laid the groundwork for the perfect competitor: Wikipedia.

I first heard of Wikipedia in January of 2002, at a conference celebrating the life and achievements of Douglas Engelbart, who gave the world the mouse, hypertext and computer videoconferencing – and did it all nearly 40 years ago. Engelbart believes that computers can be used as "intelligence amplifiers," and he's spent his lifetime developing techniques to make us all more intellectually productive. But early in 2002, even as we celebrated his life, Engelbart seemed downcast, preoccupied with the failure of the World Wide Web to live up to his expectations as a medium for intelligence amplification. Yet – during a series of presentations about the way his work has influenced others – one of the presenters showed a nifty web technology known as "Wiki". Named for the Hawai'ian word for "quick," a wiki is really nothing more than a technology which supports editable web pages. That seems simple enough, but the overall effect is profound: with Wiki, every web page becomes dynamic, and every person who visits a web page can leave their own mark upon it. Instead of performing as a presentation medium, like a newspaper, Wiki turned the web – potentially the entire web – into a palimpsest. Erase what you don't like on a web page, and begin again. Then multiply that by the billion-plus web pages in the world of 2002.
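The mechanism really is that simple. Here is a toy sketch, in Python, of the wiki idea: pages anyone can rewrite, with every revision retained, so that each page behaves like a palimpsest. (A real wiki engine adds markup, diffs and conflict handling; the page name and authors below are invented for illustration.)

    from collections import defaultdict

    history = defaultdict(list)            # page name -> list of revisions

    def edit(page, new_text, author):
        """Anyone may overwrite a page; the old text is kept, not lost."""
        history[page].append((author, new_text))

    def read(page):
        """Readers always see the most recent revision."""
        return history[page][-1][1] if history[page] else ""

    edit("GilmoresLaw", "The net interprets censorship as damage.", "alice")
    edit("GilmoresLaw", "The net interprets censorship as damage "
                        "and routes around it.", "bob")
    print(read("GilmoresLaw"))             # bob's revision wins...
    print(history["GilmoresLaw"][0])       # ...but alice's mark remains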

Somewhere during that talk, the presenter showed us “Wikipedia” – an editable encyclopedia – as a shining example of the power of Wiki technology. At the time, Wikipedia had about 40,000 entries on a fairly broad range of topics. Hardly comprehensive, but not bad for a freely available and user-created resource.

As of today, Wikipedia has 1,054,208 articles in the English language, and nearly double that count if all entries in all languages are added together. That's about eight times the size of Britannica. It seems that people aren't just hungry for facts: they're more than willing to add their own facts to the commonweal. Britannica, obsessed with rapidly-obsolescing models of professional knowledge production, sees Wikipedia as little more than a million monkeys typing at a million keyboards. Wikipedia lacks every time-honored methodology of review, fact-checking and careful consideration which Britannica considers essential to the creation of an encyclopedia. Yet, somehow, Wikipedia works – and works far better than Britannica. Late in 2005 Wikipedia zoomed into the top twenty most-visited websites in the world. Despite Britannica's critiques, the online world has voted with its virtual feet.

The question has never been "Is Britannica better than Wikipedia?" The question – always, and only – is this: is Wikipedia good enough? The answer, it's now apparent, is a resounding yes. Instant access to Wikipedia's entries – even if they are slightly less accurate – always trumps the careful and carefully protected information of Britannica. In some ways, this is another example of Gilmore's Law at work: Britannica got locked up – damaged, if you will – so the net routed around it by creating an effective alternative.

II

This week I became a person. Oh, I had an existence before – circles of friends, some prominence in the Australian media – but I lacked that essential signifier of individual presence in the 21st century: my own Wikipedia entry. I had considered creating my own entry, but could never bring myself to do it, out of some combination of humility and fear. How can you write about yourself with any accuracy? And what if others learn you wrote your own Wikipedia entry? Wikipedia exposes the editing history of every entry – so a self-created entry wouldn’t look very good. Instead, I talked one of my friends into doing it.
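That transparency is worth pausing over: the complete revision record of any entry is public, and anyone can pull it down. Here’s a minimal sketch, again in Python with the requests library and the public MediaWiki API (the endpoint and parameters are real; the page title and function name are merely illustrative):

    import requests

    def edit_history(title: str, limit: int = 10) -> list[dict]:
        """Fetch the most recent revisions of one Wikipedia entry."""
        resp = requests.get(
            "https://en.wikipedia.org/w/api.php",
            params={
                "action": "query",
                "prop": "revisions",
                "titles": title,
                "rvprop": "user|timestamp|comment",
                "rvlimit": limit,
                "format": "json",
            },
            headers={"User-Agent": "edit-history-sketch/0.1"},
            timeout=10,
        )
        resp.raise_for_status()
        # Results are keyed by internal page ID; take the single page.
        page = next(iter(resp.json()["query"]["pages"].values()))
        return page.get("revisions", [])

    # Who touched this entry, when, and what did they say about it?
    for rev in edit_history("Douglas Engelbart"):
        print(rev["timestamp"], rev["user"], rev.get("comment", ""))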

This wasn’t pure vanity on my part; I am mentioned at several points in Wikipedia – most notably in the articles on VRML and “peer-to-peer (meme)” – both of which reflect some of the significant work I’ve done over the last decade. Given these mentions, it made sense to have my own Wikipedia entry. So, as a birthday present, my friend Gregory Pleshaw offered to author it. I was happy to accept, and offered him all the support he might need.

In early 2006, Gregory created a basic page about me. Within 18 hours, that page no longer existed on Wikipedia, having been wiped out by one of the wizards of Wikipedia – the mavens who make it their life’s work to see that standards of content and factuality are maintained across the site. I fired off a message to the wizard in question, and asked her why the page had been deleted. She, in turn, pointed me to an extensive set of guidelines for biographical pages about living persons. Not everyone can have their own Wikipedia entry; you have to be a person of some renown. There are different ways of measuring that fame, such as whether you’re regularly written about in the media, or the number of hits a Google search of your name returns. The article itself must clearly lay out the reason(s) why this person is well known. Then – and only then – will the biographical entry be accepted into Wikipedia.

Here’s the first thing that most people don’t yet know about Wikipedia: it’s not just a dumping ground for random facts. There are standards, and there are several thousand people who make it their business to keep Wikipedia clean, pure and up to the standards the Wikipedia community has developed for its own good. If an entry doesn’t meet those quality requirements, it gets flushed. Different pages have different requirements: an entry about an obscure theorem of mathematics can drone on in decidedly technical language – with lots of nice mathematical formulae – while an entry on a more prosaic subject must be written clearly and accessibly, and must link broadly, both within Wikipedia and out to other sources.

Wikipedia is evolving its own internal ecology of standards and practices, and these are necessarily embodied both in its editors and its readers – given that the line between an editor and a reader is modal. I can flip back and forth between self-as-editor and self-as-reader several times a minute. Most pages I would never want to edit – except to correct an obvious spelling mistake or grammatical error – but some pages, such as my own, or those on subjects where I possess some recognized authority, demand my input. I apply my expertise, and improve Wikipedia. Some others have taken on Wikipedia as a whole – these are the wizards, recognized authorities on Wikipedia itself – and they add the necessary ecology of self-reflection which makes the whole process so successful.

Thus, we’re seeing the emergence of a “priestly class” of Wikipedians, who have been judged by their peers as experts in Wikipedia – initiates into the mysteries – and hence have been given the keys to the kingdom: the ability to delete unsatisfactory entries or edits. This means that Wikipedia has become a civilization in its own right – after all, we judge the birth of civilization in the Near East by the emergence of the priestly classes in Egypt and Mesopotamia. These priests gave us writing and arithmetic and centralized worship – necessary innovations for civilization.

The downside of this is obvious: the priestly class places all of the rest of us at a remove from Wikipedia. We can create, but they have the power to expunge the record, cleansing it of apostasy. They are developing their own bureaucracy, their own arcana, their own mysteries. As time goes on, it will become ever more difficult to fathom their logic. Increasingly, we will rely on faith alone: faith that the priests have got it right, that this collective project of human civilization is headed in the right direction. And that means we’ll need to find individuals who can act as intercessors between ourselves and the gods.

III

When Gregory Pleshaw and I first discussed the creation of my Wikipedia entry, he mentioned that he might offer this as a service to some of his clients. I had a flash, and in that moment foresaw the future of Wikipedia, the next logical point in its evolution: Wikipedia is about to become a commercial ecology. We will soon see the rise of a professional class of Wikipedians, who – because of their professional relationships – can never be part of the priestly class, for fear of conflicts of interest, but who will make it their life’s work to create high-quality entries for those individuals and organizations which recognize the importance of Wikipedia as the definitive human reference work.

While I contend that this will be honest work, there will undoubtedly be any number of communities that object to it, because it seems to strike directly against the idea of a free and freely available resource. First and foremost among the protesters will likely be the priestly class of Wikipedians, who might see the emergence of any commercial ecology as a “pollution” of the ideals of Wikipedia. Nothing could be further from the truth: in fact, a commercial ecology is the ultimate validation of Wikipedia. If Wikipedia is so important, so powerful, so vital that people can earn a living by writing high-quality Wikipedia entries, doesn’t that mean Wikipedia has met its own standards of quality? A professionally-created entry isn’t locked away. It isn’t owned. Like everything else in Wikipedia, it is given away freely under Wikipedia’s free content license. Even though someone paid a professional to create an entry, that rule doesn’t change. In effect, they’re paying to create something that will be given away. That, as I’ve already written elsewhere, is a smart 21st-century business practice.

I suspect that we already have a professional class of Wikipedians among us; but, because of the nearly religious tenor of the open-source ethos that surrounds Wikipedia, they’re keeping a very low profile. Some of these folks are in marketing and communications agencies, tending Wikipedia entries as part of a client’s online presence and branding. Others, like Gregory Pleshaw, are freelancers who have stumbled into this newest commercial ecology of the Information Age. But make no mistake: these people already exist. Wikipedia is too powerful, too important, and too central to our lives for it to be any other way.

A final point: this is an example of a professional class emerging from a strictly amateur ecology. The amateur ecology won’t fundamentally change: all of us can continue to be Wikipedia dilettantes, dropping in occasionally to correct a word or add a sentence. But now, we amateurs will face real competition from professional Wikipedians, who have studied the mysteries of Wikipedia, and mastered them. That competition can only improve Wikipedia, making it even more comprehensive, reliable and indispensable.