Understanding Gilmore’s Law: Telecoms Edition

OR,
How I Quit Worrying and Learned to be a Commodity

Introduction


“The net interprets censorship as damage and routes around it.”
– John Gilmore

I read a very interesting article last week. It turns out that, despite its best efforts, the Communist government of the People’s Republic of China has failed to insulate its prodigious population from the outrageous truths to be found online. In the article from the Times, Wang Guoqing, a vice-minister in the information office of the Chinese cabinet, was quoted as saying, “It has been repeatedly proved that information blocking is like walking into a dead end.” If China, with all of the resources of a one-party state – able to “lock down” its internet service providers and direct their IP traffic through a “great firewall of China” – can not block the free flow of information, how can any government, organization, or institution, anywhere, hope to try?

Of course, we all chuckle a little bit when we see the Chinese attempt the Sisyphean task of damming the torrent of information which characterizes life in the 21st century. We, in the democratic West, know better, and pat ourselves on the back. But we are in no position to throw stones. Gilmore’s Law is not specifically tuned for political censorship; censorship simply means the willful withholding of information – for any reason. China does it for political reasons; in the West our reasons for censorship are primarily economic. Take, for example, the hullabaloo associated with the online release of Harry Potter and the Deathly Hallows, three days before its simultaneous, world-wide publication. It turns out that someone, somewhere, got a copy of the book, and laboriously photographed every single page of the 784-page text, bound these images together into a single PDF file, and then uploaded it to the global peer-to-peer filesharing networks. Everyone with a vested financial interest in the book – author J.K. Rowling, Bloomsbury and Scholastic publishing houses, film studio Warner Brothers – had been feeding the hype for the impending release, all focused around the 21st of July. An enormous pressure had been built up to “peek at the present” before it was formally unwrapped, and all it took was one single gap in the $20 million security system Bloomsbury had constructed to keep the text safely secure. Then it became a globally distributed media artifact. Curiously, Bloomsbury was reported as saying they thought it would only add to sales – if many people are reading the book now, even illegally, then even more people will want to be reading the book right now. Piracy, in this case, might be a good thing.

These two examples represent two data points which show the breadth and reach of Gilmore’s Law. Censorship, broadly defined, is anything which restricts the free flow of information. The barriers could be political, or they could be economic, or they could – as in the case immediately relevant today – be a nexus of the two. Broadband in Australia is neither purely an economic nor purely a political issue. In this, broadband reflects the Janus-like nature of Telstra, with one face turned outward, toward the markets, and another turned inward, toward the Federal Government. Even though Telstra is now (more or less) wholly privatized, the institutional memory of all those years as an arm of the Federal Government hasn’t yet faded. Telstra still behaves as though it has a political mandate, and is more than willing to use its near-monopoly economic strength to reinforce that impression.

Although seemingly unavoidable, given the established patterns of the organization, Telstra’s behavior has consequences. Telstra has engendered enormous resentment – both from its competitors and its customers – for its actions and attitude. It has recently pushed the Government too far (at least, publicly), and has been told to back off. What may not be as clear – and what I want to warn you of today – is how Telstra has sown the seeds of its own failure. What’s more, this may not be anything that Telstra can now avoid, because this is neither a regulatory nor an economic failure. It can not be remedied by any mechanism that Telstra has access to. Instead, it may require a top-down rethinking of the entire business.

I: Network Effects

For the past several thousand years, the fishermen of Kerala, on the southern coast of India, have sailed their dhows out into the Indian Ocean, lowered their nets, and hoped for the best. When the fishing is good, they come back to shore fully laden, and ready to sell their catch in the little fish markets that dot the coastline. A fisherman might have a favorite market, docking there only to find that half a dozen other dhows have had the same idea. In that market there are too many fish for sale that day, and the fisherman might not even earn enough from his catch to cover costs. Meanwhile, in a market just a few kilometers away, no fishing boats have docked, and there’s no fish available at any price. This fundamental chaos of the fish trade in Kerala has been a fact of life for a very long time.

Just a few years ago, several of India’s rapidly-growing wireless carriers strung GSM towers along the Kerala coast. This gives those carriers a signal reach of up to about 25km offshore – enough to be very useful for a fisherman. While mobile service in India is almost ridiculously cheap by Australian standards – many carriers charge a penny for an SMS, and a penny or two per minute for voice calls – a handset is still relatively expensive, even one such as the Nokia 1100, which was marketed specifically at emerging mobile markets, designed to be cheap and durable. Such a handset might cost a month’s profits for a fisherman – which makes it a serious investment. But, at some point in the last few years, one fisherman – probably a more prosperous one – bought a handset, and took it to sea. Then, perhaps quite accidentally, he learned, through a call ashore, of a market wanting for fish that day, brought his dhow to dock there, and made a handsome profit. After that, the word got around rapidly, and soon all of Kerala’s fishermen were sporting their own GSM handsets, calling into shore, making deals with fishmongers, acting as their own arbitrageurs, creating a true market where none had existed before. Today in Kerala the markets are almost always stocked with just enough fish; the fishmongers make a good price for their fish, and the fishermen themselves earn enough to fully recoup the cost of their handsets in just two months. Mobile service in Kerala has dramatically altered the economic prospects for these people.
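The economics of that story can be sketched in a toy simulation (a deliberately simplified illustration: the boat counts, market counts, and strategies are invented for the example, not drawn from Kerala). Boats that pick markets blindly glut some markets and starve others; boats that can call ahead spread the catch evenly:

```python
import random
import statistics

def distribute(boats, markets, informed, rng):
    """Return the number of boats docking at each market."""
    supply = [0] * markets
    for _ in range(boats):
        if informed:
            # A phone call ashore: dock where supply is currently lowest.
            choice = min(range(markets), key=lambda m: supply[m])
        else:
            # No information: pick a favorite market at random.
            choice = rng.randrange(markets)
        supply[choice] += 1
    return supply

rng = random.Random(42)
blind = distribute(boats=30, markets=10, informed=False, rng=rng)
informed = distribute(boats=30, markets=10, informed=True, rng=rng)

print("blind:   ", sorted(blind), "stdev =", round(statistics.pstdev(blind), 2))
print("informed:", sorted(informed), "stdev =", round(statistics.pstdev(informed), 2))
```

With information, thirty boats over ten markets settle at exactly three apiece; without it, the spread is chaotic – which is precisely the variance the handsets eliminated.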

This is not the only example: in Kenya farmers call ahead to the markets to learn which ones will have the best prices for their onions and maize; spice traders, again in Kerala, use SMS to create their own, far-flung bourse. Although we in the West generally associate mobile communications with affluent lifestyles, a significant number of microfinance loans made by Grameen Bank in Bangladesh, and others in Pakistan, India, Africa and South America are used to purchase mobile handsets – precisely because the correlation between access to mobile communications and earning potential has become so visible in the developing world. Grameen Bank has even started its own carrier, GrameenPhone, to service its microfinance clientele.

Although economists are beginning to recognize and document this curious relationship between economics and access to communication, it needs to be noted that this relationship was not predicted – by anyone. It happened all by itself, emerging from the interaction of individuals and the network. People – who are always the intelligent actors in the network – simply recognized the capabilities of the network, and put them to work. As we approach the watershed month of October 2007, when three billion people will be using mobile handsets, when half of humanity will be interconnected, we can expect more of the unexpected.

All of this means that none of us – even the most foresighted futurist – can know in advance what will happen when people are connected together in an electronic network. People themselves are too resourceful, and too intelligent, for their behavior to be modeled in any realistic way. We might be able to model their network usage – though even that has confounded the experts – but we can’t know why they’re using the network, nor what kind of second-order effects that usage will have on culture. Nor can we realistically provision for service offerings; people are more intelligent, and more useful, than any other service the carriers could hope to offer. The only truly successful service offering in mobile communications is SMS – because it provides an asynchronous communications channel between people. The essential feature of the network is simply that it connects people together, not that it connects them to services.

This strikes at the heart of the most avaricious aspects of the carriers’ long-term plans, which center around increasing the levels of services on offer, by the carrier, to the users of the network. Although this strategy has consistently proven to be a complete failure – consider Compuserve, Prodigy and AOL – it nevertheless has become the idée fixe of shareholder reports, corporate plans, and press releases. The network, we are told, will become increasingly more intelligent, more useful, and more valuable. But all of the history of the network argues directly against this. Nearly 40 years after its invention, the most successful service on the Internet is still electronic mail, the Internet’s own version of SMS. Although the Web has become an important service in its own right, it will never be as important as electronic mail, because electronic mail connects individuals directly to one another.

Although the network in Kerala was brought into being by the technology of GSM transponders and mobile handsets, the intelligence of the network truly does lie in the individuals who are connected by the network. Let’s run a little thought experiment, and imagine a world where all of India’s telecoms firms suffered a simultaneous catastrophic and long-lasting failure. (Perhaps they all went bankrupt.) Do you suppose that the fishermen would simply shrug their shoulders and go back to their old, chaotic market-making strategies? Hardly. Whether they used smoke signals, or semaphores, or mirrors on the seashore, they’d find some way to maintain those networks of communication – even in the absence of the technology of the network. The benefits of the network so far outweigh the costs of implementing it that, once created, networks can not be destroyed. The network will be rebuilt from whatever technology comes to hand – because the network is not the technology, but the individuals connected through it.

This is the kind of bold assertion that could get me into a lot of trouble; after all, everyone knows that the network is the towers, the routers, and the handsets which comprise its physical and logical layers. But if that were true, then we could deterministically predict the qualities and uses of networks well in advance of their deployment. The quintessence of the network is not a physical property; it is an emergent property of the interaction of the network’s users. And while people do persistently believe that there is some “magic” in the network, the source of that magic is the endlessly inventive intellects of the network’s users. When someone – anywhere in the network – invents a new use for the network, it propagates widely, and almost instantaneously, transmitted throughout the length and breadth of the network. The network amplifies the reach of its users, but it does not goad them into being inventive. The real service providers are the users of the network.

I hope this gives everyone here some pause; after all, it is widely known that the promise to bring a high-speed broadband network to Australia is paired with the desire to provide services on that network, including – most importantly – IPTV. It’s time to take a look at that promise with our new understanding of the real power of networks. It is under threat from two directions: the emergence of peer-produced content; and the dramatic, disruptive collapse in the price of high-speed wide-area networking, which will fully empower individuals to create their own network infrastructure.

II: DIYnet

Although nearly all high-speed broadband providers – which are, by and large, monopoly or formerly monopoly telcos – have bet the house on the sale of high-priced services to finance the build-out of high-speed (ADSL2/FTTN/FTTH) network infrastructure, it is not at all clear that these service offerings will be successful. Mobile carriers earn some revenue from ringtone and game sales, but this is a trivial income stream when compared to the fees they earn from carriage. Despite almost a decade of efforts to milk more ARPU from their customers, those same customers have proven stubbornly resistant to a continuous fleecing. The only thing that customers seem obviously willing to pay for is more connectivity – whether that’s more voice calls, more SMS, or more data.

What is most interesting is what these customers have done with this ever-increasing level of interconnectivity. These formerly passive consumers of entertainment have become their own media producers, and – perhaps more ominously, in this context – their own broadcasters. Anyone with a cheap webcam (or mobile handset), a cheap computer, and a broadband link can make and share their own videos. This trend had been growing for several years, but since the launch of YouTube, in 2005, it has rocketed into prominence. YouTube is now the 4th busiest website, world-wide, and perhaps 65% of all video downloads on the web take place through Google-owned properties. Amateur productions regularly garner tens of thousands of viewers – and sometimes millions.

We need to be very careful about how we judge the meaning of the word “amateur” in the context of peer-produced media. An amateur production may be produced with little or no funding, but that does not automatically mean it will appear clumsy to the audience. The rough edges of an amateur production are balanced out by a corresponding increase in salience – that is, the importance which the viewer attaches to the subject of the media. If something is compelling because it is important to us – something which we care passionately about – high production values do not enter into our assessment. Chad Hurley, one of the founders of YouTube, has remarked that the site has no “gold-standard” for production; in fact, YouTube’s gold-standard is salience – if the YouTube audience feels the work is important, audience members will share it within their own communities of interest. Sharing is the proof of salience.

After two years of media sharing, the audience for YouTube (which is now coincident with the global television audience in the developed world) has grown accustomed to being able to share salient media freely. This is another of the unexpected and unpredicted emergent effects of the intelligence of humans using the network. We now have an expectation that when we encounter some media we find highly salient, we should be able to forward it along within our social networks, sharing it within our communities of salience. But this is not the desire of many copyright holders, who collect their revenues by placing barriers to the access of media. This fundamental conflict, between the desire to share, as engendered by our own interactions with the network, and the desire of copyright holders to restrain media consumption to economic channels, has, thus far, been consistently resolved in favor of sharing. The copyright holders have tried to use the legal system as a bludgeon to change the behavior of the audience; this has not worked, nor will it ever. But, as the copyright holders resort to ever-more-draconian techniques to maintain control over the distribution of their works, the audience is presented with an ever-growing world of works that are meant to be shared. The danger here is that the audience is beginning to ignore works which they can not share freely, seeing them as “broken” in some fundamental way. Since sharing has now become an essential quality of media, the audience is simply reacting to a perceived defect in those works. In this sense, the media multinationals have been their own worst enemies; by restricting the ability of the audiences to share the works they control, they have helped to turn audiences toward works which audiences can distribute through their own “do-it-yourself” networks.

These DIYnets are now a permanent fixture of the media landscape, even as their forms evolve through YouTube playlists, RSS feeds, and sharing sites such as Facebook and Pownce. These networks exist entirely outside the regular and licensed channels of distribution; they are not suitable – legally or economically – for distribution via a commercial IPTV network. Telstra can not provide these DIYnets to its customers through its IPTV service – nor can any other broadband carrier. IPTV, to a carrier, means the distribution of a few hundred highly regularized television channels. While there will doubtless be a continuing market for mass entertainment, that audience is continuously being eroded by an expanding range of peer-produced programming whose salience continues to grow. In the long-term this, like so much in the world, will probably obey an 80/20 rule, with about 80 percent of the audience’s attention absorbed in peer-produced, highly-salient media, while 20 percent will come from mass-market, high-production-value works. It doesn’t make a lot of sense to bet the house on a service offering which will command such a small portion of the audience’s attention. Yes, Telstra will offer it. But it will never be able to compete with the productions created by the audience.

Because of this tension between the desires of the carrier and the interests of the audience, the carrier will seek to manipulate the capabilities of the broadband offering, to weight it in favor of a highly regularized IPTV offering. In the United States this has become known as the “net neutrality” argument, and centers on the question of whether a carrier has the right to shape traffic within its own IP network to advantage its own traffic over that of others. In Australia, the argument has focused on tariff rates: Telstra believes that if it builds the network, it should be able to set the tariff. The ACCC argues otherwise. This has been characterized as the central stumbling block which has prevented the deployment of a high-speed broadband network across the nation, and, in some sense, that is entirely true – Telstra has chosen not to move forward until it feels assured that both economic and regulatory conditions prove favorable. But this does not mean that consumer demand for a high-speed network was simply put on pause during those years. More significantly, the world beyond Telstra has not stopped advancing. While it now costs roughly USD $750 per household to provide a high-speed fiber-optic connection to the carrier network, other technologies are coming on-line, right now, which promise to reduce those costs by an order of magnitude – and which, furthermore, don’t require any infrastructure build-out on the part of the carrier. This disruptive innovation could change the game completely.

III: Check, Mate

All parties to the high-speed broadband dispute – government, Telstra, the Group of Nine, and the public – share the belief that this network must be built by a large organization, able to command the billions of dollars in capital required to dig up the streets, lay the fiber, and run the enormous data centers. This model of a network is a reflection, in copper, plastic and silicon, of the hierarchical forms of organization which characterize large institutions – such as governments and carriers. However, if we have learned anything about the emergent qualities of networks, it is that they quickly replace hierarchies with “netocracies”: horizontal meritocracies, which use the connective power of the network to out-compete slower, more rigid hierarchies. It is odd that, while the network has transformed nearly everything it has touched, the purveyors of those networks – the carriers – somehow seem immune from those transformative qualities. Telecommunications firms are – and have ever been – the very definition of hierarchical organizations. During the era of plain-old telephone service, the organizational form of the carrier was isomorphic to the form of the network. However, over the last decade, as the internal network has transitioned from circuit-switched to packet-switched, the institution lost synchronization with the form of the network it provided to consumers. As each day passes, carriers move even further out of sync: this helps to explain the current disconnect between Telstra and Australians.

We are about to see an adjustment. First, the data on the network was broken into packets; now, the hardware of the network has followed. Telephone networks were centralized because they required explicit wiring from point-to-point; cellular networks are decentralized, but use licensed spectrum – which requires enormous capital resources. Both of these conditions created significant barriers to entry. But there is no need to use wires, nor is there any need to use licensed spectrum. The 2.4 GHz radio band is freely available for anyone to use, so long as that use stays below certain power values. We now see a plethora of devices using that spectrum: cordless handsets, Bluetooth devices, and the all-but-ubiquitous 802.11 “WiFi” data networks. The chaos which broadcasters and governments had always claimed would be the by-product of unlicensed spectrum has, instead, become a wonderfully rich marketplace of products and services. The first generation of these products made connection to the centralized network even easier: cordless handsets liberated the telephone from the twisted-pair connection to the central office, while WiFi freed computers from heavy and clumsy RJ-45 jacks and CAT-5 cabling. While these devices had some intelligence, that intelligence centered on making and maintaining a connection to the centralized network.

Recently, advances in software have produced a new class of devices which create their own networks. Devices connected to these ad-hoc “mesh” networks act as peers in a swarm (similar to the participants in peer-to-peer filesharing), rather than clients within a hierarchical distribution system. These network peers share information about their evolving topology, forming a highly-resilient fabric of connections. Devices maintain multiple connections to multiple nodes throughout the network, and a packet travels through the mesh along a non-deterministic path. While this was always the promise of TCP/IP networks, static routes through the network cloud are now the rule, because they provide greater efficiency, make it easier to maintain the routers and diagnose network problems, and keep maintenance costs down. But mesh networks are decentralized; there is no controlling authority, no central router providing an interconnection with a peer network. And – most significantly – mesh networks are now incredibly inexpensive to implement.
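A toy sketch can make the routing idea concrete (a deliberately simplified illustration; real mesh protocols, including Meraki’s, handle radio conditions, link quality and shifting topology far more subtly). Each node knows only its neighbors, a packet finds whatever path currently exists, and losing a node simply reroutes traffic around the damage:

```python
from collections import deque

def find_path(links, src, dst):
    """Breadth-first search: return one path from src to dst, or None."""
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for neighbor in links.get(path[-1], ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# A small mesh: each node keeps redundant links to several peers.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

print(find_path(mesh, "A", "E"))      # one route, e.g. ['A', 'B', 'D', 'E']

# Node B fails: the remaining peers route around the damage.
degraded = {n: peers - {"B"} for n, peers in mesh.items() if n != "B"}
print(find_path(degraded, "A", "E"))  # ['A', 'C', 'D', 'E']
```

This is Gilmore’s Law in miniature: the route is a property of the whole fabric, not of any single node, so no single failure can sever it.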

Earlier this year, the US-based firm Meraki launched their long-awaited Meraki Mini wireless mesh router. For about AUD $60, plus the cost of electricity, anyone can become a peer within a wireless mesh network providing speeds of up to 50 megabits per second. The device is deceptively simple; it’s just an 802.11 transceiver paired with a single-chip computer running Linux and Meraki’s mesh routing software – which was developed by Meraki’s founders while Ph.D. students at the Massachusetts Institute of Technology. The 802.11 radio within the Meraki Mini has been highly optimized for long-distance communication. Instead of the normal 50-meter radius associated with WiFi, the Meraki Mini provides coverage over at least 250 meters – and, depending upon topography, can reach 750 meters. Let me put that in context, by showing you the coverage I’ll get when I install a Meraki Mini on my sixth-floor balcony in Surry Hills:



From my flat, I will be able to reach all the way from Central Station to Riley Street, from Belvoir Street over to Albion Street. Thousands of people will be within range of my network access point. Of course, if all of them chose to use my single point of access, my Meraki Mini would be swamped with traffic. It simply wouldn’t be able to cope. But – given that the Meraki Mini is cheaper than most WiFi access points available at Harvey Norman – it’s likely that many people within that radius would install their own access points. These access points would detect each other’s presence, forming a self-organizing mesh network. If every WiFi access point visible from my flat (I can sense between 10 and 20 of them at any given time) were replaced with a Meraki Mini, or, perhaps more significantly, if these WiFi access points were given firmware upgrades which allowed them to interoperate with the mesh networks created by the Meraki Mini – my Surry Hills neighborhood would suddenly be blanketed in a highly resilient and wholly pervasive wireless high-speed network, at nearly no cost to the users of that network. In other words, this could all be done in software. The infrastructure is already deployed.
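The jump from a 50-meter to a 250-meter radius is larger than it first sounds, because coverage grows with the square of the radius. A quick back-of-the-envelope calculation, using the radii quoted above, makes the point:

```python
import math

def coverage_km2(radius_m):
    """Area of a circular coverage footprint, in square kilometers."""
    return math.pi * radius_m ** 2 / 1e6

wifi = coverage_km2(50)    # ordinary WiFi access point
mini = coverage_km2(250)   # Meraki Mini, conservative figure
best = coverage_km2(750)   # Meraki Mini, favorable topography

print(f"ordinary WiFi: {wifi:.4f} km^2")
print(f"Meraki Mini:   {mini:.4f} km^2 ({mini / wifi:.0f}x the area)")
print(f"best case:     {best:.4f} km^2 ({best / wifi:.0f}x the area)")
```

At the conservative figure, one Mini covers 25 times the footprint of an ordinary access point; in favorable topography, 225 times. That is why a handful of $60 boxes can blanket a neighborhood.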

As some of you have no doubt noted, this network is highly local; while there are high-speed connections within the wireless cloud, the mesh doesn’t necessarily have connections to the global Internet. To be sure, Meraki Minis can act as routers to the Internet, routing packets through their Ethernet interfaces to the broader Internet, and Meraki recommends that at least every tenth device in a mesh be so equipped. But an Internet uplink is not strictly necessary, and – for a mesh dedicated to a particular local task – unnecessary altogether. Let us say, for example, that I wanted to provide a low-cost IPTV service to the residents of Surry Hills. I could create a “head-end” in my own flat, and provide my “subscribers” with Meraki Minis and an inexpensive set-top-box to interface with their televisions. For a total install cost of perhaps $300, I could give everyone in Surry Hills a full IPTV service (though it’s unlikely I could provide HD-quality). No wiring required, no high-speed broadband buildout, no billions of dollars, no regulatory relaxation. I could just do it. And collect both subscriber fees and advertiser revenues. No Telstra. No Group of Nine. No blessing from Senator Coonan. No go-over by the ACCC. The technology is all in place, today.

Here’s a news report – almost a year old – which makes the point quite well:

I bring up this thought experiment to drive home my final point: Telstra isn’t needed. It might not even be wanted. We have so many other avenues open to us to create and deploy high-speed broadband services that it’s likely Telstra has just missed the boat. You’ve waited too long, dilly-dallying while the audience and the technology have made you obsolete. The audience doesn’t want the same few hundred channels they can get on FoxTel: they want the nearly endless stream of salience they can get from YouTube. The technology is no longer about centralized distribution networks: it favors light, flexible, inexpensive mesh networks. Both of these are long-term trends, and both will only grow more pronounced as the years pass. In the years it takes Telstra – or whoever gets the blessing of the regulators – to build out this high-speed broadband network, you will be fighting a rearguard action, as both the audience and the technology of the network race on past you. They have already passed you by, and it’s been my task this morning to point this out. You simply do not matter.

This doesn’t mean it’s game over. I don’t want you to report to Sol Trujillo that it’s time to have a quick fire-sale of Telstra’s assets. But it does mean you need to radically rethink your business – right now. In the age of pervasive peer-production, paired with the advent of cheap wireless mesh networks, your best option is to become a high-quality connection to the global Internet – in short, a commodity. All of this pervasive wireless networking will engender an incredible demand for bandwidth; the more people are connected together, the more they want to be connected together. That’s the one inarguable truth we can glean from the 160 years of electric communication. Telstra has the infrastructure to leverage itself into becoming the most reliable data carrier connecting Australians to the global Internet. It isn’t glamorous, but it is a business with high barriers to entry, one that promises a steadily growing (if unexciting) revenue stream. But, if you continue to base your plans around selling Australians services we don’t want, you are building your castles on the sand. And the tide is rising.

Rearranging the Deck Chairs

I. Everything Must Go!

It’s merger season in Australia. Everything must go! Just moments after the new media ownership rules received the Governor-General’s royal assent, James Packer sold off his family’s crown jewel, the NINE NETWORK – consistently Australia’s highest-rated television broadcaster since its inception, fifty years ago – along with a basket of other media properties. This sale effectively doubled his already sizeable fortune (now hovering at close to 8 billion Australian dollars) and gave him plenty of cash to pursue the 21st-century’s real cash cow: gambling. In an era when all media is more-or-less instantaneously accessible, anywhere, from anyone, the value of a media distribution empire – built on the toppling pillars of government regulation of the airwaves and a cheap stream of high-quality American television programming – is rapidly approaching zero. Yes, audiences might still tune in to watch the footy – live broadcasting being uniquely exempt from the pressures of the economics of the network – but even there the number of distribution choices is growing, with cable, satellite and IPTV all demanding a slice of the audience. Television isn’t dying, but it no longer guarantees returns. Time for Packer to turn his attention to the emerging commodity of the third millennium: experience. You can’t download experience: you can only live through it. For those who find the dopamine hit of a well-placed wager the experiential sine qua non, there Packer will be, Asia’s croupier, ready to collect his winnings. Who can blame him? He (and, undoubtedly, his well-paid advisors) has read the trend lines correctly: the mainstream media is dying, slowly starved of attention.

The transformation which led to the sale of NINE NETWORK is epochal, yet almost entirely subterranean. It isn’t as though everyone suddenly switched off the telly in favor of YouTube. It looks more like death from a thousand cuts: DVDs, video games, iPods, and YouTube have all steered eyeballs away from the broadcast spectrum toward something both entirely digital and (for that reason) ultimately pervasive. Chip away at a monolith long enough and you’re left with a pile of rubble and dust.

On a somewhat more modest scale, other media moguls in Australia have begun to hedge their bets. Kerry Stokes, the owner of Channel 7, made a strategic investment in Western Australia Publishing. NEWS Corporation, the original Australian media empire, purchased a minority stake in Fairfax, the nation’s largest newspaper publisher (and is eyeing a takeover of Canadian-owned Channel TEN). Watching these broadcasters buy into newspapers, four decades after broadcast news effectively delivered death-blows to newspaper publishing, highlights their sense of desperation: they’re hoping that something, somewhere in the mainstream media will remain profitable. Yet there are substantial reasons to expect that these long-shot bets will fail to pay out.

II. The Vanilla Republic

It’s election season in America. Everyone must go! The mood of the electorate in the darkening days of 2006 could best be described as surly. An undercurrent of rage and exasperation afflicts the body politic. This may result in a left-wing shift in the American political landscape, but we’re still two weeks away from knowing. Whatever the outcome, this electoral cycle signifies another epochal change: the mainstream media have lost their lead as the reporters of political news. The public at large views the mainstream media skeptically – these were, after all, the same organizations which whipped the republic into a frenzied war-fever – and, with the regret typical of a very disgruntled buyer, Americans are refusing to return to the dealership for this year’s model. In previous years, this would have left voters in the dark: it was either the mainstream media or ignorance. But, in the two years since the Presidential election, the “netroots” movement has flowered into a vital and flexible apparatus for news reportage, commentary and strategic thinking. Although the netroots movement is most often associated with left-wing politics, both sides of the political spectrum have learned to harness blogs, wikis, feeds and hyperdistribution services such as YouTube for their own political ends. There is nothing quintessentially new about this; modern political parties, emerging in Restoration-era London, used printing presses, broadsheets and daily newspapers – freely deposited in the city’s thousands of coffeehouses – as the blogs of their era. Political news moved very quickly in 17th-century England, to the endless consternation of King Charles II and his censors.

When broadcast media monopolized all forms of reportage – including political reporting – the mass mind of the 20th century slotted into a middle-of-the-road political persuasion. Neither too liberal, nor too conservative, the mainstream media fostered a “Vanilla Republic,” where centrist values came to dominate political discourse. Of course, the definition of “centrist” values is itself highly contentious: who defines the center? The right wing decries the excesses of “liberal bias” in the media, while the left wing points to the “agenda of the owners,” the multi-billionaire stakeholders in these broadcast empires. This struggle for control over the definition of the center characterized political debate at the dawn of the 21st century – a debate which has now been eclipsed, or, more precisely, overrun by events.

In April 2004, Markos Moulitsas Zúniga, a US Army veteran who had been raised in civil-war-torn El Salvador, founded dKosopedia, a wiki designed to be a clearing-house for all sorts of information relating to left-wing netroots activities. (The name is a nod to Wikipedia.) While the first-order effect of the network is to gather individuals together into a community, once the community has formed, it begins to explore the bounds of its collective intelligence. Political junkies are the kind of passionate amateurs who defy the neat equation of amateur with amateurish. While they are not professional – meaning that they are not in the employ of politicians or political parties – political junkies are intensely well-informed, regarding this as both a civic virtue and a moral imperative. Political junkies work not for power, but for the greater good. (That opposing parties in political debate demonize their opponents as evil is only to be expected given this frame of mind.) The greater good has two dimensions: to those outside the community, it is represented as us vs. them; internally, it is articulated through the community’s social network: those with particular areas of expertise are recognized for their contributions, and their standing in the community rises appropriately.

The same process animates dKosopedia’s parent site, Daily Kos (dKos), a political blog where any member can freely write entries – known as “diaries” – on any subject of interest: political, cultural, or (more rarely) anything else. The very best of these contributors became the “front page” authors of Daily Kos, their entries presented to the entire community; but part of the responsibility of a front-page contributor is to constantly scan the ever-growing set of diaries, looking for the best posts among them to “bump” to front-page status. (This article will be cross-posted to my dKos diary, and we’ll see what happens to it.) Any dKos member can comment on any post, so any community member – whether a regular diarist or regular reader – can add their input to the conversation. The strongly self-reinforcing behavior of participation encourages “Kossacks” (as they style themselves) to share, pool, and disseminate the wealth of information gathered by over two million readers. Daily Kos has grown nearly exponentially since its founding, and looks to reach its highest traffic levels ever as the mid-term elections approach.

III. My Left Eyeball

Salience is the singular quality of information: how much does this matter to me? In a world of restricted media choices, salience is a best-fit affair; something simply needs to be relevant enough to garner attention. In the era of hyperdistribution, salience is a laser-like quality; when there are a million sites to read, a million videos to watch, a million songs to listen to, individuals tailor their choices according to the specifics of their passions. Just a few years ago – as the number of media choices began to grow explosively – this took considerable effort. Today, with the rise of “viral” distribution techniques, it’s a much more straightforward affair. Although most of us still rely on ad-hoc methods – polling our friends and colleagues in search of the salient – it’s become so easy to find, filter, and forward media through our social networks that we have each become our own broadcasters, transmitting our own passions through the network. Where systems have been organized around this principle – for instance, YouTube, or Daily Kos – this information flow is greatly accelerated, and the consequential outcomes amplified. A Sick Puppies video posted to YouTube gets four million views in a month, and ends up on NINE NETWORK’s 60 Minutes broadcast. A Democratic senatorial primary in Connecticut becomes the focus of national interest – a referendum on the Iraq war – because millions of Kossacks focus attention on the contest.

Attention engenders salience, just as salience engenders attention. Salience satisfied reinforces relationship; to have received something of interest makes it more likely that I will receive something of interest in the future. This is the psychological engine which powers YouTube and Daily Kos, and, as this relationship deepens, it tends to have a zero-sum effect on its participants’ attention. Minutes spent watching YouTube videos are advertising dollars lost to NINE NETWORK. Hours spent reading Daily Kos are eyeballs and click-throughs lost to The New York Times. Furthermore, salience drives out the non-salient. It isn’t simply that a Kossack will read less of the Times; eventually they’ll read it rarely, if at all. Salience has been satisfied, so the search is over.

While this process seems inexorable, given the trends in media, only very recently has it become a ground-truth reality. Just this week I quipped to one of my friends – equally a devotee of Daily Kos – that I wanted “an IV drip of dKos into my left eyeball.” I keep the RSS feed of Daily Kos open all the time, waiting for the steady drip of new posts. I am, to some degree, addicted. But, while I always hunger for more, I am also satisfied. When I articulated the passion I now had for Daily Kos, I also realized that I hadn’t been checking the Times as frequently as before – perhaps once a day – and that I’d completely abandoned CNN. Neither website possessed the salience needed to hold my attention.

I am certainly more technically adept than the average user of the network; my media usage patterns tend to lead broader trends in the culture. Yet there is strong evidence that I am hardly alone in this new era of salience. How do I know? I recently received a link – through two blogs, Daily Kos and The Left Coaster – to a political campaign advertisement for Missouri senatorial candidate Claire McCaskill. The ad, featuring Michael J. Fox, diagnosed with an early-onset form of Parkinson’s Disease, clearly shows him suffering the worst effects of the disorder. Within a few hours of the ad going up on the McCaskill website, it had already been viewed hundreds of thousands of times – probably millions. People are emailing the link to the ad (conveniently provided below the video window, to spur on viral distribution) all around the country, and likely throughout the world. “All politics is local,” Fox says. “But it’s not always the case.” This, in a nutshell, describes both the political and the media landscapes of the 21st century. Nothing can be kept in a box. Everything escapes.

Twenty-five years ago, in The Third Wave, Alvin Toffler predicted the “demassification of media.” Looking at the ever-multiplying number of magazines and television channels, he foresaw a time when the mass market would fragment utterly, into an atomic polity entirely composed of individuals. Writing before the Web (and before the era of the personal computer) he offered no technological explanation for how demassification would come to pass. Yet the trend lines seemed obvious.

The network has grown to cover every corner of the planet in the quarter-century since the publication of The Third Wave – over two billion mobile phones, and nearly a billion networked computers. A third of the world can be reached, and – more significantly – can reach out. Photographs of bombings in the London Underground, captured on mobile phone cameras, reach Flickr before they’re broadcast on the BBC. Islamic insurgents in Iraq videotape, encode and upload their IED attacks to filesharing networks. China fights a losing battle to restrict the free flow of information – while its citizens buy more mobile phones every year than the total number ever purchased in the United States. Give individuals a network, and – sooner rather than later – they’ll become broadcasters.

One final and crucial technological element completes the transition into the era of demassification – the release of Microsoft’s Internet Explorer version 7.0. Long delayed, this most widely used of all web browsers finally includes support for RSS – the technology behind “feeds.” Suddenly, half a billion PC users can access the enormous wealth of individually-produced and individually-tailored news resources which have grown up over the last five years. But they can also create their own feeds, either by aggregating resources they’ve found elsewhere, or by creating new ones. The revolution that began with Gutenberg is now nearly complete: where the Web turned the network into a printing press, RSS gives us the ability to hyperdistribute publications, so that anyone, anywhere, can reach everyone, everywhere.
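For readers curious about the plumbing, there is nothing exotic about a feed: an RSS document is a small piece of XML listing a channel and its items. The sketch below (in Python, using an invented example feed – the titles and URLs are illustrative, not real) shows the essential work every feed reader performs: parse the XML and pull out each item’s title and link.

```python
# A minimal sketch of what an RSS reader does with a feed it has fetched.
# SAMPLE_FEED is a made-up RSS 2.0 document, standing in for a real one.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Diary</title>
    <link>http://example.org/</link>
    <item>
      <title>First post</title>
      <link>http://example.org/1</link>
    </item>
    <item>
      <title>Second post</title>
      <link>http://example.org/2</link>
    </item>
  </channel>
</rss>"""

def feed_items(xml_text):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in feed_items(SAMPLE_FEED):
    print(title, link)
```

A working aggregator adds only housekeeping on top of this: polling feeds periodically, remembering which items have already been seen, and presenting the rest for reading.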

Now all is dissolution. The mainstream media will remain potent for some time, centers for the creation of content, but they must now face the rise of the amateurs: a battle of hundreds versus billions. To compete, the media must atomize, delivering discrete chunks of content through every available feed. They will be forced to move from distribution to seduction: distribution has been democratized, so only the seduction of salience will carry their messages around the network. But the amateurs are already masters of this game, having grown up in an environment where salience forms the only selection pressure. This is the time of the amateur, and this is their chosen battlefield. The outcome is inevitable. Deck chairs, meet Titanic.

Herding Cats

“That government is best which governs least.” – attributed to Thomas Paine
I.

Nothing is perfect. Everything, this side of heaven, contains a flaw. The master rug makers of Persia go so far as to add a mistaken stitch to their carpets; perfection would be an insult to the greatness of God. For nearly everything else, and for nearly everyone else, we don’t have to worry about adding errors: we work from incomplete knowledge, we work from ignorance, and we work from prejudice. As the saying attributed to Mark Twain goes, “It’s not what you don’t know that’ll hurt you, but what you know that ain’t so.” We believe we know so much; in truth we know nearly nothing at all. We have trouble discerning our own motivations – yet we constantly judge the motivations of others. Cognitive scientists have repeatedly demonstrated how we backfill our own memories to create a comfortable and pleasing narrative of our lives; this keeps us from drowning in despair, but it also allows us to be monsters who have no trouble sleeping soundly at night.

We constantly and impudently impugn the motives of others, and we carry that attitude into the design of systems which support community. We protect children from pedophiles; we protect ourselves from unsolicited emails; we protect communities from the excesses of emotion or behavior which – we believe – would rip them apart. Each of these filtering processes – many of them automated – serves to create a “safe space” for conversation and community. Yet community is at least as much about difference as it is about similarity. If every member of a community held to a unity of thought, no conversation would be possible; information – “the difference which makes a difference” – can only emerge from dissent. Any system which diminishes difference therefore necessarily diminishes the vitality of a community. Every act of communication within a community is both a promise of friendship and a cry to civil war. Every community sails between the Scylla and Charybdis of undifferentiated assent and complete fracture.

When people were bound by proximity – in their villages and towns – the pressure for the community to remain cohesive prevented most of the egregious separations from occurring, though periodically – and particularly since the Reformation – communities have split apart, divided on religious or ideological lines. In the post-Enlightenment era, with the opening of the Americas, divided communities could simply move away and establish their own particular Edens, though these too could fracture; schism follows schism in an echo of the Biblical story of the Confusion of Tongues. Rural communities could remain singular and united (at least until they burst apart under the build-up of pressure), but urban polities had to move in another direction: tolerance. Amsterdam and London flourished in the eighteenth century because of the dissenting voices they tolerated in their streets. It was either that, or, as both had learned – to their horror – endless civil wars. This essential idea of the Enlightenment – that men could keep their own counsel, so long as they respected the beliefs of others – fostered democracy, science and capitalism, and elevated millions from misery and poverty. It is said that democratic nations never wage war against one another; while this is not entirely true, tolerance does act as a firewall against the most immediate passions of states. The alternative – repeated countless times throughout the 20th century – is a mountain of skulls.

Where people are connected electronically, freed both from the strictures of proximity and from the organic and cultural bounds of propriety that accompany face-to-face interactions (it is much easier to be rude to someone you’ve never met in the flesh), the natural tendency to schism is amplified. The checks against bad behavior lose their consequential quality. One can be rude, abrasive, even evil, because the mountain of skulls which piles up as the inevitable result of such psychopathology appears to lack the immediacy of a real, bleeding body. It has been argued that we need “to be excellent to each other,” or that we need to grow thicker skins. Both suggestions have some merit, but the truth lies somewhere in between.

II.

While USENET, the thirty-year-old, Internet-wide bulletin board system, remains the archetype for online community – the place where the terms “flame”, “flame war” and “troll” originated in their current, electronic usages – it has long since been obsolesced by a million dedicated websites. We can learn a lot about the pathology of online communities by studying USENET, but the most important lesson we can draw involves the original online schism. In 1987, John Gilmore – one of the founding engineers of SUN Microsystems – wanted to start a USENET newsgroup to discuss topics related to illegal psychoactive drugs. USENET users must approve all requests for new newsgroups, and this highly polarizing topic, when put to a vote, was repeatedly rejected. Gilmore spent a few hours modifying the USENET code so that it could handle a new top-level hierarchy, “alt.*” This was designed to be the alternative to USENET, where anyone could start a newsgroup for any reason, at any time. While many USENET sites tried to ban the alt. hierarchy from their servers, within a year’s time alt. became ubiquitously available. Everyone on USENET had some passion which couldn’t be satisfied within its strict guidelines. To this day, the tightly moderated USENET and the free-wheeling, often obscene, and frequently illegal alt. hierarchy coexist side-by-side. Each has reinforced the existence of the other.

Qualities of both USENET and the alt. hierarchy have been embodied in the peer-produced encyclopedia-about-everything, Wikipedia. Like the alt. hierarchy, anyone can create an entry on any subject, and anyone can edit any entry on any subject (with a few exceptions, discussed below). However, like USENET, there are Wikipedia moderators, who can choose to delete entries, or roll back the edits on an entry, and who act as “governors” – in the original sense of the Greek kybernetes, “steersman,” the root of “cybernetics” – directing activity rather than ruling over it. By any objective standard the system has worked remarkably well; Wikipedia now has nearly 1.5 million English-language articles, and continues growing at a nearly exponential rate. The strength of the moderation in Wikipedia is that it is nearly invisible; although articles do get deleted because they do not meet Wikipedia’s evolving standards (e.g., the first version of a biographical page about myself), it remains a triumph of tolerance, carefully maintaining a laissez-faire approach to the creation of content, applying a moderating influence only when the broad guidelines of Wikipedia (summed up in the maxim “don’t be a dick”) have been obviously violated. The community feels that it has complete control over the creation of content within Wikipedia, and this sense of investment – that Wikipedia truly is the product of the community’s own work – has made Wikipedia’s contributors its most earnest evangelists.

There is a price to be paid for this open-door policy: noise. Because Wikipedia is open to all, it can be vandalized, or filled with spurious information. While the moderators do their best to correct instances of vandalism, Wikipedia relies on the community to do this nitpicking work. (I have deleted vandalism on Wikipedia pages several times.) For the most part, it works well, though there are specific instances – such as on 31 July 2006, when Stephen Colbert urged viewers of his television program to modify Wikipedia entries to promote his own “political” views – when it falls down utterly. Wikipedia can withstand the random assaults of individuals, but, in its present form, it cannot hope to stand against thousands of individuals intent on changing its content in specific areas. Thus, in certain circumstances, Wikipedia moderators will “lock” certain entries, allowing them to be modified only by carefully designated individuals. Although Colbert meant his assault as a stunt, with no malicious intent, he pointed to the serious flaw of all open-door systems – they rely on the good faith of the vast majority of their users. If any polity decides to take action against Wikipedia, the system will suffer damage.

With a growing consciousness of the danger of open-door systems – and a sense that perhaps more moderation is better – Wikipedia cofounder Larry Sanger has launched his own competitor to Wikipedia, Citizendium. Starting with a “fork” of Wikipedia (that is, a selection of the entries thought “suitable” for inclusion in the new work), Citizendium will restrict posting in its entries to trusted experts in their fields. The goal is to create a higher-quality version of Wikipedia, with greater involvement from professional researchers and academics.

While a certain argument can be made that Wikipedia entries contain too much noise – many are poorly written, have no references, or even project a certain point-of-view – it remains to be seen if any differentiation between “professional” and “amateur” communities of knowledge production can be maintained in an era of hyperdistribution. If a film producer is now threatened by the rise of the amateur – that is, an enthusiast working outside the established systems of media distribution – won’t an academic (and by extension, any professional) also be under threat? The academy has always existed for two reasons: to expand knowledge, and to restrict it. Academic communities function under the same rules as all communities, the balancing act between uniformity and schism. The “standard bearers” in any community reify the orthodox tenets of any field, blocking the research of any outsiders whose work might threaten the functioning assumptions of the community. Yet, since Thomas Kuhn published The Structure of Scientific Revolutions, we know that science progresses (in Max Planck’s apt phrasing) “funeral by funeral.” Experts tend to block progress in a field; by extension, any encyclopedia which uses these same experts as the gatekeepers to knowledge acquisition will effectively hamstring itself from first principles. In the age of hyperintelligence, expertise has become a broadly accessible quality; it is not located in any particular community, but rather in individuals who may not be associated with any official institution. Noise is not the enemy; it is a sign of vitality, and something that we must come to accept as part of the price we pay for our newly-expanded capabilities. As the old maxim (often credited to Voltaire) has it, “The perfect is the enemy of the good.” The question is not whether Wikipedia is perfect, but rather, is it good enough?
If it is – and that much should be clear by now – then Citizendium, as an attempt to make perfect what is already good enough, must be doomed to failure, out of tune with the times, fighting the trend toward the era of the amateur.

As Citizendium flowers and fails over the next year, it will be interesting to note how its community practices change in response to an ever-more-dire situation. The pressures of the community will force Citizendium to become more Wikipedia-like in its submissions and review policies. At the same time, additional instances of organized vandalism (we’ve only just started to see these) will drive Wikipedia toward a more restrictive submissions and editing policy. Citizendium overshot the mark from the starting line, and will need to crawl back toward the open-door policy, yet, as it does, it risks alienating the same experts it’s designed to defend. Wikipedia, starting from a position of radical openness, has only restricted access in response to some real threat to its community. Citizendium is proactive and presumes too much; Wikipedia is reactive (and for this reason will occasionally suffer malicious damage) but only modifies its access policies when a clear threat to the stability of the community has been demonstrated. Wikipedia is an anarchic horde, moving by consensus, unlike Citizendium, which is a recapitulation of the top-down hierarchy of the academy. While some will no doubt treasure the heavy moderation of Citizendium, the vast majority will prefer the noise and vitality of Wikipedia. A heavy hand versus an invisible one; this is the central paradox of community.

III.

A well-run online community walks a narrow line between anarchy and authoritarianism. To encourage discussion and debate, a community must be willing to sit on a hand grenade that always threatens to explode, but never quite manages to go off. In general, it’s quite enough to put people into the same conversational space and watch the sparks fly; stirring the pot is rarely necessary. Conversely, when the pot begins to boil over, someone has to be on hand to turn the heat down. Communities frequently manage this process on their own, with cool minds ready to reframe conversation in less inflammatory terms. This wisdom of communities is not innate; it is knowledge embodied within a community’s practices, something that each community must learn for itself. USENET groups, over the course of thirty years, have learned how to avoid the most obvious hot-button topics, and their regular contributors have learned to filter out the outrageous flame-baiting of trolls. But none of this community intelligence resides in a newly-founded community, so, in an absolute sense, the long-term health of any community depends strongly on the character and capabilities of its earliest members.

The founding members of a new community should not be arbitrarily selected; that would be gambling on the good behavior of individuals who, insofar as the community is concerned, have no track record. Instead, these founders need to be carefully vetted across two axes of significance: their ability to be provocative, and their capability to act like adults. These qualities usually don’t come as a neat package; any individual who has a surfeit of one is more than likely to be lacking in the other. However, once such “balanced” individuals have been identified and recruited, the community can begin its work.

After a time, the best of these individuals – whose qualities will become clear to the rest of the community – should be promoted to moderator status, assuming the Solomonic mantle of protectors and guardians of the community. This role is vital; a community should always know that it is functioning in a moderated environment, but this moderation should be so light-handed as to be nearly invisible. The presumption of observation encourages individuals to behave appropriately; the rare occasions when a moderator is forced to act – as a benevolent and trustworthy force for good – should encourage imitation.

Hand-in-hand with the sense of confidence which comes from careful and gentle moderation, a community must feel empowered to create something that represents both its individual and collective abilities. The idea of “ownership,” when multiplied by a community-recognized sense of expertise, produces a strongly reinforcing behavior. Individuals who are able to share their expertise with a community – and help the community build its own expertise – will develop a very strong sense of loyalty to that community. Expertise can be demonstrated in the context of a bulletin board system, but such systems do not easily preserve the total history of an expert’s interactions within the community. A posting made today is lost in six months’ time; a wiki is forever. Thus, in addition to conversation – and growing naturally from it – the community should have the tools at its disposal to translate its conversation into something more permanent. Community members will quickly recognize those within their ranks who have the authority of expertise on any given subject, and these experts should be gently guided into making a record of that expertise. As that record builds, it develops a value of its own beyond that of a repository of expertise: it becomes the living embodiment of an individual’s dedication to the community. Over time, community members will come to see themselves as the true “content” of the community, both through their participation in the endless conversation of the community, and as co-creators of the community’s collective intelligence.

This model has worked successfully for over a decade in some of the more notable electronic communities – particularly in the open-source software movement. The various communities around GNU/Linux, PHP and Python have all demonstrated that any community with room enough to pool the expertise of large numbers of dedicated individuals will build something of lasting value, and bring broad renown to its key contributors, moderators, and enthusiasts.

However, even in the most effective communities, schism remains the inevitable human tendency, and some conflicts – drawn from deep-seated philosophical or temperamental differences – cannot be resolved. Schism should not be embraced arbitrarily, but neither should it be avoided at all costs. Instead – as in the case of the alt. hierarchy – room should be made to accommodate the natural tendency to differentiate. Wikipedia will eventually fork into a hundred major variants, of which Citizendium is but the first. The LINUX world has been divided into different distributions since its earliest years. Schism is a sign of life, indicating that there is something important enough to fight over. Schisms either resolve in an ecumenical unity, or persist and continue to divide; neither outcome is inherently preferable.

Every living thing struggles between static order and chaotic dissolution; it isn’t perfect, but then, nothing ever is. Even as we feel ourselves drawn to one extreme or another, wisdom wrought from experience (often painfully gained) checks our progress, and guides us forward, delicately, into something that is, in the best of worlds, utterly unexpected. The potential for novelty in any community is enormous; releasing that potential requires flexibility, balance, and presence. There are no promises of success. Like a newborn child, a new community is all potential – unbounded, unbridled, standing at the cusp of a unique wonder. We can set its feet on the path of wisdom; what comes after is unknowable, and, for this reason, impossibly potent.