Flexible Futures

I: A Brief Tour of the Future

During my first visit to Sydney, in 1997, I made arrangements to catch up with some friends living in Drummoyne.  I was staying at the Novotel Darling Harbour, so we agreed to meet in front of the IMAX theatre before heading off to drinks and dinner.  I arrived at the appointed time, as did a few of my friends.  We waited a bit more, but saw no sign of the missing members of our party.  What to do?  Should we wait there – for goodness knows how long – or simply go on without them?

As I debated our choices – neither particularly palatable – one of my friends took a mobile out of his pocket, dialed our missing friends, and told them to meet us at an Oxford Street pub.  Crisis resolved.

Nothing about this incident seems at all unusual today – except for my reaction to the dilemma of the missing friends.  When someone’s not where they should be, where they said they would be, we simply ring them.  It’s automatic.

In Los Angeles, where I lived at the time, mobile ownership rates had barely cracked twenty percent.  America was slow on the uptake to mobiles; by the time of my visit, Australia had already passed fifty percent.  When half of the population can be reached instantaneously and continuously, people begin to behave differently.  Our social patterns change.  My Sydneysider friends had crossed a conceptual divide into hyperconnectivity, while I was mired in an old, discrete and disconnected conception of human relationships.

We rarely recall how different things were before everyone carried a mobile.  The mobile has become such an essential part of our kit that on those rare occasions when we leave it at home or lose track of it, we feel a constant tug, like the phantom pain of a missing limb.  Although we are loath to admit it, we need our mobiles to bring order to our lives.

We can take comfort in the fact that all of us feel this way.  Mobile subscription rates in Australia are greater than one hundred and twenty percent – more than one mobile per person, one of the highest rates in the world.  We have voted with our feet, with our wallets and with our attention.  The default social posture in Australia – and New Zealand and the UK and the USA – is face down, absorbed in the mobile.  We stare at it, toy with it, play on it, but more than anything else, we reach through it to others, whether via voice calls, text messages, Facebook, Twitter, or any of a constantly-increasing number of ways.

The mobile takes the vast, anonymous and unknowable world, and makes it pocket-sized, friendly and personal.  If we ever run into a spot of bother, we can bring resources to hand – family, friends, colleagues, even professional fixers like lawyers and doctors – with the press of ten digits.  We give mobiles to our children and parents so they can call us – and so we can track them.  The mobile is the always-on lifeline, a different kind of 000, for a different class of needs.

Because everyone is connected, we can connect to anyone we wish.  These connections needn’t follow the well-trodden paths of family, friends, neighbors and colleagues.  We can ignore protocol and reach directly into an organization, or between silos, or from bottom to top, without obeying any of the niceties described on org charts or contact sheets.  People might choose to connect in an orderly fashion – when it suits them.  Generally, they will connect to their greatest advantage, whether or not that suits your purposes, protocols, or needs.  When people need a lifeline, they will turn over heaven and earth to find it, and once they’ve found it, they will share it with others.

Connecting is an end in itself – smoothing our social interactions, clearing the barriers to commerce and community – but connection also provides a platform for new kinds of activities.  Connectivity is like mains power: once it is everywhere, it becomes possible to imagine a world where people own refrigerators and televisions.

When people connect, their first, immediate and natural response is to share.  People share what interests them with people they believe share those interests.  In the early days that sharing can feel very unfocused.  We all know relatives or friends who have gone online, gotten overexcited, and suddenly started forwarding us every bad joke, cute kitten or chain letter that comes their way.  (Perhaps we did these things too.)  Someone eventually tells the overeager sharer to think before they share.  They learn the etiquette of sharing.  Life gets easier (and more interesting) for everyone.

As we learn who wants to know what, we integrate ourselves into a very powerful network for the dissemination of knowledge.  If it’s important to us, the things we need to know will filter their way through our connections, shared from person to person, delivered via multiple connections.  In the 21st century, news comes and finds us.  Our process of learning about the world has become multifocal; some of it comes from what we see and those we meet, some from what we read or watch, and the rest from those we connect with.

The connected world, with its dense networks, has become an incredibly efficient platform for the distribution of any bit of knowledge – honest truth, rumor, and outright lies.  Anything, however trivial it might seem to others, finds its way to us if we consider it important.  Hyperconnectivity provides a platform for a breadth of ‘situational awareness’ beyond even the wildest imaginings of MI6 or ASIO.

In a practical sense, sharing means every employee, no matter their position on the org chart, can now possess a detailed awareness of your organization.  When an employee trains their attention on something important to them, they see how to connect with others who share that interest.

We begin by sharing everything, but as that becomes noisy (and boring), we focus on sharing those things which interest us most.  We forge bonds with others interested in the same things.  These networks of sharing provide an opportunity for everyone to involve themselves fully within any domain deemed important – or at least interesting.  Each sharing network becomes a classroom of sorts, where anyone expert in any area, however peculiar, becomes recognized, promoted, and well-connected.  If you know something that others want to know, they will find you.

In addition to everything else, we are each a unique set of knowledge, experience and capabilities which, in the right situation, proves uniquely valuable.  By sharing what we know, we advertise our expertise.  It follows us wherever we go.  Because this expertise is mostly hidden from view, it is impossible for us to look at one another and see the depth that each of us carries within us.

Every time we share, we reveal the secret expert within ourselves.  Because we constantly share ourselves with our friends, family and co-workers, they come to rely on what we know.  But what of our colleagues?  We work in organizations with little sense of the expertise that surrounds us.

Before hyperconnectivity, it was difficult to share expertise.  You could reach a few people – those closest to you – but unless your skills were particularly renowned or valuable, that’s where it stopped.  For good or ill, your experience and knowledge now extend far beyond the circle of those familiar to you, throughout the entire organization.  Everyone in it can now have some awareness of the talents that pulse through your organization – with the right tools in place.


II: Mobility & Flexibility

Everyone now goes everywhere with a mobile in hand.  This means everyone is continually connected to the organization.  That has given us an office that has no walls, one which has expanded to fill every moment of our lives.  We need to manage that relationship and the tension between connectivity and capability.  People cannot always be available, people cannot always be ‘on’.  Instead, we must be able to establish boundaries, rules, conventions and practices which allow us to separate work from the rest of our lives, because we can no longer do so based on time or location.

We also need some way to be able to track the times and places we do work.  We’re long past the days of punching a timeclock.  In a sense, the mobile has become the work-whistle, timeclock and overseer, because it is the monitor.  This creates another tension, because people will not be comfortable if they believe their own devices are spying on them. Can organizations walk a middle path, which allows the mobile to enable more employee choice and greater freedom, without eternally tethering the employee to the organization?

This is a policy matter, not a technology matter, but technology is forcing the hand of policy.  How can technology come to the aid of that policy?  How can I know when it might be appropriate to contact an employee within my organization, and when it would be right out?  This requires more than a quick glance at an employee schedule.  The employee, mobile in hand, has the capacity to ‘check in’ and ‘check out’ of availability, and will do so if it’s relatively effortless.  Employees can manage their own time more effectively than any manager, given the opportunity.
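As a sketch of how such self-managed availability might look in software (the class and method names here are hypothetical illustrations, not any existing product's API):

```python
from datetime import datetime

class Availability:
    """Hypothetical sketch: an employee toggles their own availability,
    and the organization's tools consult it before making contact."""

    def __init__(self, name):
        self.name = name
        self.available = False   # default: off the clock
        self.log = []            # (timestamp, status) history for later reporting

    def check_in(self):
        self.available = True
        self.log.append((datetime.now(), "in"))

    def check_out(self):
        self.available = False
        self.log.append((datetime.now(), "out"))

def can_contact(employee):
    """The employee's own setting, not a static schedule, decides."""
    return employee.available

# Usage: the employee flips the switch, effortlessly, from the mobile.
alice = Availability("Alice")
alice.check_in()
alice.check_out()
```

The essential design choice is that the employee is the source of truth, while the log preserves the record the organization needs.
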

It’s interesting to note that this kind of employee-driven ‘flextime’ has been approaching for nearly thirty years, but hasn’t yet arrived.  Flextime has proven curiously inflexible.  That’s a result of the restricted communication between employee and organization, mostly happening within the office and within office hours.  Now that communication is continuous and pervasive, now that the office is everywhere, flextime policies must be adjusted to accommodate the continuously-evolving needs of the organization’s employees.  The technology can support this – and we’re certainly flexible enough.  So these practices must come into line with our capabilities.

As practice catches up with technology, we need to provide employees with access to the tools which they can use to manage their own work lives.  This is the key innovation, because empowering employees in this way creates greater job satisfaction, and a sense of ownership and participation within the organization.  Just as we can schedule time with our friends or for our hobbies, we should be able to manage our work lives.

Because we rely so heavily on mobiles, we lead very well-choreographed lives.  Were we to peek at a schedule, our time might look free, but our lives have a habit of forming themselves on-the-fly, sometimes only a few minutes in advance of whatever might be happening.   We hear our mobile chime, then read the latest text message telling us where we should be – picking up the kids, going to the shops, heading to a client.  Our mobiles are already in the driver’s seat.  Fourteen years ago, when I sat at Darling Harbour, waiting for my late friends, we had no sense that we could use pervasive mobile connectivity to manage our schedules and our friends’ schedules so precisely.  Now, it’s just the way things are.

Do we have back office practices which reflect this new reality?  Can an employee poke at their mobile and know where they’re expected, when, and why?  By this, I don’t mean calendaring software (which is important), but rather the rest of the equation, which allows employee and employer to come to a moment-by-moment agreement about the focus of that employment.

This is where we’re going.  The same processes at work in our private lives are grinding away relentlessly within our organizations.  Why should our businesses be fundamentally more restrictive than our family or friends, all of whom have learned how to adapt to the flexibility that the mobile has wrought?  This isn’t a big ask.  It’s not as though our organizations will tip into chaos as employees gain the technical capacity to manage their own time.  This is why policy is important.  Just because anything is possible doesn’t mean it’s a good idea.  Hand-in-hand with the release of tools must come training on how these tools should be used to strengthen an organization, and some warnings on how these same tools could undermine an organization whose employees put their own needs consistently ahead of their employer’s.

Once everyone has been freed to manage their own time, you have a schedule that looks more like Swiss cheese than the well-ordered blocks of time we once expected from the workforce.  Every day will be special, a unique arrangement of hours worked.  Very messy.  You need excellent tracking and reporting tools to tell you who did what, when, and for how long.  Those tools are the other side of the technology equation; give employees control, and you create the demand for a deeper and more comprehensive awareness of employee activities.

Managers can’t spend their days tracking employee comings and goings.  As our schedules become more flexible and more responsive to both employee and organizational needs, the amount of information a manager needs to absorb becomes prohibitive.  Managers need tools which boil down the raw data into easily digestible and immediately apprehensible summaries.
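One way to picture such a summary tool is below – a minimal sketch in which the record format and numbers are invented for illustration:

```python
from collections import defaultdict

# Raw activity records: (employee, task, hours) -- an assumed format,
# the kind of data a self-scheduling workforce generates.
records = [
    ("Alice", "payroll", 3.5),
    ("Bob", "support", 2.0),
    ("Alice", "support", 1.5),
    ("Bob", "payroll", 4.0),
]

def dashboard(records):
    """Boil the raw data down to per-employee totals -- the sort of
    at-a-glance summary a manager can actually absorb."""
    totals = defaultdict(float)
    for employee, task, hours in records:
        totals[employee] += hours
    return dict(totals)

summary = dashboard(records)
```

A real dashboard would slice the same data many ways (by task, by day, by department), but the principle is the same: aggregation, not raw comings and goings.
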

Not long ago, I did quite a bit of IT consulting for a local council, and one thing I heard from the heads of each of the council’s departments was how much the managers at the top needed a ‘dashboard’, which could give them a quick overview of the status of their departments, employee deployment, and the like.  A senior executive needs to be able to glance at something – either on their desktop computer, or with a few pokes on their mobile – and know what’s going on.

Something necessary for senior management has utility throughout the organization.  The kind of real-time information that makes a better manager also makes a better organization, because employees who have access to real-time updates can change their own activities to meet rising demands.  The same flexibility which allows employees to schedule themselves also creates the opportunity for a thoroughly responsive and reconfigurable workforce, able to turn on a dime, because it is plugged in and well-aware of the overall status of the organization.

That’s the real win here; employees want flexibility to manage their own lives, and organizations need that flexibility to be able to respond quickly to both crises and opportunities.  The mobile empowers both employee and organization to meet these demands, provided there is sufficient institutional support to make these moment-to-moment changes effortless.

This is a key point.   Where there is friction in making a change, in updating a schedule, or in keeping others well-informed, those points of friction become the pressure points within the organization.  An organization might believe that it can respond quickly and flexibly to a crisis, only to find – in the midst of that crisis – that there is too much resistance to support the organizational demand for an instant change of focus.  An organization with too much friction divides its capabilities, becoming less effective through time, while an organization which has smoothed away those frictions multiplies its capabilities, because it can redeploy its human resources at the speed of light.

Within a generation, that’s the kind of flexibility we will all expect from every organization.  With the right tools in hand, it’s easy to imagine how we can create organizations that flow like water while remaining internally coherent.  We’re not there yet, but the pieces are now in place for a revolution which will reshape the organization.


III: Exposing Expertise

It’s all well and good to have a flexible organization, able to reconfigure itself as the situation demands, but that capability is useless unless supported by the appropriate business intelligence.  When a business pivots, it must be well-executed, lest it fly apart as all of the pieces fall into the wrong places, hundreds of square pegs trying to fill round holes.

Every employee in an organization has a specific set of talents, but these talents are not evenly distributed.  Someone knows more about sales, someone else knows more about marketing, or customer service, or accounting.  That’s why people have roles within an organization; they are the standard-bearers for the organization’s expertise.

Yet an employee’s expertise may lie across several domains.  Someone in accounting may also provide excellent customer service.  Someone in manufacturing might be gifted with sales support.  A salesman might be an accomplished manager.  People come into your organization with a wide range of skills, and even if they don’t have an opportunity to share them as part of their normal activities, those skills represent resources of immense value.

If only we knew where to find them.

You see, it isn’t always clear who knows what, who’s had experience where, or who’s been through this before.  We do not wear our employment histories on our sleeves.  Although we may enter an organization with our c.v. in hand, once hired it gets tucked away until we start scouting around for another job.  What we know and what we’ve done remains invisible.  Our professional lives look a lot like icebergs, with just a paltry bit of our true capabilities exposed to view.

One of the functions of a human resources department is to track these employee capabilities.  Historically, these capabilities have been strictly defined, with an appropriately circumscribed set of potentials.  These skills fill well-defined slots in the organization.  This model fit well when organizations treasured stability and order over flexibility and responsiveness.  But an organization that needs to pivot and reorient itself as conditions arise will ask employees to be ready to assume a range of roles as required.

How does an organization become aware of the potential hidden away within its employees?

I look out this afternoon and see an audience, about whom I know next to nothing.  There are deep reservoirs of knowledge and experience in this room, reservoirs that extend well beyond your core skills in payroll and human resources.  But I can’t see any of it.  I have no idea what we could do together, if we had the need.  We probably have enough skills here to create half a dozen world-class organizations.  But I’m flying blind.

You’re not.  Human resources is more than hiring and compliance.  It is an organizational asset, because HR is the keeper of the human inventory of skills and experiences.   As an employee interviews for a position and is hired, do you translate their c.v. into a database of expertise?  Do you sit them down for an in-depth interview which would uncover any other strengths they bring into the organization?  Or is this information simply lying dormant, like a c.v. stashed away in a drawer?
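A minimal sketch of what such a ‘database of expertise’ might look like follows – the names, data and function are illustrative assumptions, not any real HR system:

```python
# Each employee's recorded skills, as might be captured from a c.v.
# or an in-depth interview (illustrative data only).
skills = {
    "Alice": {"accounting", "customer service"},
    "Bob": {"manufacturing", "sales support"},
    "Carol": {"sales", "management"},
}

def who_knows(skill, inventory):
    """HR as switchboard: given a need, return everyone whose
    recorded expertise can meet it, wherever they sit on the org chart."""
    return sorted(name for name, employee_skills in inventory.items()
                  if skill in employee_skills)

# A manager facing a customer-service crunch can now look beyond
# the customer-service department -- and find Alice, in accounting.
```

The point is not the code but the practice: unless the c.v. and the interview are translated into something queryable, the expertise stays locked in the drawer.
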

The technology to capture organizational skills is already widely deployed.  In many cases you don’t need much more than your normal HR tools.  This isn’t a question of tools, but rather, how those tools get used.  Every HR department everywhere is like a bank vault loaded up with cash and precious metals.  You could just close the vault, leaving the contents to moulder unused.  Or you can take that value and lend it out, making it work for you and your organization.

That’s the power of an HR department which recognizes that business intelligence about the intelligence and expertise within your organization acts like a force multiplier.  A small organization with a strong awareness of its expertise punches far above its weight.  A large organization with no such awareness consistently misses opportunities to benefit from its unique excellence.

You hold the keys to the kingdom, able to unlock a revolution in productivity which can take your organizations to a whole new level of capability.  When anyone in the organization can quickly learn who can help them with a given problem, then reach that person immediately – which they now can, given everyone has a mobile – you have effectively swept away much of the friction which keeps organizations from reaching their full potential.

Consider the tools you already employ.  How can they be opened up to give employees an awareness of the depth of talent within your organization?  How can HR become a switchboard of capabilities, connecting those with needs to those who have proven able to meet those needs?  How can a manager gain a quick understanding of all of the human resources available throughout the organization, so that a pivot becomes an effortless moment of transition, not a chaotic chasm of confusion?

This is the challenge for the organizations of the 21st century.  We have to learn how to become flexible, fluid, responsive and mobile.  We have to move from ignorance into awareness.  We have to understand that the organization as a whole benefits from an expanded awareness of itself.  We have to do these things because newer, more nimble competitors will force these changes on us.   Organizations that do not adapt to the workforce and organizational movements toward flexibility and fluidity will be battered, dazed and confused, staggering from crisis to crisis.  Better by far to be on the front foot, walking into the future with a plan to unleash the power within our organizations.


Why We Wiki


When I was a young man, I was obsessed by computers. I remember perfectly the first time I sat at a keyboard – at a “line printing” terminal, which had an endless sheet of paper spooling through it – to play a game of “Star Trek”. The fascination I felt at that moment has never really ended, nor the sense of wonder, or the desire to dive in and learn everything about this seemingly magical machinery. My timing was excellent; within a few years the first “microcomputers”, such as the Tandy TRS-80, came onto the market at affordable prices, and I could plumb the guts of computing with my very own machine.

This was incredibly fortuitous, because I was not a good student at University; or rather, I excelled at some classes and completely failed others. I had not yet learned the discipline to apply myself to unpleasant tasks (even today, nearly thirty years later, it presents difficulties), so my grades were a perfect reflection of my obsessions. If something interested me, I got As. Otherwise, well, my transcript speaks for itself. The University noted this as well, and politely asked me to “get lost” for a few years, until I had acquired the necessary discipline to focus on my education. That marked the end of my formal education, but that doesn’t mean I stopped learning. Far from it.

From my earliest years, I have been a sponge for information; my parents bought me the World Book Encyclopedia when I was six – twenty red-and-black leather-bound volumes, full of photographs and illustrations – and by the time I was eight, I’d read the whole thing. (I hadn’t memorized it, but I had read through it.) Once I discovered computers, I devoured anything I could find on the subject, in particular the January 1975 issue of Popular Electronics, which featured the MITS Altair 8800 – the world’s first microcomputer – on its cover. I dived in, learning everything about microcomputers: how they worked, how to program them, what they could be used for, until I had one of my own. Then I learned everything about that computer (the Tandy TRS-80), its CPU (the Zilog Z-80), experimented with programming it in BASIC and assembly language, becoming completely obsessive about it.

When I found myself tossed out of University, my obsession quickly turned into a job offer programming Intel 8080 systems (very similar to the Z-80), which led to a fifteen-year career as a software engineer, for which I was well paid, and within a field where my lack of University degrees in no way hindered my professional advancement. In the 1980s, nearly everyone working within microcomputing was an autodidact; almost none of these people had completed a university degree. I had the fortune to work with a few truly brilliant programmers in my earliest professional years, who mentored me in best programming practices. I learned from their own expertise as they transferred their wealth of experience and helped me to make it my own.

It is said that programming is more of a craft than a profession, in that it takes years of apprenticeship, working under masters of the craft, to reach proficiency. This is equally true of most professions: medicine, the law, even (or perhaps, especially) such arcana as synthetic chemistry. At its best, post-graduate education is a mentorship process which wires the obsessions of the apprentice to the wisdom of the master. The apprentice proposes, the master disposes, until the apprentice surpasses the master. The back-and-forth informs both apprentice and master; both are learning, though each learn different things.


Everyone is an expert. From a toddler, expert in the precarious balance of towering wooden blocks, to a nanotechnologist, expert in the precarious juxtaposition of atom against atom, everyone has some field of study wherein they excel – however esoteric or profane it might seem to the rest of us. The hows and whys of this are essential to human nature; we’re an obsessive species, and our obsessions can form around almost any object which engages our attentions. Most of these obsessions seem completely natural, in context: a Pitjandjara child learns an enormous amount about the flora and fauna of the central Australian desert, knows where to find water and shade, can recite the dreamings which place her within the greater cosmos. In the age before agriculture, all of us grew up with similar skills, each of us entirely obsessed with the world around us, because within that obsession lay the best opportunity for survival. Those of our ancestors who were most apt with obsession (up to a point) would thrive even in the worst of times, passing those behaviors (some genetic, some cultural) down through time to ourselves. But obsession is not a vestigial behavior; the entire bedrock of civilization is built upon it: specialization, that peculiar feature of civilization, where each assumes a particular set of duties for the whole, is simply obsession by another name.

A century ago, Jean Piaget realized that small children are obsessed with the physics of the world. Piaget watched as his own children struggled, inchoate, with elaborate hypotheses of causality, volume, and difference, constantly testing their own theories of how the world works, an operation as intent as any performed in the laboratory.

Language acquisition is arguably the most marvelous of all childish obsessions; in the space of just a few years – coincident with developments in the nervous system – the child moves from sonorous babbling into rich, flexible, meaningful speech – a process which occurs whether or not explicit instruction is given to the child. In fact, the only way to keep a child from learning language is to separate them from the community of other human beings. Even the banter of adults is enough for a child to grow into language.

Somewhere between early childhood and early adulthood the thick rope of obsession unwinds to a few mere threads. Most of us are not that obsessive, most of the time, or rather, our obsessions have shifted from the material to the immaterial. Adolescent girls become obsessive people-watchers, huddling together in cliques whose hierarchies and connections are so rich and so obscure as to be worthy of any hermetic cult. This process occurs precisely at the time their highest brain functions are realized, when they become acutely aware of the social networks within which they operate. Physics pales into insignificance when weighed against the esteem (or contempt) of one’s peers. This, too, is a survival mechanism: women, as the principal caregivers, need strong social networks to ensure that their offspring are well-cared for. Women who obsessively establish and maintain strong social networks deliver their children a decisive advantage in life, and so pass this behavior along to their children. Or so the thinking goes.


Mentoring is an embodied relationship, and does not scale beyond individuals. The sharing of expertise, on the other hand, has grown hand in hand with the printing press, the broadcast media, and the Web. Publishing and broadcasting both act as unintentional gatekeepers on the sharing of expertise; the costs of publishing a book (or magazine, or pamphlet), and the costs of broadcast spectrum set a lower limit on what specific examples of individual expertise make the transition into the public mind. For every Julia Child or Nigella Lawson, there are a thousand cooks who produce wonders from their kitchens; for every Simon Schama or David Halberstam, there are a thousand historians (most of whom are not white English-speakers) spinning tales of antiquity. These voices were lost to us, because they could not negotiate the transition into popularity. This should not be read as a flat assessment of quality, but as a critique of the function of the market. Mass markets thrive on mass tastes; the specific is sacrificed for the broad. Yet the specific is often far more significant to the individual, containing within itself the quality of salience. Salience – that which is significant to us – is driven by our obsessions; things are salient because we are obsessed by them. The “salience gap” between the expertise delivered by the marketplace, and the burning thirst for knowledge of obsessed individuals has finally collapsed with the introduction of the Wiki.

At its most essential, a Wiki is simply a web page that is editable within a Web browser. While significant, that is not enough to explain why Wikis have unlocked humanity’s hidden and vast resources of expertise. That you can edit a web page in situ is less important than the goal of the editor. It took several years before it occurred to anyone that the editor could use a Wiki to share expertise. However, once that innovation occurred, it was rapidly replicated throughout the Internet on countless other Wikis.
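The essential mechanism is small enough to sketch in a few lines – a toy in-memory wiki, not any real implementation:

```python
class Wiki:
    """A toy wiki: pages anyone can read and edit in place,
    with each page's earlier versions preserved."""

    def __init__(self):
        self.pages = {}     # title -> current text
        self.history = {}   # title -> list of prior versions

    def read(self, title):
        return self.pages.get(title, "")

    def edit(self, title, text):
        # Every edit is in situ: the new text simply replaces the old,
        # and the old version joins the page's history.
        if title in self.pages:
            self.history.setdefault(title, []).append(self.pages[title])
        self.pages[title] = text

w = Wiki()
w.edit("Z-80", "An 8-bit CPU.")
w.edit("Z-80", "An 8-bit CPU from Zilog, 1976.")
# The page now holds the latest text; the earlier version remains on record.
```

Everything else – markup, linking, access control – is elaboration on this core: a shared, editable, remembered page.
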

Early in this process, Wikipedia launched and began its completely unexpected rise into utility. In some ways, Wikipedia has an easy job: as an encyclopedia it must provide a basic summary of facts, not a detailed exploration of a topic, and it is generally possible for someone with a basic background in a topic to provide this much information. Yet this critique overlooks the immense breadth of Wikipedia (as of this writing, nearly 2.3 million articles in its English-language version). By casting its net wide – inviting all experts, everywhere, to contribute their specific knowledge – not only has Wikipedia covered the basics, it’s also covering everything else. No other encyclopedia could hope to be as comprehensive as Wikipedia, because no group of individuals – short of the billion internet-connected individuals who have access to Wikipedia – could be so comprehensively knowledgeable on such a wide range of subjects.

Wikipedia will ever remain a summary of human knowledge; that is its intent, and there are signs that the Wikipedians are becoming increasingly zealous in their enforcement of this goal. Summaries are significant and important (particularly for the mass of us who are casually interested in a particular topic), but summaries do not satisfy our obsessive natures. Although Wikipedia provides an outlet for expertise, it does not cross the salience gap. This left an opening for a new generation of Wikis designed to provide deep immersion in a particular obsession. (Jimmy Wales, the founder of Wikipedia, realized this, and created Wikia.com as a resource where these individuals can create highly detailed Wikis.)

While one individual may have an obsession, it takes a community of individuals, sharing their obsession, to create a successful Wiki. No one’s knowledge is complete, or completely accurate. To create a resource useful to a broader community – who may not be as deeply obsessed – this “start-up” community must pool both their expertise and their criticism. Beginnings are delicate times, and more so for a Wiki, because obsessive individuals too often tie their identity to their expertise; questioning their expertise is taken as a personal affront. If the start-up community cannot get through this first crisis, the Wiki will fail.

Furthermore, it takes weeks or months to get a sufficient quantity of expertise into a Wiki. A Wiki must reach “critical mass” before it has enough “gravitational attraction” to lure other obsessive individuals, who, it is hoped, will make their own edits and additions. Thus, the start-up phase isn’t merely contentious, it’s also thankless – there are few visible results for all of the hard work. If the start-up community lacks discipline in equal measure to its forbearance, the Wiki will fail.

Given these natural barriers, it’s a wonder that Wikis ever succeed. The vast majority of Wikis are stillborn, but those which do succeed in attracting the attention of the broader community of obsessive individuals cross the salience gap, and, in that lucky moment, the Wiki begins to grow on its own, drawing in expertise from a broad but strongly-connected social network, because individuals obsessed with something tend to have strong connections to other, similar individuals. Very quickly the knowledge within the community is immensely amplified, as knowledge and expertise pour out of individual heads and into the Wiki.

This phenomenon – which I have termed “hyperintelligence” – creates a situation where the community is smarter as a whole (and as individuals) because of their interactions with the Wiki. In short, the community will be more effective in the pursuit of its obsession because of the Wiki, and this increase in effectiveness will make them more closely bound to the Wiki. This process feeds back on itself until the idea of the community without the Wiki becomes quite literally unthinkable. The Wiki is the “common mind” of the community; for this reason it will be contentious, but, more significantly, it will be vital, an electronic representation of the power of obsession, an embodied form of the community’s depth of expertise.

What this community does with its newfound effectiveness is the open question.

That Business Conversation

Case One: Lists

I moved to San Francisco in 1991, because I wanted to work in the brand-new field of virtual reality, and San Francisco was the epicenter of all commercial development in VR. The VR community came together for meetings of the Virtual Reality Special Interest Group at San Francisco’s Exploratorium, the world-famous science museum. These meetings included public demonstrations of the latest VR technology, interviews with thought-leaders in the field, and plenty of opportunity for networking. At one of the first of those meetings I met a man who impressed me by his sheer ordinariness. He was an accountant, and although he was enthusiastic about the possibilities of VR, he wasn’t working in the field – he was simply interested in it. Still, Craig Newmark was pleasant enough, and we’d always engage in a few lines of conversation at every meeting, although I can’t remember any of these conversations very distinctly.

Newmark met a lot of people – he was an excellent networker – and fairly quickly built up a sizeable list of email addresses for his contacts, whom he kept in touch with through a mailing list. This list, known as “Craig’s List”, became a de facto bulletin board for the core web and VR communities in San Francisco. People would share information about events in town, or observations, or – more frequently – they’d offer up something for sale, like a used car or a futon or an old telly.

As more people in San Francisco were sucked into the growing set of businesses which were making money from the Web, they too started reading Craig’s List, and started contributing to it. By the middle of 1995, there was too much content to be handled neatly in a mailing list, so Newmark – who, like nearly everyone else in the San Francisco Web community, had some basic web authoring skills – created a very simple web site which allowed people to post their own listings to the Web site. Newmark offered this service freely – his way of saying “thank you” to the community, and, equally important, his way of reinforcing all of the social relationships he’d built up in the last few years.

Newmark’s timing was excellent; Craigslist came online just as many, many people in San Francisco were going onto the Web, and Craigslist quickly became the community bulletin board for the city. Within a few months you could find a flat for rent, a car to drive, or a date – all in separate categories, neatly organized in the rather-ugly Web layout that characterized nearly all first-generation websites. If you had a car to sell, a flat to sublet, or you wanted a date – you went to Craigslist first. Word of mouth spread the site around, but what kept it going was the high quality of the transactions people had through the site. If you sold your bicycle through Craigslist, you’d be more likely to look there first if you wanted to buy a moped. Each successful transaction guaranteed more transactions, and more success, and so on, in a “virtuous cycle” which quickly spread beyond San Francisco to New York, Los Angeles, Seattle, and other well-connected American cities.

From the very beginning, everything on Craigslist was freely available – it cost nothing to list an item or to view listings. The only thing Newmark ever charged for was job listings – one of the most active areas on Craigslist, particularly in the heyday of the Web bubble. Job listings alone paid for all of the rest of the operational costs of Craigslist – and left Newmark with a healthy profit, which he reinvested into the business, adding capacity and expanding to other cities across America. Within a few years, Newmark had a staff of nine people, all working out of a house in San Francisco’s Sunset District – which, despite its name, is nearly always foggy.

While I knew about Craigslist – it was hard not to – I didn’t use it myself until 2000, when I left my professorial housing at the University of Southern California. I was looking for a little house in the Hollywood Hills – a beautiful forested area in the middle of the city. I went onto Craigslist and soon found a handful of listings for house rentals in the Hollywood Hills, made some calls and – within about 4 hours – had found the house of my dreams, a cute little Swiss cottage that looked as though it fell out of the pages of “Heidi”. I moved in at the beginning of June 2000, and stayed there until I moved to Sydney in 2003. It was perhaps the nicest place I’d ever lived, and I found it – quickly and efficiently – on Craigslist. My landlord swore by Craigslist; he had a number of properties, scattered throughout the Hollywood Hills, and always used Craigslist to rent his properties.

In late 2003, when I first came to Australia on a consulting contract – and before I moved here permanently – I used Craigslist again, to find people interested in sub-letting my flat while I worked in Sydney. Within a few days, I had the couple who’d created Dora the Explorer – a very popular children’s television show – living in my house, while they pursued a film deal with a major studio. When I came back to Los Angeles to settle my affairs, I sold my refrigerator on Craigslist, and hired a fellow to move the landlord’s refrigerator back into my flat – on Craigslist.

In most of the United States, Craigslist is the first stop for people interested in some sort of commercial transaction. It is now the 65th busiest website in the world, the 10th busiest in the United States – putting it up there with Yahoo!, Google, YouTube, MSN and eBay – and has about nine billion page views a month. None of the pages have advertising, nor are there any charges, except for job listings (and real estate listings in New York to keep unscrupulous realtors from flooding Craigslist with duplicate postings). Although it is still privately owned, and profits are kept secret, it’s estimated that Craigslist earns as much as USD $150 million from its job listings – while, with a staff of just 24 people, it costs perhaps a few million a year to keep the whole thing up and running. Quite a success story.

But everything has a downside. Craigslist has had an extraordinary effect on the entire publishing industry in North America. Newspapers, which funded their expensive editorial operations from the “rivers of gold” – car advertisements, job listings and classified ads – have found themselves completely “hollowed out” by Craigslist. Although the migration away from print to Craigslist began slowly, it has accelerated in the last few years, to the point where most people, in most circumstances, will prefer to place a free listing on Craigslist rather than a paid listing in a newspaper. The listing will reach more people, and will cost them nothing to do so. That is an unbeatable economic proposition – unless you’re a newspaper.

It’s estimated that newspapers are losing upwards of one billion dollars a year in advertising revenue to Craigslist. This money isn’t flowing into Craig Newmark’s pocket – or rather, only a small amount of it is. Instead, because the marginal cost of posting an ad to Craigslist is effectively zero, Newmark is simply using the disruptive quality of pervasive network access to completely undercut the newspapers while, at the same time, providing a better experience for his customers. It is a proposition that is making Newmark a very rich man, even as it drives the Los Angeles Times ever closer to bankruptcy.

This is not Newmark’s fault, even if it is his doing. Newmark had the virtue of being in the right place (San Francisco) at the right time (1995) with the right idea (a community bulletin board). Everything that happened after that was driven entirely by the community of Craigslist’s users. This is not to say that Newmark isn’t incredibly responsive to the needs of the Craigslist community – he is, and that responsiveness has served him well as Craigslist has grown and grown. But if Newmark hadn’t thought up this great idea, someone else would have. Nothing about Craigslist is even remotely difficult to create. A fairly ordinary web designer could duplicate Craigslist’s features and functionality in less than a week’s worth of work. (But why bother? It already exists.) Newmark was serving a need that no one even knew existed until after it had been met. Today, it seems perfectly obvious.

In a pervasively networked world, communities are fully empowered to create the resources they need to manage their lives. This act of creation happens completely outside of the existing systems of commerce (and copyright) that have formed the bulwarks of industrial age commerce. If an entire business sector gets crushed out of existence as a result, it’s barely even noticed by the community. This incredible empowerment – which I term “hyperempowerment” – is going to be one of the dominant features of public life in the 21st century. We have, as individuals and as communities, been gifted with incredible new powers – really, almost mutant ‘super powers’. We use them to achieve our own ends, without recognizing that we’ve just laid a city to waste.

Craigslist has not taken off in Australia. There are Craigslist sites for the “five capital cities” of Australia, but they’re only very infrequently visited. And, because they are only infrequently visited, they haven’t been able to build up enough content or user loyalty to create the virtuous cycle which has made Craigslist such a success in the United States. Why is this? It could be that the Trading Post already has such a hold on the mindset of Australians that it’s the first place they think to place a listing. The Trading Post’s fees are low (fifty cents for a single non-car item), it’s widely recognized, and it reaches a large community. That may be one reason.

Still, organizations like Fairfax and NEWS are scared to death of Craigslist. Back in 2004, Fairfax Digital launched Cracker.com.au, which provides free listings for everything except cars and jobs – categories which point back into the various paid Fairfax advertising websites. Australian newspaper publishers have already consigned classified advertising to the dustbin of history; they’re just waiting for the axe to fall. When it does, the Trading Post – among the most valuable of Telstra/Sensis properties – will be almost entirely worthless. Telstra’s stockholders will scream, but the Australian public at large won’t care – they’ll be better served by a freely available resource which they’ve created and which they use to improve their business relations within Australia.

Case Two: Listings

In order to preserve business confidentiality, I won’t mention the name of my first Australian client, but they’re a well-known firm, publishers of traveler’s guides. The travel business, when I came to it in early 2006, was nearly unchanged from its form of the last fifty years: you send a writer to a far-away place, where they experience the delights and horrors of life, returning home to put it all into a manuscript which is edited, fact-checked, copy-edited, typeset, published and distributed. Book publishing is a famously human-intensive process – it takes an average of eighteen months for a book from a mainstream publisher to reach the marketplace, because each of these steps takes time, effort and a lot of dollars. Nevertheless, a travel guide might need to be updated only twice a decade, and with global distribution it has always been fairly easy to recover the investment.

When I first met with my client, they wanted to know what might figure in the future of publishing. It turns out they knew the answer better than I did: they quickly pointed me to a new website, TripAdvisor.com. Although it is a for-profit website – earning money from bookings made through it – the various reviews and travel information provided on TripAdvisor.com are “user generated content,” that is, provided by the folks who use TripAdvisor.com. Thus, a listing for a particular hotel will contain many reviews from people who have actually stayed at the hotel, each of whom has their own peccadilloes, needs, and interests. Reading through a handful of the reviews for any given hotel will give you a fairly rounded idea of what the establishment is really like.

This model of content creation and distribution is the exact opposite of the time-honored model practiced by travel publishers. Instead of relying on an authoritative reviewer, the reviewing task is “crowdsourced” – given over to the community of users to handle. The theory is that with enough reviews, a cogent body of opinion will emerge. While this seems fanciful on the face of it, it’s been proven time and again that this is an entirely successful model of knowledge production. Wikipedia, for example, has built an entire and entirely authoritative encyclopedia from user contributions – a body of knowledge far larger and at least as accurate as its nearest competitor, Encyclopaedia Britannica.

It’s still common for businesses to distrust user generated content. Movie studios nicknamed it “loser generated content”, even as their audiences turned from the latest bloated blockbuster toward YouTube. Britannica pooh-poohed Wikipedia, until an article in Nature, that bastion of scientific reporting, indicated that, on average, a Wikipedia article was nearly as accurate as a given article in Britannica. (This report came out in December 2005. Today, it’s likely an article in Wikipedia would be more accurate than an article in Britannica.) In short, businesses reject the “wisdom of crowds” at their peril.

We’ve only just discovered that a well-networked body politic has access to deep reservoirs of very specific knowledge; in some peculiar way, we are all boffins. We might be science boffins, or knitting boffins, or gearheads, or simply know everything that’s ever been said about Stoner Rock. It doesn’t matter. We all have passions, and now that we have a way of sharing these passions with the world-at-large, this “collective intelligence” far outclasses any professional organization seeking to serve up little slices of knowledge. This is a general challenge confronting all businesses and institutions in the 21st century. It’s quite commonplace today for a patient to walk into a doctor’s surgery knowing more about the specifics of an illness than the doctor does; this “Wikimedicine” is disparaged by medical professionals – but the truth is that an energized and well-networked community generally serves its members better than any particular professional elite.

So what to do about travel publishing in the era of TripAdvisor.com, and WikiTravel (another source of user-generated tourist information), and so on? How can a business possibly hope to compete with the community it hopes to profitably serve? When the question is put like this, it seems insoluble. But that simply indicates that the premise is flawed. This is not an us-versus-them situation, and here’s the key: the community, any community, respects expertise that doesn’t attempt to put on the airs of absolute authority. That travel publisher has built up an enormous reservoir of goodwill and brand recognition, and, simply by changing its attitude, could find a profitable way to work with the community. Publishers are no longer treated like Moses, striding down from Mount Sinai, commandments in hand. Publishing is a conversation, a deep engagement with the community of interest, where all parties are working as hard as they can to improve the knowledge and effectiveness of the community as a whole.

That simple transition – from shoveling books out the door to building knowledge within a community – has far-reaching consequences. The business must refashion its own editorial processes and sensibilities around the community. Some of the job of winnowing the wheat from the chaff must be handed to the community, because there’s far too much for the editors to handle on their own. Yet the editors must be able to identify the best work of the community, and give that work pride of place, in order to improve the perceived value of their role within the community.

Does this mean that the travel guide book is dead? Unlike a website, a book is neither dynamic nor flexible. But neither does a book need batteries or an internet connection. Books have evolved through half a millennium of use into something that we find incredibly useful – even when resources are available online, we often prefer to use books. They are comfortable and very portable.

The book itself may be changing. It may not be something that is mass produced in lots of tens of thousands; rather, it may be individually printed for a community member, drawn from their own needs and interests. It represents their particular position and involvement, and is thus utterly personal. The technology for single-run publishing is now widespread; it isn’t terribly expensive to print a single copy of a book. When that book can reflect the best editorial efforts of a brand known for high-quality travel publications, plus the very best of the reviews and tips offered by an ever-growing community of travelers, it becomes something greater than the sum of its parts, a document in progress, an on-going evolution toward greater utility. It is an encapsulation of a conversation at a particular moment in time, necessarily incomplete, but, for that reason, intensely valuable.

Conversation is the mode not just for business communications, but for all business in the 21st century. Businesses which cannot seize on the benefits of conversation with the communities they serve will simply be swept aside (like newspapers) by communities in conversation. It is better to be in front of that wave, leading the way, than to drown in the riptide. But this is not an easy transition to make. It involves a fundamental rethinking of business practices and economic models. It’s a choice that will confront every business, everywhere, sometime in the next few years.

Case Three: Delisted

My final case study involves a recent client of mine, a very large university in New South Wales. I was invited in by the Director of Communications, to consult on a top-down redesign of the university’s web presence. After considerable effort and expenditure, the university had learned that their website was more-or-less unusable, particularly when compared against its competitors. It took users too many clicks to find the information they wanted, and that information wasn’t collated well, forcing visitors to traverse the site over and over to find the information they might want on a particular program of study. The new design would streamline the site, consolidate resources, and help prospective students quickly locate the information they would need to make their educational decisions.

That was all well and good, but a cursory investigation of web usage at the university indicated a larger and more fundamental problem: students had simply stopped using the online resources provided by the university, beyond the bare minimum needed to register for classes. The university had failed to keep up with innovations in the Web, falling dramatically out-of-step with its student population, who are all deeply engaged in emailing, social networking, blogging, photo sharing, link sharing, video sharing, and crowdsourcing. Even more significantly, the faculty of the university had set up many unauthorized web sites – using university computing resources – to provide web services that the university had not been able to offer. Both students and faculty had “left the farm” in search of the richer pastures found outside the carefully maintained walls of university computing. This collapse in utility has led to a “vicious cycle,” for the less the student or faculty member uses university resources, the less relevant they become, moving in a downward spiral which eventually sees all of the important knowledge creation processes of the university happening outside its bounds.

As the relevant information about the university (except what the university says about itself) escapes the confines of university resources, another serious consequence emerges: search engines no longer put the university at the top of search queries, simply because the most relevant information about the university is no longer hosted by the university. The organization has lost control of the conversation because it neglected to stay engaged in that conversation, tracking where and how its students and faculty were using the tools at hand to engage themselves in the processes of learning and knowledge formation. A Google search on a particular programme at the university could turn up a student’s assessment of that programme as the most relevant result, not the university’s authorized page.

This is a bigger problem than the navigability of a website, because it directly challenges the university’s authority to speak for itself. In the United States, the website RateMyProfessors.com has become the bane of all educational institutions, because students log onto the site and provide (reasonably) accurate information about the pedagogical capabilities of their instructors. An instructor who is a great researcher but a lousy teacher is quickly identified on this site, and students steer clear, having learned from their peers the pitfalls of a bad decision. On the other hand, students flock to lectures by the best lecturers, and these professors become hot items, either promoted to stay in place, or lured away by strong counter-offers. The collective intelligence of the community is running the show now, and that voice will only become stronger as better tools are developed to put it to work.

What could I offer as a solution for my client? All I could do was prescribe some bitter medicine. Yes, I told them, go forward with the website redesign – it is both necessary and useful. But I advised them to use that redesign as a starting point for a complete rethink of the services offered by the university. Students should be able to blog, share media, collaborate and create knowledge within the confines of the university, and it should be easier to do that there than through any outside alternative. Only when the grass is greener in the paddock will the university be able to bring the students and faculty back onto the farm.

Furthermore, I advised the university to create the space for conversation within the university. Yes, some of it will be defamatory, or vile, or just unpleasant to hear. But the alternative – that this conversation happens elsewhere, outside of your ability to monitor and respond to it – would eventually prove catastrophic. Educational institutions everywhere – and all other institutions – are facing similar choices: do they ignore their constituencies or engage with them? Once engaged, how does that change the structure and power flows within their institutions? Can these institutions reorganize themselves, so that they become more permeable, pliable and responsive to the communities which they serve?

Once again, these are not easy questions to answer. They touch on the fundamental nature of institutions of all varieties. A commercial organization has to confront these same questions, though the specifics will vary from organization to organization. The larger an organization grows, the louder the cry for conversation, and the more pressing its need. The largest institutions in Australia are the most vulnerable to this sudden change in attitudes, because here it is most likely that sudden self-organizations within the body politic will rise to challenge them.

Conclusion: Over?

As you can see, the same themes appear and reappear in each of these three case studies. In each case some industry sector or institution confronts a pervasively networked public which can out-think, out-maneuver and massively out-compete an institution which formed in an era before the rise of the network. The balance of power has shifted decisively into the hands of the networked public.

The natural reaction of institutions of all stripes is to resist these changes; institutions are inherently conservative, seeking to cling to what has worked in the past, even if the past is no longer any guide to the future. Let me be very clear on this point: resistance is futile, and worse, the longer you resist, the stronger the force you will confront. If you attempt to dam up the tide of change, you will only ensure that the ensuing deluge will be that much greater. The pressure is rising; we are already pervasively networked in Australia, with nearly every able adult owning a mobile phone, with massive and growing broadband penetration, and with an increasing awareness that communities can self-organize to serve their own needs.

Something’s got to give. And it’s not going to be the public. They can’t be whipped or cowed or forced back into antique behaviors which no longer make sense to them. Instead, it is up to you, as business leaders, to embrace the public, engaging them in a continuous conversation that will utterly transform the way you do business.

No business is ever guaranteed success, but unless you embrace conversation as the essential business practice of the 21st century, you will find someone else, more flexible and more open, stealing your business away. It might be a competitor, or it might be your customers themselves, fed up with the old ways of doing business, and developing new ways to meet their own needs. Either way, everything is about to change.

Cui Bono?


Two months ago, at a Los Angeles dinner party, AOL Vice President Jason Calacanis found himself seated next to Wikipedia founder Jimmy Wales. The two talked about the emerging culture of Web2.0, the changes in copyright and ownership, and – this has been a particular topic of concern for Wales over the past months – what kinds of copyrights Wales would purchase and place into the public domain if he had a spare hundred million dollars. Wales doesn’t have this kind of money (Wikipedia relies on donations from its grateful users to keep its servers up and running) but lately he’s been very publicly asking this “What if?” question.

Calacanis had a quick come-back. “If you just put a banner ad on the front page of Wikipedia,” he said, “you’d be able to earn a hundred million dollars a year.” Wikipedia, as the 15th most-visited site on the Web, could easily earn the type of advertising revenue that Google, Yahoo!, MSN and only a few others generate. Although Wales reacted to the suggestion with a mixture of shock and horror, Calacanis pressed his point. “C’mon, Jimmy, you’re just leaving that money on the table! Heck, let me do it, and AOL will give you the hundred million dollars – every year! Imagine what kind of copyrights you could purchase with that kind of money!”

Calacanis blogged this conversation later, wholly enjoying his Mephistophelian role, while blithely ignoring the ethical implications of his offer to Wales. If Wales owned Wikipedia – owned the copyrights, the site, the servers, the infrastructure, the employees, etc. – he could accept Calacanis’ tempting offer. But that choice is not Wales’ to make, because the ownership of Wikipedia does not fit neatly into any category of property thus far constituted. Wikipedia articles are published under a free content license: anyone can reuse them freely, and no contributor retains exclusive commercial rights to their work. Yet Wikipedia does not suffer from the “tragedy of the commons.” Wikipedia is not the kind of resource that can be exhausted when too many people try to use it. Quite the opposite: as more people use and contribute to Wikipedia, the more valuable it becomes to all its users. The commons is an essential feature of Wikipedia’s success, and that success forever places it outside of the commercial mechanisms which Calacanis advocates. The suggestion of placing advertising in Wikipedia, while immensely attractive, also amounts to a category error. It means you’ve so completely missed the point you might as well be speaking another language.


People who don’t fight over anything else do fight over money. Money (particularly in the United States) is so fraught, so overloaded with meaning, that it nearly always evokes some sort of neurotic reaction. Money means survival. Money means freedom. Money means choice. It may not buy happiness, but, as Mae West once remarked, “I’ve been poor, and I’ve been rich, and rich is better.” Money is so intensely evocative that we have been forced to develop elaborate and relatively fool-proof systems to handle it. Banks and other financial institutions exist precisely because people are rarely rational with their own money: these institutions serve as the collective superego we employ when confronted with choices about money. That these institutions – such as BCCI, or Arthur Andersen – periodically abandon these principles in the pursuit of profit indicates the huge gravitational strength of wealth.

Social scientists and neuropsychologists have recently begun to test the human drive to wealth. One of the most significant findings – released just a few months ago – indicates that we each have an innate sense of fairness in every financial transaction, and we’re more than willing to walk away from a transaction which we deem unfair. Furthermore, we’re willing to punish others for perpetrating those transactions. This cognitive “center of fairness” is one of the last areas of the brain to develop fully – it marks the final stage of adulthood, appearing reliably in adults after about age 22. This means our sense of fairness draws upon many of the foundational cognitive structures of the brain, which help us to understand value, social ranking, need, and so forth. Only when these systems are in place can we develop a notion of fairness. And if any of these systems fail – as does happen, on occasion – psychologists can predict an individual’s descent into psychopathology. Being fair is perhaps our highest cognitive achievement as individuals, and thus – quite rightly – it is marked as the beginning of wisdom.

All of Western civilization balances between the unbounded desire for wealth and the curbing sense of fairness. Gordon Gekko proclaims, “Greed is good. Greed works,” and we nod in agreement, while at the same time we cheer as Enron founder Ken Lay gets convicted for obeying Gekko’s dictates. Our civilizational schizophrenia about money naturally reflects our internal, psychological natures: an old part of our brains wishes to survive, and thrive, while a much newer part recognizes that the best chances for survival come through sharing our resources fairly. Even where the value of sharing can be conclusively demonstrated – as in the case of Wikipedia – the reptile-brain (in this case, wearing Jason Calacanis’ face) pokes through, arguing that a resource conserved can be turned to great profit.

All of this means that as we acquire an ever-greater wealth of shared resources – Wikipedia is only one example – the pressure will constantly increase to earn a profit from them. We can no sooner abandon our reptile-brain instincts than we can stop breathing – indeed, they’re more closely correlated than most people realize. Instead, we must settle for the next best thing: an arrangement of contracts enforced by law (which carries the threat of the State) and social consensus (which carries the threat of ostracism). Neither technique is wholly up to the task, but together they provide a stabilizing influence.


Sharing information carries its own costs and rewards. Much of the work of arbitrageurs draws from some “inside information,” which, were it widely known, would rectify the market inequity the arbitrageur profits from. Thus, there are some situations where sharing presents such a great threat to profit that the drive to fairness is effectively silenced. In most other situations, the sharing of information confers benefit both on the individual offering up the information and the community which receives the information. Individuals identified as experts in a particular area gain in social standing within their communities; this is a form of wealth in itself, and though less tangible than cash, should never be discounted. This social calculus serves as the foundation for many communities, and it is both delicate and constantly in flux: members in every social network are constantly jockeying for position by sharing, aggregating, or critiquing the information.

When the wealth of a community leaves that community – when it is committed to print, or licensed out to a commercial organization – problems immediately arise. The first of these is the question of authorship: is the creator of the information being recognized as the author of the work? If so, the social calculus of expertise expands into a new sphere. If not, it will feel like theft. Next comes the question of money: who profits from the work of another? Cui bono? If the host of the community takes the content generated by that community and realizes a profit from that content, the creators of that content will immediately be afflicted with a number of conflicting feelings. Assuming that attribution has been passed along, there is no loss in social standing. But to see someone else profit from work freely shared strikes at the very heart of fairness. More significantly, this problem will not be solved simply by offering content creators a license fee for their content. They’re not in it for the money. They are not professionals. Their motivations have everything to do with the sharing of expertise in a context that is all about social standing and not about commerce. Mixing these diametrically opposed influences will quickly result in a spiraling series of crises, leading inevitably to the collapse of the community, once its members realize that they’re being “ripped off.”

The only possible solution that would satisfy both the desire to share and the desire for profit relies on a persistent transparency of motives. The host must enter into a negotiated agreement with the members of the community which sets all ground rules for the use of community-generated content. Furthermore, these agreements must be negotiated on an individual basis, so that every participant in a community has the ability to opt-in or opt-out of the exterior financial arrangements of the community. This doesn’t make the situation any less fraught, because financial motives will still come into conflict with the intent of the community, but it does ensure that everyone understands and accepts the rules before they participate in the process of knowledge creation. That will go a long way toward keeping tempers cool when conflicts arise.

All of this conflict was predicted twenty-five years ago by the iconoclastic Ted Nelson, the founding philosopher of hypertext. In his groundbreaking book Literary Machines, Nelson described a publishing system – “transclusion” – which preserved copyright for all content created on his global hypertext system (in Nelson’s case, Project Xanadu, rather than the World Wide Web, though they are nearly equivalent). When any user of Xanadu viewed a document – from whatever source – the system would note the owner of the copyright of that document, and credit them appropriately using a micropayment system. Xanadu preserved both copyright and access to information. What Nelson did not foresee – emerging only after hypertext became widespread – were true peer-production systems, such as Wikipedia.
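Nelson’s scheme is easy to model in miniature. The sketch below is a toy, not Xanadu’s actual design: the class, the document name, and the per-view fee are all invented for illustration. The only idea taken from the text is that every view of a document, from whatever source, credits its copyright holder.

```python
from collections import defaultdict

class XanaduSketch:
    """Toy model of Nelson-style transclusion: every view of a
    document credits its copyright holder with a micropayment."""

    def __init__(self, fee_per_view=0.001):
        self.fee = fee_per_view
        self.owners = {}                  # document id -> copyright holder
        self.ledger = defaultdict(float)  # copyright holder -> credit accrued

    def publish(self, doc_id, owner):
        self.owners[doc_id] = owner

    def view(self, doc_id):
        # Viewing a document, from whatever source, credits its owner.
        owner = self.owners[doc_id]
        self.ledger[owner] += self.fee
        return owner

registry = XanaduSketch()
registry.publish("literary-machines", "Ted Nelson")
for _ in range(1000):
    registry.view("literary-machines")
print(round(registry.ledger["Ted Nelson"], 3))  # 1000 views at 0.001 each
```

The point of the toy is how cleanly it works when each document has exactly one owner – which is precisely the assumption that peer production breaks.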

Although Wikipedia does record who created what, when, and where, this record is so richly threaded that nearly every significant article in Wikipedia has been subjected to countless revisions, additions and deletions. It is possible to know who created any article in Wikipedia, but it is also pointless. Wikipedia is contribution atomized, reduced down to words and phrases which, out of context, bear no value. Only in its collective whole does it have value; that value can not be licensed or sold without being viewed as the most obvious sort of theft. Projects which rely on peer-production inevitably come to resemble Wikipedia’s atomization of contribution. While this represents a legal and ethical minefield, making such systems nearly worthless commercially, it is also the surest metric of success. The more something is shared, the more valuable it becomes. But that doesn’t mean you can sell it.
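The atomization described above can be made concrete. This sketch uses hypothetical authors and revision texts, with Python’s standard difflib, to tag each word of a final article with the contributor whose edit introduced it; even across three small revisions, authorship fragments down to individual words.

```python
import difflib

def attribute(revisions):
    """Given a list of (author, text) revisions, tag each word of the
    final text with the author whose edit introduced it."""
    words, credit = [], []
    for author, text in revisions:
        new_words = text.split()
        matcher = difflib.SequenceMatcher(a=words, b=new_words)
        new_credit = []
        for op, i1, i2, j1, j2 in matcher.get_opcodes():
            if op == "equal":
                # Unchanged words keep their original attribution.
                new_credit.extend(credit[i1:i2])
            else:
                # Inserted or replacement words belong to this editor.
                new_credit.extend([author] * (j2 - j1))
        words, credit = new_words, new_credit
    return list(zip(words, credit))

history = [
    ("alice", "Sydney is a city in Australia"),
    ("bob",   "Sydney is the largest city in Australia"),
    ("carol", "Sydney is the largest city in Australia by population"),
]
for word, author in attribute(history):
    print(f"{word}: {author}")
```

Run on the sample history, no single author “owns” the final sentence: alice’s, bob’s and carol’s words interleave, which is exactly why licensing such a text to any one contributor’s benefit reads as theft.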


Herding Cats

That which governs least governs best. – Thomas Paine
I.

Nothing is perfect. Everything, this side of heaven, contains a flaw. The master rug makers of Persia go so far as to add a mistaken stitch into their carpets; perfection would be an insult to the greatness of God. For nearly everything else, and for nearly everyone else, we don’t have to worry about adding errors: we work from incomplete knowledge, we work from ignorance, and we work from prejudice. As Mark Twain noted, “It’s not what you don’t know that’ll hurt you, but what you know that ain’t so.” We believe we know so much; in truth we know nearly nothing at all. We have trouble discerning our own motivations – yet we constantly judge the motivations of others. Cognitive scientists have repeatedly demonstrated how we backfill our own memories to create a comfortable and pleasing narrative of our lives; this keeps us from drowning in despair, but it also allows us to be monsters who have no trouble sleeping soundly at night.

We constantly and impudently impugn the motives of others, carrying that attitude into the designs of systems which support community. We protect children from pedophiles; we protect ourselves from unsolicited emails; we protect communities from the excesses of emotion or behavior which – we believe – would rip them apart. Each of these filtering processes – many of them automated – serves to create a “safe space” for conversation and community. Yet community is at least as much about difference as it is about similarity. If every member of a community held to a unity of thought, no conversation would be possible; information – “the difference which makes a difference” – can only emerge from dissent. Any system which diminishes difference therefore necessarily diminishes the vitality of a community. Every act of communication within a community is both a promise of friendship and a cry to civil war. Every community sails between the Scylla and Charybdis of undifferentiated assent and complete fracture.

When people were bound by proximity – in their villages and towns – the pressure for the community to remain cohesive prevented most of the egregious separations from occurring, though periodically – and particularly since the Reformation – communities have split apart, divided on religious or ideological lines. In the post-Enlightenment era, with the opening of the Americas, divided communities could simply move away and establish their own particular Edens, though these too could fracture; schism follows schism in an echo of the Biblical story of the Confusion of Tongues. Rural communities could remain singular and united (at least, until they burst apart under the build-up of pressures), but urban polities had to move in another direction: tolerance. Amsterdam and London flourished in the eighteenth century because of the dissenting voices they tolerated in their streets. It was either that, or, as both had learned – to their horror – endless civil wars. This essential idea of the Enlightenment – that men could keep their own counsel, so long as they respected the beliefs of others – fostered democracy, science and capitalism, and elevated millions from misery and poverty. It is said that democratic nations never wage war against one another; while not entirely true, tolerance acts as a firewall against the most immediate passions of states. The alternative – repeated countless times throughout the 20th century – is a mountain of skulls.

Where people are connected electronically, freed both from the strictures of proximity and from the organic and cultural bounds of propriety that accompany face-to-face interactions (it is much easier to be rude to someone you’ve never met in the flesh), the natural tendency to schism is amplified. The checks against bad behavior lose their consequential quality. One can be rude, abrasive, even evil, because the mountain of skulls which piles up as the inevitable result of such psychopathology appears to lack the immediacy of a real, bleeding body. It has been argued that we need “to be excellent to each other,” or that we need to grow thicker skins. Both suggestions have some merit, but the truth lies somewhere in between.


While USENET, the thirty-year-old, Internet-wide bulletin board system, remains the archetype for online community – the place where the terms “flame”, “flame war” and “troll” originated in their current, electronic usages – USENET has long since been obsolesced by a million dedicated websites. We can learn a lot about the pathology of online communities by studying USENET, but the most important lesson we can draw involves the original online schism. In 1987, John Gilmore – one of the founding engineers of Sun Microsystems – wanted to start a USENET newsgroup to discuss topics related to illegal psychoactive drugs. USENET users must approve all requests for new newsgroups, and this highly polarizing topic, when put to a vote, was repeatedly rejected. Gilmore spent a few hours modifying the USENET code so that it could handle a new top-level hierarchy, “alt.*” This was designed to be the alternative to USENET, where anyone could start a newsgroup for any reason, anywhere. While many USENET sites tried to ban the alt. hierarchy from their servers, within a year’s time alt. became ubiquitously available. Everyone on USENET had a passion for some newsgroup which couldn’t be satisfied within its strict guidelines. To this day, the tightly moderated USENET and the free-wheeling, often obscene, and frequently illegal alt. hierarchy coexist side-by-side. Each has reinforced the existence of the other.

Qualities of both USENET and the alt. hierarchy have been embodied in the peer-produced encyclopedia-about-everything, Wikipedia. Like the alt. hierarchy, anyone can create an entry on any subject, and anyone can edit any entry on any subject (with a few exceptions, discussed below). However, like USENET, Wikipedia has moderators, who can choose to delete entries, or roll back the edits on an entry, and who act as “governors” – in the sense that they direct activity, rather than ruling over it (from the Greek kybernetes, “steersman,” which also gives us “cybernetics”). By any objective standard the system has worked remarkably well; Wikipedia now has nearly 1.5 million English-language articles, and continues growing at a nearly exponential rate. The strength of the moderation in Wikipedia is that it is nearly invisible; although articles do get deleted because they do not meet Wikipedia’s evolving standards (e.g., the first version of a biographical page about myself), it remains a triumph of tolerance, carefully maintaining a laissez-faire approach to the creation of content, applying a moderating influence only when the broad guidelines of Wikipedia (summed up in the maxim “don’t be a dick”) have been obviously violated. The community feels that it has complete control over the creation of content within Wikipedia, and this sense of investment – that Wikipedia truly is the product of the community’s own work – has made Wikipedia’s contributors its most earnest evangelists.

There is a price to be paid for this open-door policy: noise. Because Wikipedia is open to all, it can be vandalized, or filled with spurious information. While the moderators do their best to correct instances of vandalism, Wikipedia relies on the community to do this nitpicking work. (I have deleted vandalism on Wikipedia pages several times.) For the most part, it works well, though there are specific instances – such as on 31 July 2006, when Stephen Colbert urged viewers of his television program to modify Wikipedia entries to promote his own “political” views – when it falls down utterly. Wikipedia can withstand the random assaults of individuals, but, in its present form, it cannot hope to stand against thousands of individuals intent on changing its content in specific areas. Thus, in certain circumstances, Wikipedia moderators will “lock” certain entries, allowing them to be modified only by carefully designated individuals. Although Colbert meant his assault as a stunt, with no malicious intent, he pointed to the serious flaw of all open-door systems – they rely on the good faith of the vast majority of their users. If any polity decides to take action against Wikipedia, the system will suffer damage.

With a growing consciousness of the danger of open-door systems – and a sense that perhaps more moderation is better – Wikipedia cofounder Larry Sanger has launched his own competitor to Wikipedia, Citizendium. Starting with a “fork” of Wikipedia (that is, a selection of the entries thought “suitable” for inclusion in the new work), Citizendium will restrict posting in its entries to trusted experts in their fields. The goal is to create a higher-quality version of Wikipedia, with greater involvement from professional researchers and academics.

While a certain argument can be made that Wikipedia entries contain too much noise – many are poorly written, have no references, or even project a certain point-of-view – it remains to be seen if any differentiation between “professional” and “amateur” communities of knowledge production can be maintained in an era of hyperdistribution. If a film producer is now threatened by the rise of the amateur – that is, an enthusiast working outside the established systems of media distribution – won’t an academic (and by extension, any professional) also be under threat? The academy has always existed for two reasons: to expand knowledge, and to restrict it. Academic communities function under the same rules as all communities, the balancing act between uniformity and schism. The “standard bearers” in any community reify the orthodox tenets of their field, blocking the research of any outsiders whose work might threaten the functioning assumptions of the community. Yet, since Thomas Kuhn published The Structure of Scientific Revolutions we have known that science progresses (in Max Planck’s apt phrasing) “funeral by funeral.” Experts tend to block progress in a field; by extension, any encyclopedia which uses these same experts as the gatekeepers to knowledge acquisition will effectively hamstring itself from first principles. In the age of hyperintelligence, expertise has become a broadly accessible quality; it is not located in any particular community, but rather in individuals who may not be associated with any official institution. Noise is not the enemy; it is a sign of vitality, and something that we must come to accept as part of the price we pay for our newly-expanded capabilities. As Kevin Kelly eloquently expressed in Out of Control, “The perfect is the enemy of the good.” The question is not whether Wikipedia is perfect, but rather, is it good enough?
If it is – and that much must be clear by now – then Citizendium, as an attempt to make perfect what is already good enough, must be doomed to failure, out of tune with the times, fighting the trend toward the era of the amateur.

As Citizendium flowers and fails over the next year, it will be interesting to note how its community practices change in response to an ever-more-dire situation. The pressures of the community will force Citizendium to become more Wikipedia-like in its submissions and review policies. At the same time, additional instances of organized vandalism (we’ve only just started to see these) will drive Wikipedia toward a more restrictive submissions and editing policy. Citizendium overshot the mark from the starting line, and will need to crawl back toward the open-door policy, yet, as it does, it risks alienating the same experts it’s designed to defend. Wikipedia, starting from a position of radical openness, has only restricted access in response to some real threat to its community. Citizendium is proactive and presumes too much; Wikipedia is reactive (and for this reason will occasionally suffer malicious damage) but only modifies its access policies when a clear threat to the stability of the community has been demonstrated. Wikipedia is an anarchic horde, moving by consensus, unlike Citizendium, which is a recapitulation of the top-down hierarchy of the academy. While some will no doubt treasure the heavy moderation of Citizendium, the vast majority will prefer the noise and vitality of Wikipedia. A heavy hand versus an invisible one; this is the central paradox of community.


A well-run online community walks a narrow line between anarchy and authoritarianism. To encourage discussion and debate, a community must be encouraged to sit on a hand grenade that always threatens to explode, but never quite manages to go off. In general, it’s quite enough to put people into the same conversational space, and watch the sparks fly; stirring the pot is rarely necessary. Conversely, when the pot begins to boil over, someone has to be on hand to turn the heat down. Communities frequently manage this process on their own, with cool minds ready to reframe conversation in less inflammatory terms. This wisdom of communities is not innate; it is knowledge embodied within a community’s practices, something that each community must learn for itself. USENET lists, over the course of thirty years, have learned how to avoid the most obvious hot-button topics, and regular contributors to these lists have learned to filter out the outrageous flame-baiting of list trolls. But none of this community intelligence resides in a newly-founded community, so, in an absolute sense, the long-term health of any community depends strongly on the character and capabilities of its earliest members.

The founding members of a new community should not be arbitrarily selected; that would be gambling on the good behavior of individuals who, insofar as the community is concerned, have no track record. Instead, these founders need to be carefully vetted across two axes of significance: their ability to be provocative, and their capability to act like adults. These qualities usually don’t come as a neat package; any individual who has a surfeit of one is more than likely to be lacking in the other. However, once such “balanced” individuals have been identified and recruited, the community can begin its work.

After a time, the best of these individuals – whose qualities will become clear to the rest of the community – should be promoted to moderator status, assuming the Solomonic mantle as protectors and guardians of the community. This role is vital; a community should always know that they are functioning in a moderated environment, but this moderation should be so light-handed as to be nearly invisible. The presumption of observation encourages individuals to behave appropriately; the rare examples when a moderator is forced to act as a benevolent and trustworthy force for good should encourage imitation.

Hand-in-hand with the sense of confidence which comes from careful and gentle moderation, a community must feel empowered to create something that represents both their individual and collective abilities. The idea of “ownership,” when multiplied by a community-recognized sense of expertise, produces a strongly reinforcing behavior. Individuals who are able to share their expertise with a community – and help the community build its own expertise – will develop a very strong sense of loyalty to the community. Expertise can be demonstrated in the context of a bulletin board system, but these systems do not easily preserve the total history of an expert’s interactions within the community. A posting made today is lost in six months’ time; a wiki is forever. Thus, in addition to conversation – and growing naturally from it – the community should have the tools at its disposal to translate its conversation into something more permanent. Community members will quickly recognize those within its ranks who have the authority of expertise on any given subject, and they should be gently guided into making a record of that expertise. As that record builds, it develops a value of its own, beyond its immense value as a repository of expertise; it becomes the living embodiment of an individual’s dedication to the community. Over time, community members will come to see themselves as the true “content” of the community, both through their participation in the endless conversation of the community, and as the co-creators of the community’s collective intelligence.

This model has worked successfully for over a decade in some of the more notable electronic communities – particularly in the open-source software movement. The various communities around GNU/Linux, PHP and Python have all demonstrated that any community with room enough to pool the expertise of large numbers of dedicated individuals will build something of lasting value, and bring broad renown to its key contributors, moderators, and enthusiasts.

However, even in the most effective communities, schism remains the inevitable human tendency, and some conflicts – drawn from deep-seated philosophical or temperamental differences – cannot be resolved. Schism should not be embraced arbitrarily, but neither should it be avoided at all costs. Instead – as in the case of the alt. hierarchy – room should be made to accommodate the natural tendency to differentiate. Wikipedia will eventually fork into a hundred major variants, of which Citizendium is but the first. The Linux world has been divided into different distributions since its earliest years. Schism is a sign of life, indicating that there is something important enough to fight over. Schisms either resolve in an ecumenical unity, or persist and continue to divide; neither outcome is inherently preferable.

Every living thing struggles between static order and chaotic dissolution; it isn’t perfect, but then, nothing ever is. Even as we feel ourselves drawn to one extreme or another, wisdom wrought from experience (often painfully gained) checks our progress, and guides us forward, delicately, into something that is, in the best of worlds, utterly unexpected. The potential for novelty in any community is enormous; releasing that potential requires flexibility, balance, and presence. There are no promises of success. Like a newborn child, a new community is all potential – unbounded, unbridled, standing at the cusp of a unique wonder. We can set its feet on the path of wisdom; what comes after is unknowable, and, for this reason, impossibly potent.


I Am Not Your Google


It’s become difficult to locate an individual who hasn’t become expert in one way or another. The culture of expertise goes hand-in-hand with the culture of fame: you can become renowned for your own expertise, or you can bask in a reflected glow as you plumb the depths of another’s accomplishments. In either case, the result is the same: you become a deep well of knowledge about something – vital or trivial. Assessments of the value of expertise are broadly subjective: I may not care that you know all about Corvette Sting-Rays, or permafrost ecosystems, or currency fluctuations in Bolivia, but someone else almost certainly does.

While we pursue our own expertise to satisfy the designs of our own desires, there is always a second element in play: we want to be needed and valued for what we know. We achieve social standing within our social networks by providing instruments of value – acts and services – which reinforce our utility to the membership of the network. The greater the instrument of value, the higher the social standing, hence there is a constant pressure to deliver ever-higher value, a pressure that is placed on all members of a social network. Because of that pressure, the social network is constantly in motion, as members within the network gain or fall in standing, according to their perceived value. The operating principle is analogous to that old Hollywood saw, “You’re only as big as your next film.”

The greater your social standing, the greater the pressure to introduce instruments of value to your social network. This is an evolutionary arms race of sorts, because it eventually becomes impossible to outperform expectations; everything naturally reaches its own level. Members at the higher levels of a social network suffer from the “Burden of Omniscience” – because they know so much, they become the “go-to” member for the network. Inquiries requiring great expertise are invariably forwarded up to the members thought to be most competent to address them. While this reinforces the hierarchy of the social network, it also means that the hierarchy’s most expert members are also spending more and more of their time addressing the inquiries drawn from layers underneath them. Time is the only zero-sum quantity in human experience: time spent answering inquiries can’t be invested in extending expertise. Success carries within it the seeds of failure, for to the degree that any member of a social network becomes essential, to that same degree they will be hamstrung by the demands of the network. Yet, to refuse inquiries from the network carries another cost: every refusal decreases your utility to the social network. Turn down enough invitations to dance, and soon you’ll find yourself without any suitors.


I am known as an expert in the field of computing. The recognition of this expertise has its smaller consequences: when I visit my friends and family I tend to perform the sorts of computer maintenance tasks that they might find too difficult. Because I can perform these tasks, I feel as though I must; and because others hold me in esteem for my expertise, there is a pressure to perform to expectations. However, in the age of connected humanity, this pressure to perform is no longer bounded by proximity; anyone can ask for my help, anywhere, anytime. And quite often they do.

I have friends who are very competent with computers – more competent than 95% of the population – who nonetheless encounter questions that they can’t answer, or problems they can’t solve. These issues invariably make their way to me. Hardly a day goes by without one (or several) emails or IMs coming my way with obscure computing questions. Some of these questions are easily addressed and immediately answered. Others are harder to answer. This is where I start to feel the conflict between my desire to assert my expertise and my desire to extend it.

This week I have been writing computer programs, something I used to do on a daily basis, but which I now restrict to a few, carefully-chosen weeks every year. It’s very intense work, which requires a level of concentration and focus which, in my own experience, is entirely unique. Conforming to the dictates of the computer-as-medium requires a dedicated suppression of various aspects of my mind and my personality. Programming at this level – which is to say, inventing something that has never existed before – is a meditation of sorts, which requires a certain quality of mind. This quality is precisely anti-social – not misanthropy, but rather an almost autistic disinterest in the human world. When you embrace the soul of the machine, there’s little room for anything else. Yet that embrace is the only way to extend the expertise for which I have gained some modicum of renown. This essential paradox drove me to make an ironic reply to one friend, in response to his inquiry. “I AM NOT YOUR GOOGLE,” I wrote, but, even as I typed the words, I realized that I was lying.


This is the one recognizably universal quality of the present moment: we have a surfeit of information. It’s gone beyond “information at your fingertips,” the hacker dream of twenty years ago, to “information, anywhere, anytime, about anything.” Where Google first indexed all the pages of the Web, thereby allowing us to surf sensibly, Wikipedia delivered the corpus of human experience, presenting it in depth. Both of these tools have become absolutely indispensable; both are the first stops for anyone looking for some bit of knowledge. Google is raw, unfiltered data; Wikipedia is cooked, condensed, and formatted for human cognition. Google and Wikipedia: information and knowledge. Yet even these two are not enough. Information and knowledge are theory without practice; when knowledge is embodied in practice, it transforms into understanding. Practice is a uniquely human task, so understanding is a uniquely human quality. It can’t be written down – that simply translates understanding back into knowledge. Understanding must be imparted through a direct transmission of experience. That’s what mentoring is all about.

If one knew everything about everything, but could not express that knowledge usefully, nor mentor others into an understanding of that knowledge, that knowledge would have low instrumental value. Individuals rise in social networks because they can translate their expertise into understanding. Our social networks of expertise represent the natural emergence of a human strategy to grow into a more comprehensive understanding. In this they represent a technique which arguably dates from the emergence of language. Once it was possible to impart understanding linguistically, social networks of expertise were the inevitable result. Thus you have the cults, mystery schools and guilds of the ancient and mediaeval worlds.

My address book reflects my own network of expertise; I have a list of individuals whom I know I can contact – at any time – if I have a burning question that must be answered. Different individuals have different areas of expertise, and I will forward the inquiry to the individual that I deem most likely – given their weight in my social network – to provide a satisfactory answer. Some of these inquiries will be answered immediately; others will go long weeks before I receive a reply. If an inquiry goes too long without a reply, that individual falls in value within my own social network. And we’re all like this. We all do this, all the time. We are no longer bounded by proximity, so we have come to expect immediate replies to our inquiries. This also means that we’re expected to reply immediately to any inquiries which make their way to us.
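This informal routing-and-ranking behavior can be caricatured in a few lines of code. Everything here is an invented illustration – the names, the decay rate, the topic keys – but it captures the two rules described above: forward an inquiry to the highest-standing expert, and discount an expert’s standing when a reply comes back slowly.

```python
class ExpertiseNetwork:
    """Toy model of a personal network of experts: inquiries go to the
    highest-standing expert on a topic, and standing decays when a
    reply is slow in coming."""

    def __init__(self):
        self.weights = {}  # (person, topic) -> standing in the network

    def add(self, person, topic, weight=1.0):
        self.weights[(person, topic)] = weight

    def route(self, topic):
        """Return the person with the highest standing for this topic."""
        candidates = [(w, p) for (p, t), w in self.weights.items() if t == topic]
        return max(candidates)[1] if candidates else None

    def record_reply(self, person, topic, days_waited):
        # Every day spent waiting for an answer erodes that
        # person's standing for the topic.
        self.weights[(person, topic)] *= 0.9 ** days_waited

net = ExpertiseNetwork()
net.add("kim", "unix", weight=1.0)
net.add("lee", "unix", weight=0.8)
print(net.route("unix"))   # "kim" - currently the higher-standing expert
net.record_reply("kim", "unix", days_waited=14)
print(net.route("unix"))   # "lee" - the slow reply cost kim his standing
```

The caricature makes the social mechanics plain: standing is topic-specific, constantly repriced, and lost far faster than it is earned.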

The era of pervasive electronic communication acts as an intense amplifier for our social networks. We are harnessing our social networks to deliver understanding, on demand. This represents an enormous opportunity to increase our effectiveness: understanding must necessarily translate into effectiveness. Yet there is an enormous opportunity cost. To be an effective member of a modern social network means that we are buffeted from all sides, with everyone wanting us to share what we’ve got. This tension is already becoming one of the dominant features of 21st-century life; we’re constantly struggling to demonstrate our expertise and extend that expertise – only to find we can’t do both simultaneously. It’s a recipe for frustration.

What we’ll see now – as frustration levels increase – is the emergence of tools and techniques to manage the constant irritations of our newly amplified social networks. Frustration creates the friction which powers the engine of human creativity. The social networks which develop effective adaptations to this friction – and, perhaps, harness it – will increase their own effectiveness, incorporating the understandings gained into their own bodies of expertise. We are learning how to Google one another, and, in so doing, we are opening ourselves to an exploration in depth of the human universe of understanding.

Going into Syndication


Content. Everyone makes it. Everyone consumes it. If content is king, we are each the means of production. Every email, every blog post, every text message, all of these constitute production of content. In the earliest days of the web this was recognized explicitly; without a lot of people producing a lot of content, the web simply wouldn’t have come into being. Somewhere toward the end of 1995, this production formalized, and we saw the emergence of a professional class of web producers. This professional class asserted its authority over the audience on the basis of two undeniable strengths: first, it cultivated expertise; second, it maintained control over the mechanisms of distribution. In the early years of the Web, both of these strengths presented formidable barriers to entry. As we emerged from the amateur era of “pages about kitty-cats” into the branded web era of CNN.com, NYT.com, and AOL.com, the swarm of Internet users naturally gravitated to the high-quality information delivered through professional web sites. The more elite (and snobbish) of the early netizens decried this colonization of the electronic space by the mainstream media; they preferred the anarchic, imprecise and democratic community of newsgroups to the imperial aegis of Big Media.

In retrospect, both sides got it wrong. Anarchy did not replace order; nor did attention centralize around a suite of “portal” sites, though for at least a decade it seemed precisely that was happening. Nevertheless, the swarm has a way of consistently surprising us, of finding its way out of any box drawn up around it. If, for a period of time, it suited the swarm to cozy up to the old and familiar, this was probably due more to habit than to any deep desire. When thrust into the hyper-connected realm of the Web, our natural first reaction is to seek signposts, handholds against the onrush of so much that clamors about its own significance. In cyberspace you can implicitly trust the BBC, but when it comes to The Smoking Gun or Disinformation, that trust must be earned. Still, once that trust had been won, there was no going back. This is the essence of the process of media fragmentation. The engine that drives fragmentation is not increasing competition; it is increasing familiarization with the opportunities on offer.

We become familiar with online resources through “the Three Fs”. We find things, we filter them, we forward them along. Social networks evolve the media consumption patterns which suit themselves best; this is often not highly correlated with the content available from mainstream outlets. Over time, social networks tend to favor the obscure over the quotidian, as the obscure is the realm of the cognoscenti. This trend means that this fragmentation is both inevitable and bound to accelerate.

Fragmentation spreads the burden of expertise onto a swarm of nanoexperts. Anyone who is passionate, intelligent, and willing to make the attention investment to master the arcana of a particular area of inquiry can transform themselves into a nanoexpert. When a nanoexpert plugs into a social network that values this expertise (or is driven toward nanoexpertise in order to raise their standing within an existing social network), this investment is rewarded, and the connection between nanoexpert and network is strongly reinforced. The nanoexpert becomes “structurally coupled” with the social network – for as long as they maintain that expertise against all competitors. This transformation is happening countless times each day, across the entire taxonomy of human expertise. This is the engine which has deprived the mainstream media of their position of authority.

While the net gave every appearance of centralization, it never allowed a monopoly on distribution. That house was always built on sand. The bastion of expertise took longer to disintegrate, yet it too has fallen, buffeted by wave after wave of nanoexperts. With the rise of the nanoexpert, mainstream media have lost all of their “natural” advantages, yet they still have considerable economic, political and popular clout. We must examine how they could translate this evanescent power into something which can survive the transition to a world of nanoexperts.


While expertise has become a diffuse quality, located throughout the cloud of networked intelligence, the search for information has remained essentially unchanged for the past decade. Nearly everyone goes to Google (or a Google equivalent) as a first stop on a search for information. Google uses swarm intelligence to determine the “trust” value of an information source: the most “trusted” sites show up as the top hits in Google’s results, ranked by its PageRank algorithm. Thus, even though knowledge and understanding have become more widespread, the path toward them grows ever more concentrated. I still go to the New York Times for international news reporting, and the Sydney Morning Herald for local news. Why? These sources are familiar to me. I know what I’m going to get. That means a lot, because as the number of possible sources reaches toward infinity, I haven’t the time or the inclination to search out every possible source for news. I have come to trust the brand. In an era of infinite choice, a brand commands attention. Yet brands are being constantly eroded by the rise of the nanoexpert; the nanoexpert is persuaded by their own sensibility, not subject to the lure of a well-known brand. Although the brand may represent a powerful presence in the contemporary media environment, there is very little reason to believe this will be true a decade or even five years hence.
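The swarm-intelligence mechanism just described can be made concrete with a minimal sketch of PageRank: a page’s “trust” is the stationary probability that a random surfer lands on it, computed by power iteration over the link graph. The toy graph and parameter values below are illustrative only; Google’s production system is, of course, vastly more elaborate:

```python
# Minimal PageRank sketch: trust flows along links, so pages with
# more (and more trusted) inbound links accumulate more rank.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Invented toy web: two sites link to each other; a blog links out
# to both but receives no inbound links of its own.
toy_web = {
    "bbc": ["smokinggun"],
    "smokinggun": ["bbc"],
    "blog": ["bbc", "smokinggun"],
}
ranks = pagerank(toy_web)
```

In this toy graph the two mutually linked sites end up with more rank than the unlinked blog – the swarm’s linking behavior, not any editor, confers the “trust.”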

For this reason, branded media entities need to make an accommodation with the army of nanoexperts. They have no choice but to sue for peace. If these warring parties had nothing to offer one another, this would be a pointless enterprise. But each side has something impressive to offer up in a truce: the branded entities have readers, and the nanoexperts are constantly finding, filtering and forwarding things to be read. This would seem to be a perfect match, but for one paramount issue: editorial control. A branded media outlet asserts (with reason) that the editorial controls developed over a period of years (or, in the case of the Sydney Morning Herald, centuries) form the basis of a trust relationship with its audience. To disrupt or abandon those controls might do more than dilute the brand – doing so could quickly destroy it. No matter how authoritative a nanoexpert might be, all nanoexpert contributions represent an assault upon editorial control, because these works have been created outside of the systems of creative production which ensure a consistent, branded product. This is the major obstacle that must be overcome before nanoexperts and branded media can work together harmoniously.

If branded media refuse to accept the ascendancy of nanoexperts, they will find themselves entirely eroded by them. This argument represents the “nuclear option”, the put-the-fear-of-God-in-you representation of facts. It might seem completely reasonable to a nanoexpert, but it appears entirely suspect to the branded media, which see only increasing commercial concentration, not disintegration. For the most part, nanoexperts function outside systems of commerce; their currency is social standing. Nanoexpert economies of value are invisible to commercial entities; but that does not mean they don’t exist. If we convert to a currency of attention – again, considered highly suspect by branded media – we can represent the situation even more clearly: more and more of the audience’s attention is absorbed by nanoexpert content. (This is particularly true of audiences under 25 years old, who have grown to maturity in the era of the Web.)

The point cannot be made more plainly, nor would it do any good to soften the blow: this transition to nanoexpertise is inexorable – it is the ad-hoc behavior of the swarm of internet users. There’s only one question of any relevance: can this ad-hoc behavior be formalized? Can the systems of production of the branded media adapt themselves to an era of “peer production” by an army of nanoexperts? If branded media refuse to formalize these systems of peer production, the peer production communities will do so – and, in fact, many already have. Sites such as Slashdot, Boing Boing, and Federated Media Publishing have grown up around the idea that the nanoexpert community has more to offer microaudiences than any branded media outlet. Each of these sites gets millions of visitors, and while they may not match the hundreds of millions of visitors to the major media portals, what they lack in volume they make up for in their multiplicity; these are successful models, and they are being copied. The systems which support them are being replicated. The means of fragmentation are multiplying beyond any possibility of control.


A branded media outlet can be thought of as a network of contributors, editors and publishers, organized around the goal of gaining and maintaining audience attention. The first step toward an incorporation of peer production into this network is simply to open the gates of contribution to the army of nanoexperts. However, just because the gates to the city are open does not mean the army will wander in. They must be drawn in, seduced by something on offer. As commercial entities, branded media can offer to translate the coin of attention into real currency. This is already their function, so they will need to make no change to their business models to accommodate this new set of production relationships.

In the era of networks, joining one network to another is as simple as establishing the appropriate connections and reinforcing them through an exchange of value which weights those connections appropriately. Content flows into the brand, while currency flows toward the nanoexperts. This transition is simple enough, once editorial concerns have been satisfied. The issues of editorial control are not trivial, nor should they be sublimated in the search for business opportunities; businesses have built their brands around an editorial voice, and should seek only to associate with those nanoexperts who understand and are responsive to that voice. Both sides will need to be flexible; the editorial voice must become broader without disintegrating into a common yowl, while the nanoexperts must put aside the preciousness which they have cultivated in search of their expertise. Both parties surrender something they consider innate in order to benefit from the new arrangement: that’s the real nature of this truce. It may be that some are unwilling to accommodate this new state of affairs: for the branded media, it means the death of a thousand cuts; for the nanoexpert it means they will remain confined to communities where they have immense status, but little else to show for it. In both cases, they will face the competition of these hybrid entities, and, against them, neither group can hope to triumph. After a settling-out period, these hybrid beasts, drawing their DNA from the best of both worlds, will own the day.

What does this hybrid organization deliver? At the moment, branded media deliver a broad range of content to a broad audience, while nanoexperts deliver highly focused content to millions of microaudiences. How do these two pieces fit together? One of the “natural” advantages of branded media organizations springs from a decades-long investment in IT infrastructure, which has historically been used to distribute information to mass audiences. Yet, surprisingly, branded media organizations know very little about the individual members of their audience. This is precisely the inverse of the situation with the nanoexpert, who knows an enormous amount about the needs and tastes of the microaudience – that is, the social networks served by their expertise. Thus, there needs to be another form of information exchange between the branded media and the nanoexpert; it isn’t just the content which needs to be syndicated through the branded outlet, but the microaudiences themselves. This is not audience aggregation, but rather, an exploration in depth of the needs of each particular audience member. From this, working in concert, the army of nanoexperts and the branded media outlet can develop tools to deliver depth content to each audience member.

This methodology favors process over product; the relation between nanoexpert, branded media, and audience must necessarily co-evolve, working toward a harmony where each is providing depth information in order to improve the capabilities of the whole. (This is the essence of a network.) Audience members will assume an active role in the creation of a “feed” which serves just themselves, and, in this sense, each audience member is a nanoexpert – expert in their own tastes.

Such a system, when put into operation, makes it both possible and relatively easy to deliver commercial information of such a highly meaningful nature that it can no longer be called “advertising” in any classic sense of the word; rather, it will be considered a string of “opportunities.” These might include job offers, or investment opportunities, or experiences (travel & education), or the presentation of products. This is Google’s AdWords refined to the utmost degree, and can only exist if all three parties to this venture – nanoexpert, branded media, and audience members – have fully invested the network with information that helps the network refine and deliver just what’s needed, just when it’s wanted. The revenue generated by a successful integration of commerce with this new model of syndication will more than fuel its efforts.

When successfully implemented, such a methodology would produce an enviable, and likely unassailable, financial model, because we’re no longer talking about “reaching an audience”; instead, this hybrid media business is involved in millions of individual conversations, each of which evolves toward its own perfection. Individuals embedded in this network – at any point in this network – would find it difficult to leave it, or even resist it. This is more than the daily news, better than the best newspaper or magazine ever published; it is individual, and personal, yet networked and global. This is the emerging model for factual publishing.


The Three Fs


We all live in hierarchies. That’s our curse as primates. And always, ever always, status has been conferred on those in the know. To be among the cognoscenti is to possess a mystique, an allure, which confers authority and commands a high position in human hierarchies. Marketers understand this. Advertisers understand this. Now it’s time the rest of us learn it. Or rather, it’s time that we learn that this is the main force driving our social networks.


If you want to understand the emergent behavior of “always-on” users, observe what they already do. The ad-hoc techniques developed by the swarm of network users to manage the avalanche of media inevitably become the automated techniques of tomorrow. The first and most important of these emerging techniques is known today as “link sharing,” but the simplicity of this term belies its significance. In order to understand how important link sharing is, we must take a look at the ad-hoc behavior which it formalizes.

If the surveys are to be believed, we each spend up to two hours a day working with our electronic mail. Some of this electronic mail is dedicated to the minutiae of our business lives – meetings, planning, and execution of commercial activities – but, for many of us, it is also a continuously reinforced connection to our social networks of friends and family. Some of this correspondence is the simple reaffirmation of contact, but, increasingly, these emails contain little more than a URL to some piece of network-accessible content, be it a web page, or an MP3 audio file, or a video. They’re good for a few minutes’ diversion, and if we like what we see, we’re bound to pass it along.

This seems an innocuous activity, but it is the essence of the new era of the Internet. The entire idea of “viral” distribution of media is predicated on this behavior. If you’ve seen JibJab’s “This Land” – as eighty million people already have – or that video of two Chinese university students singing a Backstreet Boys tune, or the inarguably odd video of the exploding body of a beached whale, you have participated in viral distribution. Every joke forwarded – that being the first example of this phenomenon – forged an emergent web of social connections.

Social networks, flexible and dynamic, constantly reconfigure themselves based on the perceived value of each member’s relationships to every other member within the network. Laid out against this is another metric: expertise. One friend may be an omniscient source of information on IT issues, while another might be expert in dance culture, another, television, and so on. No one connection absorbs all of the attention within a social network; in an ideal situation, everyone contributes something utterly unique, drawn from their own strengths. Furthermore, because our digital selves are all fundamentally egomaniacs, clamoring for attention, recognition, and ascendancy in the social hierarchy, we’re constantly competing for attention within our social networks, each constantly trying to outdo one another with the newest, hippest, coolest thing. This constant struggle to maintain our position in an ever-changing social order produces a kind of selection pressure – not unlike biological evolution – that quickly winnows winners from losers. (Tabloid newspapers have been fighting this same battle for a hundred and fifty years, but now the capability – and, consequently, the conflict – has become pervasive.)


What are the observable characteristics of this behavior? It breaks down into three basic domains of activity:

A) Finding. A successful competitor for our limited attention knows how to find just those bits of information which are sure to excite interest. These individuals have deep knowledge in narrow fields – we might call them “nanoexperts.” A nanoexpert maintains connections into their community of interest; that’s their passion, and the wellspring of their capability.

B) Filtering. An expert absorbs a lot of information, and much of it is judged to be of little value – perhaps even annoying – to the social network which the expert serves. An expert knows how to judge not just the quality of information, but its relevance. This activity is not automatable; while Google can tell you if a website is popular, based on the number of links into it, Google can’t digest a tidbit of data and tell you if it’s of any significance. Salience is a characteristic of sapience. A good filter – like a good editor – improves the quality of information by cutting it down to size. (Sites like Digg, where users vote articles into front-page relevance, represent an attempt to automate filtering. But Digg displays no real expertise, and won’t until it dissolves into an ever-increasing folksonomy of baby Diggs.)

C) Forwarding. Once something has been found, once it has been weighed, it needs to be distributed. This is perhaps the most difficult (and most social) part of the process. We could easily blast everything we find to everyone we know. But we’d make a lot of enemies in the process, and destroy our rank in every social network. Instead, we dole out expertise parsimoniously, choosing where and when to reveal it, in whatever manner best supports and extends our social standing. Cognoscenti maintain their value in a social network as much by withholding information as by revealing it.
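The three activities above form a simple pipeline, which can be sketched in code. Everything here is invented for illustration – the item fields, the relevance scores, and the contact lists are stand-ins for judgments that, as the text notes, only a human nanoexpert can actually make:

```python
# A toy find → filter → forward pipeline. The "relevance" numbers
# stand in for the human judgment that no algorithm can automate.

def find(feed, interests):
    """Finding: pull in items matching the nanoexpert's areas of interest."""
    return [item for item in feed if item["topic"] in interests]

def filter_items(items, threshold=0.7):
    """Filtering: keep only items judged relevant enough to share."""
    return [item for item in items if item["relevance"] >= threshold]

def forward(items, network):
    """Forwarding: dole items out only to contacts who value that topic."""
    return {
        contact["name"]: [i["url"] for i in items
                          if i["topic"] in contact["interests"]]
        for contact in network
    }

feed = [
    {"topic": "dance", "relevance": 0.9, "url": "a"},
    {"topic": "it",    "relevance": 0.4, "url": "b"},
    {"topic": "tv",    "relevance": 0.8, "url": "c"},
]
network = [
    {"name": "kim", "interests": {"dance"}},
    {"name": "lee", "interests": {"tv"}},
]
shared = forward(filter_items(find(feed, {"dance", "it"})), network)
```

Note that forwarding is selective by design: blasting everything to everyone would, as the text observes, destroy one’s standing in the network.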

These three activities – the “Three Fs” of finding, filtering and forwarding – scaled up to the swarm of a billion Internet users, describe the world we see today. This is more than the “death of marketing,” more than a world where a few “cool-hunters” detect and amplify the trends of the mass culture. In this new social order, there is no mass market, no mass media, and no mass mind: instead, there are networks of experts, each feeding into collective networks of knowledge – social networks which, both within themselves and pitted against each other, struggle to raise their standing in the world.

As we move into a world where these ad-hoc techniques become formalized, supported by tools such as del.icio.us, Flock and – perhaps most significantly – Yahoo!, these link-sharing networks will become the individualized equivalent of the mainstream media. More and more of our precious attention is being taken up by content that’s been forwarded to us, and every day, in every way, we’re getting better at finding, filtering and forwarding. How the media industries of the present day – predicated on mass communication to mass audiences – negotiate the transition into a world of microaudiences, each fiercely guarded by an army of ever-vigilant nanoexperts, remains an open question.