Inflection Points

I: The Universal Solvent

I have to admit that I am in awe of iTunes University. It’s just amazing that so many well-respected universities – Stanford, MIT, Yale, and Uni Melbourne – are willing to put their crown jewels – their lectures – online for everyone to download. It’s outstanding when even one school provides a wealth of material, but as other schools add their own material, we begin to see some of the virtues of crowdsourcing. First, there’s a virtuous cycle: as more material is shared, more material becomes available to share. Once the virtuous cycle gets going, it’s all about a flight to quality.

When you have half a dozen – or a hundred – lectures on calculus, which one do you choose? The one featuring the best lecturer with the best presentation skills, the best examples, and the best math jokes – of course. This is my only complaint with iTunes University – you can’t rate the various lectures on offer. You can see which ones have been downloaded most often, but that’s not quite the same thing as knowing which calculus seminar or which sociology lecture is the best. So as much as I love iTunes University, I see it as only halfway there. Perhaps Apple didn’t want to turn iTunes U into a popularity contest, but, without that vital bit of feedback, it’s nearly impossible for us to winnow the wheat from the educational chaff.

This is something that has to happen inside the system; it could happen across a thousand educational blogs spread out across the Web, but then it’s too diffuse to be really helpful. The reviews have to be coordinated and collated – just as with RateMyProfessors.com.

Say, that’s an interesting point. Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University? If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings. And as iTunes University gets bigger and bigger, RateMyLectures.com becomes ever more important, the ultimate guide to the ultimate source of educational multimedia on the Internet. One needs the other to be wholly useful; without ratings iTunes U is just an undifferentiated pile of possibilities. But with ratings, iTunes U becomes a highly focused and effective tool for digital education.
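To make the distinction between popularity and quality concrete, here is a minimal sketch of how such a hypothetical ratings layer might rank lectures. The Lecture structure, the five-star scale and the Bayesian prior are all illustrative assumptions of mine – nothing Apple or anyone else has actually built.

```python
# A minimal sketch of how a hypothetical RateMyLectures.com might rank lectures.
# All names and numbers here are illustrative assumptions, not a real API.

from dataclasses import dataclass


@dataclass
class Lecture:
    title: str
    downloads: int        # what iTunes U already exposes
    ratings: list[float]  # 1-5 stars from viewers, which it doesn't


def bayesian_score(lecture: Lecture, prior_mean: float = 3.0, prior_weight: int = 10) -> float:
    """Average rating, pulled toward a neutral prior when there are few ratings,
    so a lecture with two 5-star votes doesn't outrank one with hundreds of 4.8s."""
    n = len(lecture.ratings)
    total = sum(lecture.ratings) + prior_mean * prior_weight
    return total / (n + prior_weight)


def best_lectures(lectures: list[Lecture]) -> list[Lecture]:
    # Rank by rating quality, not by raw download count.
    return sorted(lectures, key=bayesian_score, reverse=True)


if __name__ == "__main__":
    calculus = [
        Lecture("Calculus I, School A", downloads=90_000, ratings=[3.0] * 400),
        Lecture("Calculus I, School B", downloads=2_000, ratings=[4.8] * 150),
    ]
    for lec in best_lectures(calculus):
        print(f"{lec.title}: score {bayesian_score(lec):.2f}, downloads {lec.downloads}")
```

The point of the sketch is simply that a lightly-downloaded but highly-rated lecture rises to the top – which is exactly the signal download counts alone can never give you.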

Now let’s cast our minds ahead a few semesters: iTunes U is bigger and better than ever, and RateMyLectures.com has benefited from the hundreds of thousands of contributed reviews. Those reviews extend beyond the content in iTunes U, out into YouTube and Google Video and Vimeo and Blip.tv and wherever people are creating lectures and putting them online. Now anyone can come by the site and discover the absolute best lecture on almost any subject they care to research. The net is now cast globally; I can search for the best lecture on Earth, so long as it’s been captured and uploaded somewhere, and someone’s rated it on RateMyLectures.com.

All of a sudden we’ve imploded the boundaries of the classroom. The lecture can come from the US, or the UK, or Canada, or New Zealand, or any other country. Location doesn’t matter – only its rating as ‘best’ matters. This means that every student, every time they sit down at a computer, already has – or soon will have – the absolute best lectures available, globally. That’s just a mind-blowing fact. It grows very naturally out of our desire to share and our desire to share ratings about what we have shared. Nothing extraordinary needed to happen to produce this entirely extraordinary state of affairs.

The network is acting like a universal solvent, dissolving all of the boundaries that have kept things separate. It’s not just dissolving the boundaries of distance – though it is doing that – it’s also dissolving the boundaries of preference. Although there will always be differences in taste and delivery, some instructors are simply better lecturers – in better command of their material – than others. Those instructors will rise to the top. Just as RateMyProfessors.com has created a global market for the lecturers with the highest ratings, RateMyLectures.com will create a global market for the best performances, the best material, the best lessons.

That RateMyLectures.com is only a hypothetical shouldn’t put you off. Part of what’s happening at this inflection point is that we’re all collectively learning how to harness the network for intelligence augmentation – Engelbart’s final triumph. All we need do is identify an area which could benefit from knowledge sharing and, sooner rather than later, someone will come along with a solution. I’d actually be very surprised if a service a lot like RateMyLectures.com doesn’t already exist. It may be small and unimpressive now. But Wikipedia was once small and unimpressive. If it’s useful, it will likely grow large enough to be successful.

Of course, lectures alone do not an education make. Lectures are necessary but are only one part of the educational process. Mentoring and problem solving and answering questions: all of these take place in the very real, very physical classroom. The best lectures in the world are only part of the story. The network is also transforming the classroom from the inside out, melting it down, and forging it into something that looks quite a bit different from the classroom we’ve grown familiar with over the last 50 years.

II: Fluid Dynamics

If we take the examples of RateMyProfessors.com and RateMyLectures.com and push them out a little bit, we can see the shape of things to come. Stanford University and the Massachusetts Institute of Technology have placed their entire sets of lectures online through iTunes University, yet both institutions assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, but it discounts the possibility that some individual or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open” school, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.

In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In an age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them to make sense of it, contextualizing and informing their understanding, even as the students continue to work their way through the ever-growing set of information. The instructor cannot know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. The instructors facilitate and mentor, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.

The administration has gone, the instructor’s role has evolved; now what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If it can happen entirely online, that will be the classroom. If it requires substantial presence with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode, vanishing online, and explode: the world will become the classroom.

This, then, can already be predicted from current trends; as the network begins to destabilize the institutional hierarchies in education, everything else becomes inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by these students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.

III: Digital Citizenship

Obviously, much of what I’ve described here in the “melting down” of the educational process applies first and foremost to university students. That’s where most of the activity is taking place. But I would argue that it only begins with university students. From there – just like Facebook – it spreads across the gap between tertiary and secondary education, and into the high schools and colleges.

This is significant and interesting because it’s at this point that we, within Australia, run headlong into the Government’s plan to provide laptops for all year 9 through year 12 students. Some schools will start earlier; there’s a general consensus among educators that year 7 is the earliest time a student should be trusted to behave responsibly with their “own” computer. Either way, the students will be fully equipped and able to use all of the tools at hand to manage their own education.

But will they? Some of this is a simple question of discipline: will the students be disciplined enough to take an ever-more-active role in the co-production of their education? As ever, the question is neither black nor white; some students will demonstrate the qualities of discipline needed to allow them to assume responsibility for their education, while others will not.

But, somewhere along here, there’s the presumption of some magical moment during the secondary school years, when the student suddenly learns how to behave online. And we already know this isn’t happening. We see too many incidents where students make mistakes, behaving badly without fully understanding that the whole world really is watching.

In the early part of this year I did a speaking tour with the Australian Council of Educational Researchers; during the tour I did a lot of listening. One thing I heard loud and clear from the educators is that giving a year 7 student a laptop is the functional equivalent of giving them a loaded gun. And when we do this, we shouldn’t be surprised that there are a few accidental – or volitional – shootings.

I mentioned this in a talk to TAFE educators last week, and one of the attendees suggested that we needed to teach “Digital Citizenship”. I’d never heard the phrase before, but I’ve taken quite a liking to it. Of course, by the time a student gets to TAFE, the damage is done. We shouldn’t start talking about digital citizenship in TAFE. We should be talking about it from the first days of secondary education. And it’s not something that should be confined to the school: parents are on the hook for this, too. Even when the parents are not digitally literate, they can impart the moral and ethical lessons of good behavior to their children, lessons which will transfer to online behavior.

Make no mistake, without a firm grounding in digital citizenship, a secondary student can’t hope to make sense of the incredibly rich and impossibly distracting world afforded by the network. Unless we turn down the internet connection – which always seems like the first option taken by administrators – students will find themselves overwhelmed. That’s not surprising: we’ve taught them few skills to help them harness the incredible wealth available. In part that’s because we’re only just learning those skills ourselves. But in part it’s because we would have to relinquish control. We’re reluctant to do that. A course in digital citizenship would help both students and teachers feel more at ease with one another when confronted by the noise online.

Make no mistake, this inflection point in education is inevitably going to cross the gap between tertiary and secondary education. Students will be able to do for themselves in ways that were never possible before. None of this means that the teacher or even the administrator has necessarily become obsolete. But the secondary school of the mid-21st century may look a lot more like a website than a campus. The classroom will have a fluid look, driven by the teacher, the students and the subject material.

Have we prepared students for this world? Have we given them the ability to make wise decisions about their own education? Or are we like those university administrators who mutter about how RateMyProfessors.com has ruined all their carefully-laid plans? The world where students were simply the passive consumers of an educational product is coming to an end. There are other products out there, clamoring for attention – you can thank Apple for that. And YouTube.

Once we get through this inflection point in the digital revolution in education, we arrive in a landscape that’s literally mind-blowing. We will each have access to educational resources far beyond anything on offer at any other time in human history. The dream of life-long learning will be simply a few clicks away for most of the billion people on the Internet, and many of the four billion who use mobiles. It will not be an easy transition, nor will it be perfect on the other side. But it will be incredible, a validation of everything Douglas Engelbart demonstrated forty years ago, and an opportunity to create a truly global educational culture, focused on excellence, and dedicated to serving all students, everywhere.

The Alexandrine Dilemma

I: Crash Through or Crash

We live in a time of wonders, and, more often than not, remain oblivious to them until they fail catastrophically. On the 19th of October, 1999 we saw such a failure. After years of preparation, on that day the web-accessible version of Encyclopedia Britannica went on-line. The online version of Britannica contained the complete, unexpurgated content of the many-volume print edition, and it was freely available, at no cost to its users.

I was not the only person who dropped by on the 19th to sample Britannica’s wares. Several million others joined me – all at once. The Encyclopedia’s few servers suddenly succumbed to the overload of traffic – the servers crashed, the network connections crashed, everything crashed. When the folks at Britannica conducted a forensic analysis of the failure, they learned something shocking: the site had crashed because, within its first hours, it had attracted nearly fifty million visitors.

The Web had never seen anything like that before. Yes, there were search engines such as Yahoo! and AltaVista (and even Google), but destination websites never attracted that kind of traffic. Britannica, it seemed, had tapped into a long-standing desire for high-quality factual information. As the gold-standard reference work in the English language, Britannica needed no advertising to bring traffic to its web servers – all it need do was open its doors. Suddenly, everyone doing research, or writing a paper, or just plain interested in learning more about something tried to force themselves through Britannica’s too narrow doorway.

Encyclopedia Britannica ordered some more servers, and installed a bigger pipe to the Internet, and within a few weeks was back in business. Immediately Britannica became one of the most-trafficked sites on the Web, as people came through in search of factual certainty. Yet for all of that traffic, Britannica somehow managed to lose money.

The specifics of this elude my understanding. The economics of the Web are very simple: eyeballs equal money. The more eyeballs you have, the more money you earn. That’s as true for Google as for Britannica. Yet, somehow, despite having one of the busiest websites in the world, Britannica lost money. For that reason, just a few months after it freely opened its doors to the public, Britannica hid itself behind a “paywall”, asking seven dollars a month as a fee to access its inner riches. Immediately, traffic to Britannica dropped to perhaps a hundredth of its former numbers. Britannica did not convert many of its visitors into paying customers: there may be a strong desire for factual information, but even so, most people did not consider it worth paying for. Instead, individuals continued to search for a freely available, high-quality source of factual information.

Into this vacuum Wikipedia was born. The encyclopedia that anyone can edit has always been freely available, and, because of its use of the Creative Commons license, can be freely copied. Wikipedia was the modern birth of “crowdsourcing”, the idea that vast numbers of anonymous individuals can labor together (at a distance) on a common project. Wikipedia’s openness in every respect – transparent edits, transparent governance, transparent goals – encouraged participation. People were invited to come by and sample the high-quality factual information on offer – and were encouraged to leave their own offerings. The high-quality facts encouraged visitors; some visitors would leave their own contributions, high-quality facts which would encourage more visitors, and so, in a “virtuous cycle”, Wikipedia grew as large as, then far larger than Encyclopedia Britannica.

Today, we don’t even give a thought to Britannica. It may be the gold-standard reference work in the English language, but no one cares. Wikipedia is good enough, accurate enough (although Wikipedia was never intended to be a competitor to Britannica, by 2005 Nature was doing comparative testing of article accuracy) and is much more widely available. Britannica has had its market eaten up by Wikipedia, a market it dominated for two hundred years. It wasn’t the server crash that doomed Britannica; when the business minds at Britannica tried to crash through into profitability, that’s when they crashed into the paywall they themselves established. Watch carefully: over the next decade we’ll see the somewhat drawn-out death of Britannica as it becomes ever less relevant in a Wikipedia-dominated landscape.

Just a few weeks ago, the European Union launched a new website, Europeana. Europeana is a repository, a collection of the cultural heritage of Europe, made freely available to everyone in the world via the Web. From Descartes to Darwin to Debussy, Europeana hopes to become the online cultural showcase of European thought.

The creators of Europeana scoured Europe’s cultural institutions for items to be digitized and placed within its own collection. Many of these institutions resisted their requests – they didn’t see any demand for these items coming from online communities. As it turns out, these institutions couldn’t have been more wrong. Europeana launched on the 20th of November, and, like Britannica before it, almost immediately crashed. The servers overloaded as visitors from throughout the EU came in to look at the collection. Europeana has been taken offline for a few months, as the EU buys more servers and fatter pipes to connect it all to the Internet. Sometime early in 2009 it will relaunch, and, if its brief popularity is any indication, we can expect Europeana to become another important online resource, like Wikipedia.

All three of these examples prove that there is an almost insatiable interest in factual information made available online, whether the dry articles of Wikipedia or the more bouncy cultural artifacts of Europeana. It’s also clear that arbitrarily restricting access to factual information simply directs the flow around the institution restricting access. Britannica could be earning over a hundred million dollars a year from advertising revenue – that’s what it’s projected Wikipedia could earn, just from banner advertisements, if it ever accepted advertising. But Britannica chose to lock itself away from its audience. That is the one unpardonable sin in the network era: under no circumstances do you take yourself off the network. We all have to sink or swim, crash through or crash, in this common sea of openness.

I only hope that the European museums that have donated works to Europeana don’t suddenly grow possessive when the true popularity of their works becomes a proven fact. That will be messy, and will only hurt the institutions. Perhaps they’ll heed the lesson of Britannica; but it seems as though many of our institutions are mired in older ways of thinking, where selfishness and protecting the collection are seen as cardinal virtues. There’s a new logic operating: the more something is shared, the more valuable it becomes.

II: The Universal Library

Just a few weeks ago, Google took this idea to new heights. In a landmark settlement of a long-running copyright dispute with book publishers in the United States, Google agreed to pay a license fee to those publishers for their copyrights – even for books out of print. In return, the publishers are allowing Google to index, search and display all of the books they hold under copyright. Google already provides the full text of many books whose copyright has expired – its efforts scanning whole libraries at Harvard and Stanford have given Google access to many such texts. Each of these texts is indexed and searchable – just as with the books under copyright – but, in this case, the full text is available through Google’s book reader tool. For works under copyright but out-of-print, Google is now acting as the sales agent, translating document searches into book sales for the publishers, who may now see huge “long tail” revenues generated from their catalogues.

Since Google is available from every computer connected to the Internet (given that it is available on most mobile handsets, it’s available to nearly every one of the four billion mobile subscribers on the planet), this new library – at least seven million volumes – has become available everywhere. The library has become coextensive with the Internet.

This was an early dream both of the pioneers of personal computing and, later, of the Web. When CD-ROM was introduced, twenty years ago, it was hailed as the “new papyrus,” capable of storing vast amounts of information in a richly hyperlinked format. As the limits of CD-ROM became apparent, the Web became the repository of the hopes of all the archivists and bibliophiles who dreamed of a new Library of Alexandria, a universal library with every text in every tongue freely available to all.

We have now gotten as close to that ideal as copyright law will allow; everything is becoming available, though perhaps not as freely as a librarian might like. (For libraries, Google has established subscription-based fees for access to books covered by copyright.) Within another few years, every book within arm’s length of Google (and Google has many, many arms) will be scanned, indexed and accessible through books.google.com. This library can be brought to bear everywhere anyone sits down before a networked screen. This library can serve billions, simultaneously, yet never exhaust its supply of texts.

What does this mean for the library as we have known it? Has Google suddenly obsolesced the idea of a library as a building stuffed with books? Is there any point in going into the stacks to find a book, when that same book is equally accessible from your laptop? Obviously, books are a better form factor than our laptops – five hundred years of human interface design have given us a format which is admirably well-adapted to our needs – but in most cases, accessibility trumps ease-of-use. If I can have all of the world’s books online, that easily bests the few I can access within any given library.

In a very real sense, Google is obsolescing the library, or rather, one of the features of the library, the feature we most identify with the library: book storage. Those books are now stored on servers, scattered in multiple, redundant copies throughout the world, and can be called up anywhere, at any time, from any screen. The library has been obsolesced because it has become universal; the stacks have gone virtual, sitting behind every screen. Because the idea of the library has become so successful, so universal, it no longer means anything at all. We are all within the library.

III: The Necessary Army

With the triumph of the universal library, we must now ask: What of the librarians? If librarians were simply the keepers-of-the-books, we would expect them to fade away into an obsolescence similar to the physical libraries. And though this is the popular perception of the librarian, in fact that is perhaps the least interesting of the tasks a librarian performs (although often the most visible).

The central task of the librarian – if I can be so bold as to state something categorically – is to bring order to chaos. The librarian takes a raw pile of information and makes it useful. How that happens differs from situation to situation, but all of it falls under the rubric of library science. At its most visible, the book cataloging systems used in all libraries represent the librarian’s best efforts to keep an overwhelming amount of information well-managed and well-ordered. A good cataloging system makes a library easy to use, whatever its size, however many volumes are available through its stacks.

It’s interesting to note that books.google.com uses Google’s text search-based interface. Based on my own investigations, you can’t type in a Library of Congress catalog number and get a list of books under that subject area. Google seems to have abandoned – or ignored – library science in its own book project. I can’t tell you why this is, I can only tell you that it looks very foolish and naïve. It may be that Google’s army of PhDs does not include many library scientists. Otherwise why would it have made such a beginner’s mistake? It smells of an amateur effort from a firm which is not known for amateurism.

It’s here that we can see the shape of the future, both in the immediate and the longer term. People believe that because we’re done with the library, we’re done with library science. They could not be more wrong. In fact, because the library is universal, library science now needs to be a universal skill set, more broadly taught than at any time previous to this. We have become a data-centric culture, and are presently drowning in data. It’s difficult enough for us to keep our collections of music and movies well organized; how can we propose to deal with collections that are a hundred thousand times larger?

This is not just some idle speculation; we are rapidly becoming a data-generating species. Where just a few years ago we might generate only a small amount of data on a given day or in a given week, these days we generate data almost continuously. Consider: every text message sent, every email received, every snap of a camera or camera phone, every slip of video shared amongst friends. It all adds up, and it all needs to be managed and stored and indexed and retrieved with some degree of ease. Otherwise, in a few years’ time the recent past will have disappeared into the fog of unsearchability. In order to have a connection to our data selves of the past, we are all going to need to become library scientists.

All of which puts you in a key position for the transformation already underway. You get to be the “life coaches” for our digital lifestyle, because, as these digital artifacts start to weigh us down (like Jacob Marley’s lockboxes), you will provide the guidance that will free us from these weights. Now that we’ve got it, it’s up to you to tell us how we find it. Now that we’ve captured it, it’s up to you to tell us how we index it.

We have already taken some steps along this journey: much of the digital media we create can now be “tagged”, that is, assigned keywords which provide context and semantic value for the media. We each create “clouds” of our own tags which evolve into “folksonomies”, or home-made taxonomies of meaning. Folksonomies and tagging are useful, but we lack the common language needed to make our digital treasures universally useful. If I tag a photograph with my own tags, that means the photograph is more useful to me; but it is not necessarily more broadly useful. Without a common, public taxonomy (a cataloging system), tagging systems will not scale into universality. That universality has value, because it allows us to extend our searches, our view, and our capability.
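To make that gap concrete, here is a minimal sketch of the difference between a personal folksonomy and a shared, public taxonomy. The tag names and the mapping table are purely illustrative assumptions of mine, not any existing system.

```python
# A minimal sketch of the gap between personal tags (a folksonomy) and a shared,
# public taxonomy. The tag names and the mapping table are illustrative assumptions.

personal_tags = {
    "IMG_2041.jpg": {"nan", "holiday", "beach-house"},
    "IMG_2042.jpg": {"grandmother", "xmas"},
}

# A common, public vocabulary - the cataloging layer my own tags lack.
shared_taxonomy = {
    "nan": "Person/Family/Grandparent",
    "grandmother": "Person/Family/Grandparent",
    "xmas": "Event/Holiday/Christmas",
    "holiday": "Event/Holiday",
    "beach-house": "Place/Residence/Holiday home",
}


def catalogue(tags: set[str]) -> set[str]:
    """Translate idiosyncratic personal tags into shared categories, so a search
    for 'Grandparent' finds photos tagged 'nan' as well as 'grandmother'."""
    return {shared_taxonomy[t] for t in tags if t in shared_taxonomy}


for photo, tags in personal_tags.items():
    print(photo, "->", sorted(catalogue(tags)))
```

My tags remain useful to me either way; it is the shared mapping – the cataloging system – that makes them useful to everyone else, and that mapping is precisely the librarian’s craft.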

I could go on and on, but the basic point is this: wherever data is being created, that’s the opportunity for library science in the 21st century. Since data is being created almost absolutely everywhere, the opportunities for library science are similarly broad. It’s up to you to show us how it’s done, lest we drown in our own creations.

Some of this won’t come to pass until you move out of the libraries and into the streets. Library scientists have to prove their worth; most people don’t understand that they’re slowly drowning in a sea of their own information. This means you have to demonstrate other ways of working that are self-evident in their effectiveness. The proof of your value will be obvious. It’s up to you to throw the rest of us a life-preserver; once we’ve caught it, once we’ve caught on, your future will be assured.

The dilemma that confronts us is that for the next several years, people will be questioning the value of libraries; if books are available everywhere, why pay the upkeep on a building? Yet the value of a library is not the books inside, but the expertise in managing data. That can happen inside of a library; it has to happen somewhere. Libraries could well evolve into the resource the public uses to help manage their digital existence. Librarians will become partners in information management, indispensable and highly valued.

In a time of such radical and rapid change, it’s difficult to know exactly where things are headed. We know that books are headed online, and that libraries will follow. But we still don’t know the fate of librarians. I believe that the transition to a digital civilization will founder without a lot of fundamental input from librarians. We are each becoming archivists of our lives, but few of us have training in how to manage an archive. You are the ones who have that knowledge. Consider: the more something is shared, the more valuable it becomes. The more you share your knowledge, the more invaluable you become. That’s the future that waits for you.

Finally, consider the examples of Britannica and Europeana. The demand for those well-curated collections of information far exceeded even the wildest expectations of their creators. Something similar lies in store for you. When you announce yourselves to the broader public as the individuals empowered to help us manage our digital lives, you’ll doubtless find yourselves overwhelmed with individuals who are seeking to benefit from your expertise. What’s more, to deal with the demand, I expect Library Science to become one of the hot subjects of university curricula of the 21st century. We need you, and we need a lot more of you, if we ever hope to make sense of the wonderful wealth of data we’re creating.

That Business Conversation

Case One: Lists

I moved to San Francisco in 1991, because I wanted to work in the brand-new field of virtual reality, and San Francisco was the epicenter of all commercial development in VR. The VR community came together for meetings of the Virtual Reality Special Interest Group at San Francisco’s Exploratorium, the world-famous science museum. These meetings included public demonstrations of the latest VR technology, interviews with thought-leaders in the field, and plenty of opportunity for networking. At one of the first of those meetings I met a man who impressed me by his sheer ordinariness. He was an accountant, and although he was enthusiastic about the possibilities of VR, he wasn’t working in the field – he was simply interested in it. Still, Craig Newmark was pleasant enough, and we’d always engage in a few lines of conversation at every meeting, although I can’t remember any of these conversations very distinctly.

Newmark met a lot of people – he was an excellent networker – and fairly quickly built up a nice list of email addresses for his contacts, whom he kept in touch with through a mailing list. This list, known as “Craig’s List”, became a de facto bulletin board for the core web and VR communities in San Francisco. People would share information about events in town, or observations, or – more frequently – they’d offer up something for sale, like a used car or a futon or an old telly.

As more people in San Francisco were sucked into the growing set of businesses which were making money from the Web, they too started reading Craig’s List, and started contributing to it. By the middle of 1995, there was too much content to be handled neatly in a mailing list, so Newmark – who, like nearly everyone else in the San Francisco Web community, had some basic web authoring skills – created a very simple website which allowed people to post their own listings. Newmark offered this service freely – his way of saying “thank you” to the community, and, equally important, his way of reinforcing all of the social relationships he’d built up in the last few years.

Newmark’s timing was excellent; Craigslist came online just as many, many people in San Francisco were going onto the Web, and Craigslist quickly became the community bulletin board for the city. Within a few months you could find a flat for rent, a car to drive, or a date – all in separate categories, neatly organized in the rather-ugly Web layout that characterized nearly all first-generation websites. If you had a car to sell, a flat to sublet, or you wanted a date – you went to Craigslist first. Word of mouth spread the site around, but what kept it going was the high quality of the transactions people had through the site. If you sold your bicycle through Craigslist, you’d be more likely to look there first if you wanted to buy a moped. Each successful transaction guaranteed more transactions, and more success, and so on, in a “virtuous cycle” which quickly spread beyond San Francisco to New York, Los Angeles, Seattle, and other well-connected American cities.

From the very beginning, everything on Craigslist was freely available – it cost nothing to list an item or to view listings. The only thing Newmark ever charged for was job listings – one of the most active areas on Craigslist, particularly in the heyday of the Web bubble. Job listings alone paid for all of the rest of the operational costs of Craigslist – and left Newmark with a healthy profit, which he reinvested into the business, adding capacity and expanding to other cities across America. Within a few years, Newmark had a staff of nine people, all working out of a house in San Francisco’s Sunset District – which, despite its name, is nearly always foggy.

While I knew about Craigslist – it was hard not to – I didn’t use it myself until 2000, when I left my professorial housing at the University of Southern California. I was looking for a little house in the Hollywood Hills – a beautiful forested area in the middle of the city. I went onto Craigslist and soon found a handful of listings for house rentals in the Hollywood Hills, made some calls and – within about 4 hours – had found the house of my dreams, a cute little Swiss cottage that looked as though it fell out of the pages of “Heidi”. I moved in at the beginning of June 2000, and stayed there until I moved to Sydney in 2003. It was perhaps the nicest place I’d ever lived, and I found it – quickly and efficiently – on Craigslist. My landlord swore by Craigslist; he had a number of properties, scattered throughout the Hollywood Hills, and always used Craigslist to rent his properties.

In late 2003, when I first came to Australia on a consulting contract – and before I moved here permanently – I used Craigslist again, to find people interested in sub-letting my flat while I worked in Sydney. Within a few days, I had the couple who’d created Dora the Explorer – a very popular children’s television show – living in my house, while they pursued a film deal with a major studio. When I came back to Los Angeles to settle my affairs, I sold my refrigerator on Craigslist, and hired a fellow to move the landlord’s refrigerator back into my flat – on Craigslist.

In most of the United States, Craigslist is the first stop for people interested in some sort of commercial transaction. It is now the 65th busiest website in the world, the 10th busiest in the United States – putting it up there with Yahoo!, Google, YouTube, MSN and eBay – and has about nine billion page views a month. None of the pages have advertising, nor are there any charges, except for job listings (and real estate listings in New York to keep unscrupulous realtors from flooding Craigslist with duplicate postings). Although it is still privately owned, and profits are kept secret, it’s estimated that Craigslist earns as much as USD $150 million from its job listings – while, with a staff of just 24 people, it costs perhaps a few million a year to keep the whole thing up and running. Quite a success story.

But everything has a downside. Craigslist has had an extraordinary effect on the entire publishing industry in North America. Newspapers, which funded their expensive editorial operations from the “rivers of gold” – car advertisements, job listings and classified ads – have found themselves completely “hollowed out” by Craigslist. Although the migration away from print to Craigslist began slowly, it has accelerated in the last few years, to the point where most people, in most circumstances, will prefer to place a free listing on Craigslist rather than a paid listing in a newspaper. The listing will reach more people, and will cost them nothing to do so. That is an unbeatable economic proposition – unless you’re a newspaper.

It’s estimated that upwards of one billion dollars a year in advertising revenue is being lost to the newspapers because of Craigslist. This money isn’t flowing into Craig Newmark’s pocket – or rather, only a small amount of it is. Instead, because the marginal cost of posting an ad to Craigslist is effectively zero, Newmark is simply using the disruptive quality of pervasive network access to completely undercut the newspapers, while, at the same time, providing a better experience for his customers. This is an unbeatable economic proposition, one which is making Newmark a very rich man, even while it drives the Los Angeles Times ever closer to bankruptcy.

This is not Newmark’s fault, even if it is his doing. Newmark had the virtue of being in the right place (San Francisco) at the right time (1995) with the right idea (a community bulletin board). Everything that happened after that was driven entirely by the community of Craigslist’s users. This is not to say that Newmark isn’t incredibly responsive to the needs of the Craigslist community – he is, and that responsiveness has served him well as Craigslist has grown and grown. But if Newmark hadn’t thought up this great idea, someone else would have. Nothing about Craigslist is even remotely difficult to create. A fairly ordinary web designer would be able to duplicate Craigslist’s features and functionality in less than a week’s worth of work. (But why bother? It already exists.) Newmark was servicing a need that no one even knew existed until after the service had been created. Today, it seems perfectly obvious.

In a pervasively networked world, communities are fully empowered to create the resources they need to manage their lives. This act of creation happens completely outside of the existing systems of commerce (and copyright) that have formed the bulwarks of industrial age commerce. If an entire business sector gets crushed out of existence as a result, it’s barely even noticed by the community. This incredible empowerment – which I term “hyperempowerment” – is going to be one of the dominant features of public life in the 21st century. We have, as individuals and as communities, been gifted with incredible new powers – really, almost mutant ‘super powers’. We use them to achieve our own ends, without recognizing that we’ve just laid a city to waste.

Craigslist has not taken off in Australia. There are Craigslist sites for the “five capital cities” of Australia, but they’re only very infrequently visited. And, because they are only infrequently visited, they haven’t been able to build up enough content or user loyalty to create the virtuous cycle which has made Craigslist such a success in the United States. Why is this? It could be that the Trading Post has already got such a hold on the mindset of Australians that it’s the first place they think to place a listing. The Trading Post’s fees are low (fifty cents for a single non-car item), and it’s widely recognized, reaches a large community, etc. So that may be one reason.

Still, organizations like Fairfax and NEWS are scared to death of Craigslist. Back in 2004, Fairfax Digital launched Cracker.com.au, which provides free listings for everything except cars and jobs, which point back into the various paid advertising Fairfax websites. Australian newspaper publishers have already consigned classified advertising to the dustbin of history; they’re just waiting for the axe to fall. When it does, the Trading Post – among the most valuable of Telstra/Sensis properties – will be almost entirely worthless. Telstra’s stockholders will scream, but the Australian public at large won’t care – they’ll be better served by a freely available resource which they’ve created and which they use to improve their business relations within Australia.

Case Two: Listings

In order to preserve business confidentiality, I won’t mention the name of my first Australian client, but they’re a well-known firm, publishers of travelers’ guides. The travel business, when I came to it in early 2006, was nearly unchanged from its form of the last fifty years: you send a writer to a far-away place, where they experience the delights and horrors of life, returning home to put it all into a manuscript which is edited, fact-checked, copy-edited, typeset, published and distributed. Book publishing is a famously human-intensive process – it takes an average of eighteen months for a book from a mainstream publisher to reach the marketplace, because each of these steps takes time, effort and a lot of dollars. Nevertheless, a travel guide might need to be updated only twice a decade, and with global distribution it has always been fairly easy to recover the investment.

When I first met with my client, they wanted to know what might figure into the future of publishing. It turns out they knew the answer better than I did: they quickly pointed me to a new website, TripAdvisor.com. Although it is a for-profit website – earning money from bookings made through it – the various reviews and travel information provided on TripAdvisor.com are “user generated content,” that is, provided by the folks who use TripAdvisor.com. Thus, a listing for a particular hotel will contain many reviews from people who have actually stayed at the hotel, each of whom has their own peccadilloes, needs, and interests. Reading through a handful of the reviews for any given hotel will give you a fairly rounded idea of what the establishment is really like.

This model of content creation and distribution is the exact opposite of the time-honored model practiced by travel publishers. Instead of an authoritative reviewer, the reviewing task is “crowdsourced” – literally given over to the community of users – to handle. The theory is that with enough reviews, some cogent body of opinion would emerge. While this seems fanciful on the face of it, it’s been proven time and again that this is an entirely successful model of knowledge production. Wikipedia, for example, has built an entire and entirely authoritative encyclopedia from user contributions – a body of knowledge far larger and at least as accurate as its nearest competitor, Encyclopaedia Britannica.

It’s still common for businesses to distrust user generated content. Movie studios nicknamed it “loser generated content”, even as their audiences turned from the latest bloated blockbuster toward YouTube. Britannica pooh-poohed Wikipedia, until an article in Nature, that bastion of scientific reporting, indicated that, on average, a Wikipedia article was nearly as accurate as a given article in Britannica. (This report came out in December 2005. Today, it’s likely an article in Wikipedia would be more accurate than an article in Britannica.) In short, businesses reject the “wisdom of crowds” at their peril.

We’ve only just discovered that a well-networked body politic has access to deep reservoirs of very specific knowledge; in some peculiar way, we are all boffins. We might be science boffins, or knitting boffins, or gearheads, or simply know everything that’s ever been said about Stoner Rock. It doesn’t matter. We all have passions, and now that we have a way of sharing these passions with the world-at-large, this “collective intelligence” far outclasses the particulars of any professional organization seeking to serve up little slices of knowledge. This is a general challenge confronting all businesses and institutions in the 21st century. It’s quite commonplace today for a patient to walk into a doctor’s surgery knowing more about the specifics of an illness than the doctor does; this “Wikimedicine” is disparaged by medical professionals – but the truth is that an energized and well-networked community generally does serve its members better than any particular professional elite.

So what to do about travel publishing in the era of TripAdvisor.com, and WikiTravel (another source of user-generated tourist information), and so on? How can a business possibly hope to compete with the community it hopes to profitably serve? When the question is put like this, it seems insoluble. But that simply indicates that the premise is flawed. This is not an us-versus-them situation, and here’s the key: the community, any community, respects expertise that doesn’t attempt to put on the airs of absolute authority. That travel publisher has built up an enormous reservoir of goodwill and brand recognition, and, simply by changing its attitude, could find a profitable way to work with the community. Publishers are no longer treated like Moses, striding down from Mount Sinai, commandments in hand. Publishing is a conversation, a deep engagement with the community of interest, where all parties are working as hard as they can to improve the knowledge and effectiveness of the community as a whole.

That simple transition from shoveling books out the door into a community of knowledge building has far-reaching consequences. The business must refashion its own editorial processes and sensibilities around the community. Some of the job of winnowing the wheat from the chaff must be handed to the community, because there’s far too much for the editors to handle on their own. Yet the editors must be able to identify the best work of the community, and give that work pride of place, in order to improve the perceived value of their role within the community.

Does this mean that the travel guide book is dead? A book is not dynamic or flexible, unlike a website. But neither does a book need batteries or an internet connection. Books have evolved through half a millennium of use to something that we find incredibly useful – even when resources are available online, we often prefer to use books. They are comfortable and very portable.

The book itself may be changing. It may not be something that is mass-produced in lots of tens of thousands; rather, it may be individually printed for a community member, drawn from their own needs and interests. It represents their particular position and involvement, and is thus utterly personal. The technology for single-run publishing is now widespread; it isn’t terribly expensive to print a single copy of a book. When that book can reflect the best editorial efforts of a brand known for high-quality travel publications plus the very best of the reviews and tips offered by an ever-growing community of travelers, it becomes something greater than the sum of its parts, a document in progress, an ongoing evolution toward greater utility. It is an encapsulation of a conversation at a particular moment in time, necessarily incomplete, but, for that reason, intensely valuable.
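As a rough illustration of how such a single-run guide might be assembled from the two streams – the publisher’s edited entries and the community’s best-rated contributions – here is a minimal sketch. The data shapes, the interests filter and the rating threshold are assumptions of mine, not anything the publisher has built.

```python
# A minimal sketch of assembling a single-run, personalised travel guide from two
# streams: the publisher's edited entries and the best-rated community reviews.
# The data shapes and the rating threshold are illustrative assumptions only.

from typing import TypedDict


class Entry(TypedDict):
    place: str
    text: str
    rating: float    # community rating, 0-5
    editorial: bool  # True if written or vetted by the publisher's editors


def build_guide(entries: list[Entry], interests: set[str], min_rating: float = 4.0) -> list[Entry]:
    """Keep every editorial entry that matches the traveller's interests, plus
    community contributions that the crowd has rated highly."""
    picked = [
        e for e in entries
        if e["place"] in interests and (e["editorial"] or e["rating"] >= min_rating)
    ]
    # Editors' entries lead each section; the community fills in around them.
    return sorted(picked, key=lambda e: (e["place"], not e["editorial"], -e["rating"]))


if __name__ == "__main__":
    pool: list[Entry] = [
        {"place": "Hanoi", "text": "Old Quarter walking tour", "rating": 4.6, "editorial": True},
        {"place": "Hanoi", "text": "Best pho near the lake", "rating": 4.8, "editorial": False},
        {"place": "Hue", "text": "Imperial city by bicycle", "rating": 3.2, "editorial": False},
    ]
    for entry in build_guide(pool, interests={"Hanoi"}):
        print(entry["place"], "-", entry["text"])
```

The design choice that matters here is simply that editorial judgment and community contribution are combined rather than opposed: the editors set the frame, the community supplies the freshest detail.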

Conversation is the mode not just for business communications, but for all business in the 21st century. Businesses which cannot seize on the benefits of communication with the communities they serve will simply be swept aside (like newspapers) by communities in conversation. It is better to be in front of that wave, leading the way, than to drown in the riptide. But this is not an easy transition to make. It involves a fundamental rethinking of business practices and economic models. It’s a choice that will confront every business, everywhere, sometime in the next few years.

Case Three: Delisted

My final case study involves a recent client of mine, a very large university in New South Wales. I was invited in by the Director of Communications to consult on a top-down redesign of the university’s web presence. After considerable effort and expenditure, the university had learned that its website was more-or-less unusable, particularly when compared against those of its competitors. It took users too many clicks to find the information they wanted, and that information wasn’t collated well, forcing visitors to traverse the site over and over to find the information they might want on a particular program of study. The new design would streamline the site, consolidate resources, and help prospective students quickly locate the information they would need to make their educational decisions.

That was all well and good, but a cursory investigation of web usage at the university indicated a larger and more fundamental problem: students had simply stopped using the online resources provided by the university, beyond the bare minimum needed to register for classes. The university had failed to keep up with innovations in the Web, falling dramatically out-of-step with its student population, who are all deeply engaged in emailing, social networking, blogging, photo sharing, link sharing, video sharing, and crowdsourcing. Even more significantly, the faculty of the university had set up many unauthorized web sites – using university computing resources – to provide web services that the university had not been able to offer. Both students and faculty had “left the farm” in search of the richer pastures found outside the carefully maintained walls of university computing. This collapse in utility has led to a “vicious cycle,” for the less the student or faculty member uses university resources, the less relevant they become, moving in a downward spiral which eventually sees all of the important knowledge creation processes of the university happening outside its bounds.

As the relevant information about the university (except what the university says about itself) escapes the confines of university resources, another serious consequence emerges: search engines no longer put the university at the top of search queries, simply because the most relevant information about the university is no longer hosted by the university. The organization has lost control of the conversation because it neglected to stay engaged in that conversation, tracking where and how its students and faculty were using the tools at hand to engage themselves in the processes of learning and knowledge formation. A Google search on a particular programme at the university could turn up a student’s assessment of the programme as the most relevant result, rather than the university’s authorized page.

This is a bigger problem than the navigability of a website, because it directly challenges the university’s authority to speak for itself. In the United States, the website RateMyProfessors.com has become the bane of all educational institutions, because students log onto the site and provide (reasonably) accurate information about the pedagogical capabilities of their instructors. An instructor who is a great researcher but a lousy teacher is quickly identified on this site, and students steer clear, having learned from their peers the pitfalls of a bad decision. On the other hand, students flock to lectures by the best lecturers, and these professors become hot items, either promoted to stay in place, or lured away by strong counter-offers. The collective intelligence of the community is running the show now, and that voice will only become stronger as better tools are developed to put it to work.

What could I offer as a solution for my client? All I could do was prescribe some bitter medicine. Yes, I told them, go forward with the website redesign – it is both necessary and useful. But I advised them to use that redesign as a starting point for a complete rethink of the services offered by the university. Students should be able to blog, share media, collaborate and create knowledge within the confines of the university, and it should be easier to do that inside the university than anywhere else. Only when the grass is greener in the paddock will they be able to bring the students and faculty back onto the farm.

Furthermore, I advised the university to create the space for conversation within the university. Yes, some of it will be defamatory, or vile, or just unpleasant to hear. But the alternative – that this conversation happens elsewhere, outside of your ability to monitor and respond to it – would eventually prove catastrophic. Educational institutions everywhere – and all other institutions – are facing similar choices: do they ignore their constituencies or engage with them? Once engaged, how does that change the structure and power flows within their institutions? Can these institutions reorganize themselves, so that they become more permeable, pliable and responsive to the communities which they serve?

Once again, these are not easy questions to answer. They touch on the fundamental nature of institutions of all varieties. A commercial organization has to confront these same questions, though the specifics will vary from organization to organization. The larger an organization grows, the louder the cry for conversation, and the more pressing the need to answer it. The largest institutions in Australia are the most vulnerable to this sudden change in attitudes, because it is here that sudden self-organization within the body politic is most likely to rise up and challenge them.

Conclusion: Over?

As you can see, the same themes appear and reappear in each of these three case studies. In each, some industry sector or institution confronts a pervasively networked public which can out-think, out-maneuver and massively out-compete any institution formed in an era before the rise of the network. The balance of power has shifted decisively into the hands of the networked public.

The natural reaction of institutions of all stripes is to resist these changes; institutions are inherently conservative, seeking to cling to what has worked in the past, even if the past is no longer any guide to the future. Let me be very clear on this point: resistance is futile, and worse, the longer you resist, the stronger the force you will confront. If you attempt to dam up the tide of change, you will only ensure that the ensuing deluge will be that much greater. The pressure is rising; we are already pervasively networked in Australia, with nearly every able adult owning a mobile phone, with massive and growing broadband penetration, and with an increasing awareness that communities can self-organize to serve their own needs.

Something’s got to give. And it’s not going to be the public. They can’t be whipped or cowed or forced back into antique behaviors which no longer make sense to them. Instead, it is up to you, as business leaders, to embrace the public, engaging them in a continuous conversation that will utterly transform the way you do business.

No business is ever guaranteed success, but unless you embrace conversation as the essential business practice of the 21st century, you will find someone else, more flexible and more open, stealing your business away. It might be a competitor, or it might be your customers themselves, fed up with the old ways of doing business, and developing new ways to meet their own needs. Either way, everything is about to change.

Bass Ackward

I

There is a phrase that rings out across the meeting rooms of Silicon Valley so frequently it has an almost comic quality. Comic, because all replies to this phrase are lies, damned lies, and spreadsheets. Yet this phrase has become the axis mundi, around which orbits the enormous influence of California’s venture capital community.

“What’s your business model?”

It seems an innocent question. Businesses, after all, must have some mechanism in place to earn money. Manufacturers make things. Retailers sell things. Creatives license things. All very neat, straightforward, and – through the clarity of hindsight – absolutely simple. Yet radio had no business model for almost 20 years after its invention. Commercial radio did not emerge until the mid-1920s, when advertising and sponsorship drove the development of an industry. The personal computer, born in 1975, had no real business model behind it until VisiCalc was released in 1979. Commodore, Apple and Tandy sold tens of thousands of computers to hobbyists, but the spreadsheet created an industry.

This story can be told again and again. There’s that famous line attributed to IBM’s Thomas J. Watson, who – perhaps apocryphally – predicted a world market for “maybe five computers.” Or HP’s executives knocking back Steve Wozniak’s suggestion that HP manufacture a personal computer – they didn’t see the market for it. (HP is now the second largest producer of personal computers, worldwide.) We could blame these ridiculous miscalculations on a lack of foresight – on the peculiarly human habit of imagining an eternal present, where nothing ever changes. But time is change. Nothing remains the same. Novelty emerges continuously, often from the most unexpected quarters.

So why, then, when confronted with something new, does anyone ask, “What’s your business model?” How, with any confidence, could anyone know? Here’s the uncomfortable truth: no one knows. Instead, entrepreneurs lie, dissemble, and build spreadsheets which, like the fabric of the universe, emerge from random quantum noise, hoping that no one can see through to the reality of the situation – nothing truly novel has a business model.

This makes entrepreneurship less an exercise in creativity than in salesmanship: it is up to the entrepreneur to convince venture capitalists that yes, this wholly novel invention is well understood, and the revenues from it can be calculated with a formula. While charismatic entrepreneurs can make that claim seem believable, they cannot make it true.

This friction between novelty and predictability forms the essential feature of a “disruptive” technological innovation; novelty must emerge before its benefits can be forecast. An invention, in its earliest days, has not yet grown into its full properties. We do not ask of children, “What’s your business model?” Why, then, do we demand an answer when confronted by novelty?

II

We have just passed through an era of failed Internet business models. In the explosion of novelty which followed the advent of the Web in the mid-1990s, the charisma of the Web led many venture capitalists to behave irrationally, predicting too much upside for innovations which simply were not that novel. When – as was bound to happen – most of these businesses failed, the venture capitalists resolved to do better next time, and thus the mantra – “What’s your business model?” – began its steady echo throughout Silicon Valley.

In other cases, the causes of failure can be laid directly at the feet of these same venture capitalists, who forced immature innovations into “exit strategies” – either acquisition or an initial public offering. But innovation, like human maturity, cannot be hurried along. Grow up too quickly, and a lifetime of therapy follows. Push an innovation where it doesn’t belong, and it fails, catastrophically. Time is needed: time to nurture the innovation, and time for careful observation. That observation tells the entrepreneur how the innovation is actually being used by the world. Before that happens – and it will normally take some years – any attempt to “guide” the innovation will thwart its true potential.

Novelty is a constructivist process. Like a child, intent on learning about the world by playing in the world, the novel innovation must be free to explore its own capabilities. It does this through the agency of the many individuals and organizations who adopt the innovation for their own ends. The role of the entrepreneur (and the venture capitalist) during this phase of development is simply to keep the innovation in an enriched environment, constantly introducing new scenarios and new communities which might benefit from it. As William Gibson wrote, “the street finds its own uses for things” – uses its makers never intended. Entrepreneurs must surrender an innovation to the world at large if they expect that innovation to come into its own. Innovations nurture their own language, coming into being hand-in-hand with the words that make them apprehensible, sensible, and predictable. Only after this has happened can any exploration of business models begin.

III

In recent months I’ve talked to individuals working to revitalize the film industry in Australia. Their approach? Think up ways to make filmmaking look like less of a gamble than it really – always – is. So they’ll bombard investors with spreadsheets, surveys and financial models, in an effort to answer the eternal question: “Will I make my money back?” Most films lose money in their theatrical release – here in Australia, and everywhere else – but that hasn’t kept the studios from earning lots of money; the money’s not in the films themselves, but in all the ancillary licensing and distribution deals the films enable. That’s not a business model that emerged overnight: the motion picture studios nearly collapsed in the 1970s as they foraged for a business model that could thrive in a world thoroughly colonized by television. Eventually, after the success of Star Wars and the VCR (which the studios fought, until it emerged that the VCR would make them more money than they’d ever earned in theatrical release), the business model became clear: make intellectual properties and license the hell out of them.

Why would the technology industry insist on a form of surety afforded to no other industry? Why would venture capitalists demand something they know, in their heart of hearts, is all smoke and mirrors? Why can’t they simply say, “We don’t care about business models. We’re looking for novel innovations with the capacity to emerge into successful businesses”? Part of it comes down to training: most venture capitalists have MBAs, and that education has made them painfully aware of the difference between successful and unsuccessful business models. Furthermore, having been so badly burned in the Web 1.0 bubble, venture capitalists are naturally suspicious of anything that doesn’t seem immediately substantial. Here we see the paradox: venture capitalists lack the discernment to know whether, in the long term, any innovation has substance. No one does.

We need to enter an era where we simply do not care about business models. Entrepreneurs need to build something, get it out there, and let the street find its own uses for it. They have to sit back, listen intently, and let things emerge on their own, in their own good time. That’s the lesson of Flickr, which started as a game and ended as part of Yahoo! That’s the lesson of del.icio.us, which started as a project to allow individuals to share their ever-growing lists of bookmarks, before it, too, became part of Yahoo! And that’s the lesson of Wikipedia, which began as an alternative to a locked-up Encyclopedia Britannica, and matured to become the 16th most-visited site on the Internet. None of these, in their earliest incarnations, revealed the potential of what they would become. The street had not yet found its own uses for them. In hindsight, everything seems perfectly obvious. But an innovation, raw and new, cannot be judged on its merits or its models: only the sunshine of time and the rain of the street can grow value from novelty.