The Soul of Web 2.0

Introduction: In The Beginning

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus PageMaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful.  But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’.  Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century.  Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web.  Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real.  I was one of them.  From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu.  This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web.  It wasn’t as functional as Xanadu – copyright management was a solved problem with Xanadu, whereas on the Web it continues to bedevil us – and Xanadu’s links were two-way affairs: you could follow a link from its destination back to its source.  But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died.  The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released.  ‘The Perfect is the Enemy of the Good’, and nowhere is it clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s.  When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits per second to 56 kilobits per second.  That’s the line for all of the traffic heading from one coast to the other.  I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits per second – trillions of bits per second, a million-fold increase.  And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand.  And I wasn’t impressed.  In July 1993 very little content existed for the Web – just a handful of sites, mostly academic.  Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense.  I walked away from the computer that July afternoon wanting more.  Hypertext systems I’d seen before.  What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy.  Instead of a handful of sites, there were now hundreds.  There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site in the list.  By Friday evening I was finished.  I had surfed the entire Web.  It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993.  Then things began to explode.

From October on I became a Web evangelist.  My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic.  That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities.  As someone familiar walked in the door at the Salon, I’d walk up to them and take them over to my computer.  “What’s something you’re interested in?” I’d ask.  They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica” and I’d go to the newly-created category index of the Web, known as Yahoo! – still running out of a small lab on the Stanford University campus – type in their interest, and up would come at least a few hits.  I’d click on one, watch the page load, and let them read.  “Wow!” they’d say.  “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos.  All I did was hook people by their own interests.  What happened in January 1994 in San Francisco is what would happen throughout the world in January 1995 and January 1996, and is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented.  The individual, with their needs, their passions, their opinions, their desires and their goals, is always paramount.  We tend to forget this, or overlook it, or just plain ignore it.  We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate.  It’s not that we should ignore these considerations, but they are always secondary.  The Web is a ground for being.  Individuals do not present themselves as receptacles to be filled.  They are souls looking to be fulfilled.  This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web.  I’ll now break some of these lessons down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication.  I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late night missives from insomniac friends, that sort of thing.  As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared.  The sharing instinct is innate and immediate.  We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it.  We’ve always known this; it’s part of being human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another.  It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby.  Everyone carries that hundred and fifty around inside of them.  Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with.  It’s automatic, requires no thought.  We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing.  Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten.  We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us.  It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does that work with creations that may be Web-based, or similarly constrained?  We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter.  You have to do more than request sharing.  You have to think through the entire goal of sharing, from the user’s perspective.  Are they sharing this because it’s interesting?  Are they sharing this because they want company?  Are they sharing this because it’s a competition, a contest, or a collaboration?  Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project.  What is it about the design of your work that excites them to share it with others?  Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential?  In other words, is there space only for one, or is there room to spread the word?  Why would anyone want to share your work?  You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question.  How will your work be shared?

Your works do not exist in isolation.  They are part of a continuum of other works.  Where does your work fit into that continuum?  How do the instructor and student approach that work?  Is it a top-down mandate?  Or is it something that filters up from below as word-of-mouth spreads?  How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected.  Is it simply via email – do all the students have email addresses?  Do they know the email addresses of their friends?  Or do you want your work shared via SMS?  A QR code, perhaps?  Or Facebook or Twitter or, well, who knows?  And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today.  It becomes painfully obvious when it’s been overlooked.  For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend.  There was simply no way to do that.  (I don’t know if this has changed recently.)  That made the iPad version of The Australian significantly less functional than its website version – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes.  The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts.  Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd.  Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this.  Everyone’s there, but no one is wholly aware of anyone else’s presence.  You might see an edit to a page, or a new post on the discussion page for a particular topic, but that’s as close as people come to one another.  Most of the connecting for the Wikipedians – the folks who, behind the scenes, make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose.  These are the social networks: Facebook, MySpace, LinkedIn, and so on.  In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication.  But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless.  Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting.  There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths.  Where you can poll your friends on Facebook, on Twitter you can poll a planet.  How do I solve this problem?  Where should I eat dinner tonight?  What’s going on over there?  These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles.  It’s not Facebook-versus-Twitter; it is not tight connections versus loose connections.  It’s a bit of both.  Where does your work benefit from a tight collective of connected individuals?  Is it some sort of group problem-solving?  A creative activity that really comes into its own when a whole band of people play together?  Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms?  If a task keeps you constantly thinking of your friends, that’s the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog, or playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’ – that’s the kind of task which benefits from loose connectivity.  Not every project will need both kinds of connecting, but almost every one will benefit from one or the other.  We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed.  (We are also more likely to reinforce each others’ prejudices and preconceptions, but that’s a longstanding matter which technology cannot help but amplify.)  Life is meaningful because we, together, give it meaning.  Life is bearable because we, together, bear the load for one another.  Human life is human connection.

The Web today is all about connecting.  That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it.  So how do your projects allow your users to connect?  Does your work leave them alone, helpless, friendless, and lonely?  Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic?  Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica.  That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun.  It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute.  There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy.  (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.)  By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of what its contributors have to offer.  For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook.  This is less because of Wikipedia-as-a-resource than it is because we all share a sense of pride-in-ownership of Wikipedia.  Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles.  Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us.  This is a powerful logic, an attraction which transcends the rational.  People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine that a time will come when Wikipedia is complete.  If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia.  Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than in 2005.  With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights.  Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth.  It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought.  It will mean that we have captured the better part of human knowledge in a form accessible to all.  That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’.  It is a work-in-progress.  Google understands this and releases interminable beta versions of every product.  More than this, it means that nothing needs to offer all the answers.  I would suggest that nothing should offer all the answers.  Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you.  It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility.  There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done.  This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system.  The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up.  TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments.  RateMyProfessors.com is the holy terror of the academy in the United States.  Each of these websites has had to design systems which allow users to self-regulate peer contributions.  In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags it for later moderation.  Wikipedia promulgated a directive that strongly encouraged contributors to provide a footnote linking to supporting material.  TripAdvisor gives anonymous reviewers a lower ranking.  eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade.  Each of these is a social solution to a social problem.
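
To make the ‘report this post’ technique concrete, here is a minimal sketch in Python.  Everything in it – the names, the three-report threshold, the moderation queue – is an illustrative assumption rather than any real site’s implementation: readers flag a contribution, enough flags hide it provisionally, and a human moderator makes the final call.

```python
# A minimal, hypothetical sketch of community flagging plus human moderation.
from collections import defaultdict

FLAG_THRESHOLD = 3            # assumed: hide after three distinct reports

flags = defaultdict(set)      # post_id -> set of user_ids who reported it
hidden = set()                # posts hidden pending a moderator's decision
moderation_queue = []         # posts awaiting a human look

def report_post(post_id, reporter_id):
    # Record a reader's report; each reader counts only once per post.
    flags[post_id].add(reporter_id)
    if post_id not in moderation_queue:
        moderation_queue.append(post_id)      # always surface it to moderators
    if len(flags[post_id]) >= FLAG_THRESHOLD:
        hidden.add(post_id)                   # the crowd self-regulates...

def moderate(post_id, keep):
    # ...but a human makes the final call.
    if post_id in moderation_queue:
        moderation_queue.remove(post_id)
    if keep:
        hidden.discard(post_id)
    else:
        hidden.add(post_id)
    flags.pop(post_id, None)
```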

Web2.0 is not a technology.  It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil.  It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention.  Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a Web2.0 concept that should guide you in your design work is iteration.  Nothing is ever complete, nor ever perfect.  The perfect is the enemy of the good, so if you wait for perfection, you will never release.  Instead, watch your users, see if they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work.  In their more uncharitable moments, do they abuse the freedoms you have given them?  If so, how can you redesign your work, and ‘nudge’ them into better behavior?  It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem.  And although Moses came down from Mount Sinai with all ten commandments, you cannot and should not expect to get it right on a first pass.  Instead, release, observe, adapt, and re-release.  All releases are soft releases, everything is provisional, and nothing is quite perfect.  That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter.  Although they seem to be similar, they couldn’t be more different.  Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself.  If you want to have access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page.  Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut your application down if they don’t like it, or perceive it as somehow competitive with Facebook.  Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach.  From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users.  Anyone could do anything they wanted with these Tweets – though Twitter placed restrictions on commercial re-use of the data.  Twitter provided very clear (and remarkably straightforward) instructions on how to access their data, and threw the gates open wide.

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks.  People have written computer programs that send a Tweet when they are about to crash, created vast art projects which allow the public to participate from anywhere around the world, and even built a little belt, worn by a pregnant woman, which sends out a Tweet every time the baby kicks!  It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself.  Twitter has become a building block: when you write a program which needs to send a message, you use Twitter.  Facebook isn’t a building block.  It’s a monolith.
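
To show what ‘Twitter as a building block’ might look like in code, here is a minimal sketch of the crash-announcing program mentioned above.  The posting URL, the credentials and the doomed risky_work function are placeholders and assumptions – real use would need the messaging service’s actual endpoint and authentication – but the shape of the idea really is this small.

```python
# A hedged sketch: a program that announces its own crash with a short status
# message.  POST_URL and AUTH are placeholders, not a real API; substitute the
# authenticated posting method of whatever messaging service you actually use.
import sys
import traceback

import requests  # third-party HTTP client

POST_URL = "https://example.org/statuses/update"  # placeholder endpoint
AUTH = None   # real use requires the service's credentials (e.g. OAuth)

def send_status(message):
    # Trim to tweet length and fire off the announcement; failures to announce
    # are swallowed, since the program is already going down.
    try:
        requests.post(POST_URL, data={"status": message[:140]}, auth=AUTH, timeout=5)
    except requests.RequestException:
        pass

def risky_work():
    # Hypothetical application logic that may blow up.
    return 1 / 0

if __name__ == "__main__":
    try:
        risky_work()
    except Exception:
        send_status("About to crash: " + traceback.format_exc().splitlines()[-1])
        sys.exit(1)
```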

How do you build for openness?  Consider: another position the user might occupy is someone trying to use your work as a building block within their own project.  Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again?  Or is it opaque, seamless, and closed?  What about the data you collect, data the user has generated?  Where does that live?  Can it be exported and put to work in another application, or on another website?  Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful).  The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them.  If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
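
As an illustration of that last point, here is a minimal sketch – standard-library Python only, with invented data and URLs – of what leaving the door open can mean in practice: the same user-contributed material is exposed both as a REST-style JSON resource and as an RSS feed, so that someone else can pick it up and build upon it.

```python
# A minimal, hypothetical sketch: one small data set, two open doors (JSON and RSS).
import json
from wsgiref.simple_server import make_server
from xml.sax.saxutils import escape

# Invented store of the contributions your site has collected.
CONTRIBUTIONS = [
    {"title": "Gardening in small spaces", "link": "http://example.org/1"},
    {"title": "Backyard astronomy", "link": "http://example.org/2"},
]

def as_rss(items):
    # Render the contributions as a bare-bones RSS 2.0 document.
    entries = "".join(
        "<item><title>%s</title><link>%s</link></item>"
        % (escape(item["title"]), escape(item["link"]))
        for item in items
    )
    return ('<?xml version="1.0"?><rss version="2.0"><channel>'
            "<title>Contributions</title>" + entries + "</channel></rss>")

def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    if path == "/api/contributions":              # REST-style JSON resource
        body, ctype = json.dumps(CONTRIBUTIONS).encode("utf-8"), "application/json"
    elif path == "/feed":                         # RSS view of the same data
        body, ctype = as_rss(CONTRIBUTIONS).encode("utf-8"), "application/rss+xml"
    else:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    start_response("200 OK", [("Content-Type", ctype)])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()   # e.g. http://localhost:8000/feed
```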

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’  You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user.  These are not precisely the same Web2.0 domains others might identify.  That’s because Web2.0 has become a very ill-defined term.  It can mean whatever we want it to mean.  But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another.  In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create.  We need to make room for them.  If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

What Ever Happened to the Book?

For Ted Nelson

I: Centrifugal Force

We live in the age of networks.  Wherever we are, five billion of us are continuously and ubiquitously connected.  That’s everyone over the age of twelve who earns more than about two dollars a day.  The network has us all plugged into it.  Yet this is only the more recent, and more explicit network.  Networks are far older than this most modern incarnation; they are the foundation of how we think.  That’s true at the most concrete level: our nervous system is a vast neural network.  It’s also true at a more abstract level: our thinking is a network of connections and associations.  This is necessarily reflected in the way we write.

I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982.  Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts.  Like it or not, we implicitly reference other texts with every word we write.  It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts.  It’s the secret to our success.  Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth.  He never got it built.  But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.

As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed qualities as a medium which had hitherto been unsuspected.  While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure.  Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone.  Do we answer?  Do we click and follow?  A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost.  The linear text is constantly tugged at by a secondary, ‘centrifugal’ force that tries to tear the reader away from the inertia of the text and on into another space.  The more heavily linked a particular hypertext document is, the greater this pressure.

Consider two different documents that might be served up in a Web browser.  One of them is an article from the New York Times Magazine.  It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links.  Many of these links point back to other New York Times articles.  This article stands alone.  It is a hyperdocument, but it has not embraced the capabilities of the medium.  It has not been seduced.  It is a spinster, of sorts, confident in its purity and haughty in its isolation.  This article is hardly alone.  Nearly all articles I could point to from any professional news source display the same characteristics of separateness and resistance to connecting with the medium they employ.  We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized.  Every link presents an escape route, and a potential loss of income.  Hence, links are kept to a minimum, the losses staunched.  Disappointingly, this has become a model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium.  The tone has been set.

On the other hand, consider an average article in Wikipedia.  It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links.  Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they’re unfamiliar with, or explore some tangential bit of knowledge, but there also will be plenty of links that face out, into the rest of the Web.  This is a hyperdocument which has embraced the nature of the medium, which is not afraid that the pressure of linkage will lure readers away.  Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention.  Without this competition of intentions, Wikipedia is itself an example of another variety of purity, the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.

Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug.  Landing somewhere with a paucity of links doesn’t constrain your ability to move non-linearly.  If nothing else, the browser’s ‘Back’ button continually offers that opportunity, as do all of your bookmarks, the links that lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site.  In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads.  In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it.  This is the fundamental tension of all hypertext, but here heightened and amplified because it is resisted and forbidden.  It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.

This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents.  It is most obvious in the way we now absorb news.  Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper.  Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on.  We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way.  The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole.  Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper ‘vehicle’.

The newspaper as we have known it has been shredded.  This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext.  We are the ones who feel the lure of the link; no machine can do that.  Newspapers made the brave decision to situate themselves as islands within a sea of hypertext.  Though they might believe themselves singular, they are not the only islands in the sea.  And we all have boats.  That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.

The lure of the link has a two-fold effect on our behavior.  With its centrifugal force, it is constantly pulling us away from wherever we are.  It also presents us with an opportunity cost.  When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment.  If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it.  Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this?  Does my need and interest outweigh all of the other demands upon my attention?  Can I focus?

In most circumstances, we will decline the challenge.  Whatever it is, it is not salient enough, not alluring enough.  It is not so much that we fear commitment as we feel the pressing weight of our other commitments.  We have other places to spend our limited attention.  This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”.  It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.

The emergence of the ‘tl;dr’ phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span.  Attention spans are not shortening: ten-year-olds will still drop everything to read a nine-hundred-page fantasy novel for eight days.  Instead, attention has entered an era of hypercompetitive development.  Twenty years ago only a few media clamored for our attention.  Now everything from video games to Chatroulette to real-time Twitter feeds to text messages demands our attention.  Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, all figuring into the calculation we make when we decide to go all in or hold back.

The most obvious effect of this hypercompetitive development of attention is the shortening of the text.  Under the tyranny of ‘tl;dr’ three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment.  More and more, our diet of text comes in these ‘bite-sized’ chunks.  Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything.  The truth is more complex.  Our diet will continue to consist of a mixture of short and long-form texts.  In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form.  It is digestible.  But it need not be vacuous.  Countless specialty blogs deliver highly-concentrated texts to audiences who need no introduction to the subject material.  They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit.  Here the phenomenon of ‘tl;dr’ reveals its Achilles’ Heel: the shorter the text, the less invested you are.  You give way more easily to centrifugal force.  You are more likely to navigate away.

There is a cost incurred both for substance and the lack thereof.  Such are the dilemmas of hypertext.

II:  Schwarzschild Radius

It appears inarguable that 2010 is the Year of the Electronic Book.  The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market.  Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package.  Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world.  The electronic book is an inevitability.

At this point a question needs to be asked: what’s so electronic about an electronic book?  If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar.  Too familiar.  This is not an electronic book.  This is ‘publishing in light’.  I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning.  An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters.  It doesn’t even necessarily begin with that translation.  Instead, first consider the text qua text.  What is it?  Who is it speaking to?  What is it speaking about?

These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light.  That act of murder would give us less than we had before, because texts published in light essentially disavow the medium within which they are situated.  They are less useful than typeset texts, purposely stripped of their utility to be shoehorned into a new medium.  This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader.  Nor does it make the electronic book an intrinsically alluring object.  That’s an interesting point to consider, because hypertext is intrinsically alluring.  The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who had any relationship to the text.  If an electronic book does not offer a new relationship to the text, then what precisely is the point?  Portability?  Ubiquity?  These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring.  This is the visible difference between a book that has been printed in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is impossibly alluring.  At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.

Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium.  But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe.  Today’s publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium.  For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.

What does the electronic book look like?  Does it differ at all from the hyperdocuments we are familiar with today?  In fifteen years of design experimentation, we’ve learned a lot of ways to present, abstract and play with text.  All of these are immediately applicable to the electronic book.  The electronic book should represent the best of what 2010 has to offer and move forward from that point into regions unexplored.  The printed volume took nearly fifty years to evolve into its familiar hand-sized editions.  Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book.  We shouldn’t try to constrain our idea of what an electronic book can be based upon what the book has been.  Over the next few years, our innovations will surprise us.  We won’t really know what electronic books look like until we’ve had plenty of time to play with them.

The electronic book will not be immune from the centrifugal force which is inherent to the medium.  Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument.  Yet we come to books with a sense of commitment.  We want to finish them.  But what, exactly, do we want to finish?  The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does.  So does an electronic book have a beginning and an end?  Or is it simply a densely clustered set of texts with a well-defined path traversing them?  From the vantage point of 2010 this may seem like a faintly ridiculous question.  I doubt that will be the case in 2020, when perhaps half of our new books are electronic books.  The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book.  There is no way that the electronic book can remain apart, indifferent and pure.  It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader.  More of a gradient than a boundary.

It remains unclear how any such construction can constitute an economically successful entity.  Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe.  The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen.  This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear.  Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.

We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books.  (For those not familiar with astrophysics, the Schwarzschild Radius is the boundary to a black hole.  Once you’re on the wrong side you’re doomed to fall all the way in.)  On one side – our side – things look much as they do today.  Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical.  On the other side, electronic books rapidly become almost completely unrecognizable.  It’s not just the financial model which disintegrates.  As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform.  The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words.  Each of these fragments points directly at its antecedent and descendant, or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each.  The primacy of the single narrative can not withstand the centrifugal force of the medium, any more than the newspaper or the magazine could.  Texts will present themselves as intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both.  This will completely confound our expectations of linearity in the text.

We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward, a fault more of our nervous systems than our browsers.  We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on.  This is the greatest threat to the book, that it simply expands beyond our ability to focus upon it.  Our consciousness can entertain a universe of thought, but it can not entertain the entire universe at once.  Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it.  With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason, significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.

What ever happened to the book?  It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever.  This is not an ending, any more than birth is an ending.  But it is a transition, at least as profound and comprehensive as the invention of moveable type.  It’s our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book.  Transitions are chaotic, but they are also fecund.  The seeds of the new grow in the humus of the old.  (And if it all seems sudden and sinister, I’ll simply note that Nietzsche said that a new era nearly always looks demonic to the age it obsolesces.)

III:  Finnegans Wiki

So what of Aristotle?  What does this mean for the narrative?  It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts.  But what about stories?  From time out of mind we have listened to stories told by the campfire.  The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale.  For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.

Will we lose all of this?  Can narratives stand up against the centrifugal forces of hypertext?  Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form.  The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind.  There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot.  A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity. Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force.  We occasionally stay up all night reading a book that we ‘couldn’t put down’, precisely because of this momentum.  It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.

This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books.  Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet.  How can any text hope to stand against that?

And yet, some do.  Children unplugged to read each of the increasingly-lengthy Harry Potter novels, as teenagers did for the Twilight series.  Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination.  None of this is high literature, but it is literature capable of resisting all our alluring distractions.  This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc.  We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad.  That is one direction, a direction literary publishers will pursue, because that’s where the money lies.

There are two other paths open for literature, nearly diametrically opposed.  The first was taken by JRR Tolkien in The Lord of the Rings.  Although hugely popular, the three-book series has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating.  Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it.  And although readers do finish the book, in a very real sense they do not leave that universe.  The fantasy genre, which Tolkien single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations.  Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books.  Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy.  Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers.  This is another direction for the book.  While every author will not be a Tolkien, a few authors will work hard to create a universe so potent and broad that readers will be tempted to inhabit it.  (Some argue that this is the secret of JK Rowling’s success.)

Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext.  There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures.  But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas.  The book was written and published before the digital computer had been invented, yet it even features an innovation reminiscent of hypertext.  That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power.  It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite.  The text is overloaded with meaning, so much so that the mind can’t take it all in.  Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant.  But there is another possibility.  In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed.  In this, Finnegans Wake could be seen as a type of science fiction: not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works), but rather a text that pointed the way to what all texts would become, a performance by example.  As texts become electronic, as they melt and dissolve and link together densely, meaning multiplies exponentially.  Every sentence, and every word in every sentence, can send you flying in almost any direction.  The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.

It has been said that all of human culture could be reconstituted from Finnegans Wake.  As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture.  Everything will be there, all strung together.  And that’s what happened to the book.

Inflection Points

I: The Universal Solvent

I have to admit that I am in awe of iTunes University. It’s just amazing that so many well-respected universities – Stanford, MIT, Yale, and Uni Melbourne – are willing to put their crown jewels – their lectures – online for everyone to download. It’s outstanding when even one school provides a wealth of material, but as other schools provide their own material, we get to see some of the virtues of crowdsourcing. First, you have a virtuous cycle: as more material is shared, more material will be made available to share. After the virtuous cycle gets going, it’s all about a flight to quality.

When you have half a dozen – or a hundred – lectures on calculus, which one do you choose? The one featuring the best lecturer with the best presentation skills, the best examples, and the best math jokes – of course. This is my only complaint with iTunes University – you can’t rate the various lectures on offer. You can know which ones have been downloaded most often, but that’s not precisely the same thing as which calculus seminar or which sociology lecture is the best. So as much as I love iTunes University, I see it as halfway there. Perhaps Apple didn’t want to turn iTunes U into a popularity contest, but, without that vital bit of feedback, it’s nearly impossible for us to winnow the wheat from the educational chaff.

This is something that has to happen inside the system; it could happen across a thousand educational blogs spread out across the Web, but then it’s too diffuse to be really helpful. The reviews have to be coordinated and collated – just as with RateMyProfessors.com.

Say, that’s an interesting point. Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University? If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings. And as iTunes University gets bigger and bigger, RateMyLectures.com becomes ever more important, the ultimate guide to the ultimate source of educational multimedia on the Internet. One needs the other to be wholly useful; without ratings iTunes U is just an undifferentiated pile of possibilities. But with ratings, iTunes U becomes a highly focused and effective tool for digital education.
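
To make that concrete, here is a minimal sketch of the kind of ranking a hypothetical RateMyLectures.com would need. Every name, score and threshold in it is invented: contributed ratings are pooled per lecture, and the lectures on a subject are ordered by average score, with a minimum review count so that a single enthusiastic review can't claim the top spot.

```python
# A hypothetical sketch of rating aggregation and 'flight to quality' ranking.
from collections import defaultdict

MIN_REVIEWS = 5               # assumed threshold before a lecture can be ranked

ratings = defaultdict(list)   # (subject, lecture_id) -> list of scores, 1..5

def rate(subject, lecture_id, score):
    # Record one contributed review.
    ratings[(subject, lecture_id)].append(score)

def best_lectures(subject, limit=3):
    # Return the top lectures for a subject, best first.
    scored = []
    for (subj, lecture_id), scores in ratings.items():
        if subj == subject and len(scores) >= MIN_REVIEWS:
            average = sum(scores) / float(len(scores))
            scored.append((average, len(scores), lecture_id))
    scored.sort(reverse=True)  # highest average, then most reviews, first
    return [(lecture, avg, count) for avg, count, lecture in scored[:limit]]
```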

Now let’s cast our minds ahead a few semesters: iTunes U is bigger and better than ever, and RateMyLectures.com has benefited from the hundreds of thousands of contributed reviews. Those reviews extend beyond the content in iTunes U, out into YouTube and Google Video and Vimeo and Blip.tv and wherever else people are creating lectures and putting them online. Now anyone can come by the site and discover the absolute best lecture on almost any subject they care to research. The net is now cast globally; I can search for the best lecture on Earth, so long as it’s been captured and uploaded somewhere, and someone’s rated it on RateMyLectures.com.

All of a sudden we’ve imploded the boundaries of the classroom. The lecture can come from the US, or the UK, or Canada, or New Zealand, or any other country. Location doesn’t matter – only its rating as ‘best’ matters. This means that every student, every time they sit down at a computer, already has or will soon have available the absolute best lectures, globally. That’s just a mind-blowing fact. It grows very naturally out of our desire to share and our desire to share ratings about what we have shared. Nothing extraordinary needed to happen to produce this entirely extraordinary state of affairs.

The network is acting like a universal solvent, dissolving all of the boundaries that have kept things separate. It’s not just dissolving the boundaries of distance – though it is doing that – it’s also dissolving the boundaries of preference. Although there will always be differences in taste and delivery, some instructors are simply better lecturers – in better command of their material – than others. Those instructors will rise to the top. Just as RateMyProfessors.com has created a global market for the lecturers with the highest ratings, RateMyLectures.com will create a global market for the best performances, the best material, the best lessons.

That RateMyLectures.com is only a hypothetical shouldn’t put you off. Part of what’s happening at this inflection point is that we’re all collectively learning how to harness the network for intelligence augmentation – Engelbart’s final triumph. All we need do is identify an area which could benefit from knowledge sharing and, sooner rather than later, someone will come along with a solution. I’d actually be very surprised if a service a lot like RateMyLectures.com doesn’t already exist. It may be small and unimpressive now. But Wikipedia was once small and unimpressive. If it’s useful, it will likely grow large enough to be successful.

Of course, lectures alone do not an education make. Lectures are necessary but are only one part of the educational process. Mentoring and problem solving and answering questions: all of these take place in the very real, very physical classroom. The best lectures in the world are only part of the story. The network is also transforming the classroom from the inside out, melting it down and forging it into something that looks quite a bit different from the classroom we’ve grown familiar with over the last 50 years.

II: Fluid Dynamics

If we take the examples of RateMyProfessors.com and RateMyLectures.com and push them out a little bit, we can see the shape of things to come. Spearheaded by Stanford University and the Massachusetts Institute of Technology, both of which have placed their entire set of lectures online through iTunes University, these educational institutions assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, but it discounts the possibility that some individuals or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open” school, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.
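
As a thought experiment, here is a bare-bones sketch of the data such an ‘Open’ school might keep: students pooling a shared goal, then recruiting an instructor against it. Every name, field and matching rule below is hypothetical; the point is only how little machinery the coordination actually requires.

    from dataclasses import dataclass, field

    @dataclass
    class CourseRequest:
        goal: str                                   # what the pooled students want to learn
        students: list = field(default_factory=list)
        offers: list = field(default_factory=list)  # (instructor, weekly_fee) bids

        def best_offer(self):
            # The simplest possible rule: the cheapest bid wins.
            return min(self.offers, key=lambda offer: offer[1]) if self.offers else None

    calculus = CourseRequest(goal="First-year calculus, online, evenings")
    calculus.students += ["student_01", "student_02", "student_03"]
    calculus.offers += [("instructor_a", 900), ("instructor_b", 650)]

    print(calculus.best_offer())  # ('instructor_b', 650) – the students have hired their own teacher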

In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In an age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them to make sense of it, contextualizing and informing their understanding, even as the students continue to work their way through the ever-growing set of information. The instructor cannot know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. The instructors facilitate and mentor, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.

The administration has gone, the instructor’s role has evolved, now what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If it can happen entirely online, that will be the classroom. If it requires substantial presence with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode, vanishing online, and explode: the world will become the classroom.

This much, then, can already be predicted from current trends: as the network begins to destabilize the institutional hierarchies in education, everything else becomes inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst-case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by those students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.

III: Digital Citizenship

Obviously, much of what I’ve described here in the “melting down” of the educational process applies first and foremost to university students. That’s where most of the activity is taking place. But I would argue that it only begins with university students. From there – just like Facebook – it spreads across the gap between tertiary and secondary education, and into the high schools and colleges.

This is significant and interesting because it’s at this point that we, within Australia, run headlong into the Government’s plan to provide laptops for all year 9 through year 12 students. Some schools will start earlier; there’s a general consensus among educators that year 7 is the earliest a student should be trusted to behave responsibly with their “own” computer. Either way, the students will be fully equipped and able to use all of the tools at hand to manage their own education.

But will they? Some of this is a simple question of discipline: will the students be disciplined enough to take an ever-more-active role in the co-production of their education? As ever, the question is neither black nor white; some students will demonstrate the qualities of discipline needed to allow them to assume responsibility for their education, while others will not.

But, somewhere along here, there’s the presumption of some magical moment during the secondary school years, when the student suddenly learns how to behave online. And we already know this isn’t happening. We see too many incidents where students make mistakes, behaving badly without fully understanding that the whole world really is watching.

In the early part of this year I did a speaking tour with the Australian Council of Educational Researchers; during the tour I did a lot of listening. One thing I heard loud and clear from the educators is that giving a year 7 student a laptop is the functional equivalent of giving them a loaded gun. And we shouldn’t be surprised, when we do this, when there are a few accidental – or volitional – shootings.

I mentioned this in a talk to TAFE educators last week, and one of the attendees suggested that we needed to teach “Digital Citizenship”. I’d never heard the phrase before, but I’ve taken quite a liking to it. Of course, by the time a student gets to TAFE, the damage is done. We shouldn’t start talking about digital citizenship in TAFE. We should be talking about it from the first days of secondary education. And it’s not something that should be confined to the school: parents are on the hook for this, too. Even when the parents are not digitally literate, they can impart the moral and ethical lessons of good behavior to their children, lessons which will transfer to online behavior.

Make no mistake, without a firm grounding in digital citizenship, a secondary student can’t hope to make sense of the incredibly rich and impossibly distracting world afforded by the network. Unless we turn down the internet connection – which always seems like the first option taken by administrators – students will find themselves overwhelmed. That’s not surprising: we’ve taught them few skills to help them harness the incredible wealth available. In part that’s because we’re only just learning those skills ourselves. But in part it’s because we would have to relinquish control. We’re reluctant to do that. A course in digital citizenship would help both students and teachers feel more at ease with one another when confronted by the noise online.

Make no mistake, this inflection point in education is inevitably going to cross the gap between tertiary and secondary schools and students. Students will be able to do for themselves in ways that were never possible before. None of this means that the teacher or even the administrator has necessarily become obsolete. But the secondary school of the mid-21st century may look a lot more like a website than a campus. The classroom will have a fluid look, driven by the teacher, the students and the subject material.

Have we prepared students for this world? Have we given them the ability to make wise decisions about their own education? Or are we like those university administrators who mutter about how RateMyProfessors.com has ruined all their carefully-laid plans? The world where students were simply the passive consumers of an educational product is coming to an end. There are other products out there, clamoring for attention – you can thank Apple for that. And YouTube.

Once we get through this inflection point in the digital revolution in education, we arrive in a landscape that’s literally mind-blowing. We will each have access to educational resources far beyond anything on offer at any other time in human history. The dream of life-long learning will be simply a few clicks away for most of the billion people on the Internet, and many of the four billion who use mobiles. It will not be an easy transition, nor will it be perfect on the other side. But it will be incredible, a validation of everything Douglas Engelbart demonstrated forty years ago, and an opportunity to create a truly global educational culture, focused on excellence, and dedicated to serving all students, everywhere.

Crowdsource Yourself

I: Ruby Anniversary

Today is a very important day in the annals of computer science. It’s the anniversary of the most famous technology demo ever given. Not, as you might expect, the first public demonstration of the Macintosh (which happened in January 1984), but something far older and far more important. Forty years ago today, December 9th, 1968, in San Francisco, a small gathering of computer specialists came together to get their first glimpse of the future of computing. Of course, they didn’t know that the entire future of computing would emanate from this one demo, but the next forty years would prove that point.

The maestro behind the demo – leading a team of developers – was Douglas Engelbart. Engelbart was a wunderkind from SRI, the Stanford Research Institute, a think-tank spun out from Stanford University to collaborate with various moneyed customers – such as the US military – on future technologies. Of all the futurist technologists, Engelbart was the future-i-est.

In the middle of the 1960s, Engelbart had come to an uncomfortable realization: human culture was growing progressively more complex, while human intelligence stayed within the same comfortable range we’d known for thousands of years. In short order, Engelbart assessed, our civilization would start to collapse from its own complexity. The solution, Engelbart believed, would come from tools that could augment human intelligence. Create tools to make men smarter, and you’d be able to avoid the inevitable chaotic crash of an overcomplicated civilization.

To this end – and with healthy funding from both NASA and DARPA – Engelbart began work on the oN-Line System, or NLS. The first problem in intelligence augmentation: how do you make a human being smarter? The answer: pair humans up with other humans. In other words, networking human beings together could increase the intelligence of every human being in the network. The NLS wasn’t just the online system, it was the networked system. Every NLS user could share resources and documents with other users. This meant NLS users would need to manage these resources in the system, so they needed high-quality computer screens, and a windowing system to keep the information separated. They needed an interface device to manage the windows of information, so Engelbart invented something he called a ‘mouse’.

I’ll cut to the chase: that roomful of academics at the Fall Joint Computer Conference saw the first broadly networked system featuring raster displays – the forerunner of all displays in use today; windowing; manipulation of on-screen information using a mouse; document storage and manipulation using the first hypertext system ever demonstrated; and videoconferencing between Engelbart, demoing in San Francisco, and his colleagues 30 miles away in Menlo Park.

In other words, in just one demo, Engelbart managed to completely encapsulate absolutely everything we’ve been working toward with computers over the last 40 years. The NLS was easily 20 years ahead of its time, but its influence is so pervasive, so profound, so dominating, that it has shaped nearly every major problem in human-computer interface design since its introduction. We have all been living in Engelbart’s shadow, basically just filling out the details in his original grand mission.

Of all the technologies rolled into the NLS demo, hypertext has arguably had the most profound impact. Known as the “Journal” on NLS, it allowed all the NLS users to collaboratively edit or view any of the documents in the NLS system. It was the first groupware application, the first collaborative application, the first wiki application. And all of this more than 20 years before the Web came into being. To Engelbart, the idea of networked computers and hypertext went hand-in-hand; they were indivisible, absolutely essential components of an online system.

It’s interesting to note that although the Internet has been around since 1969 – nearly as long as the NLS – it didn’t take off until the advent of a hypertext system – the World Wide Web. A network is mostly useless without a hypermedia system sitting on top of it, and multiplying its effectiveness. By itself a network is nice, but insufficient.

So, more than for any other single individual in the field of computer science, we find ourselves living in the world that Douglas Engelbart created. We use computers with raster displays and manipulate windows of hypertext information using mice. We use tools like video conferencing to share knowledge. We augment our own intelligence by turning to others.

That’s why the “Mother of All Demos,” as it’s known today, is probably the most important anniversary in all of computer science. It set the stage for the world we live in, more so than we recognized even a few years ago. You see, one part of Engelbart’s revolution took rather longer to play out. This last innovation of Engelbart’s is only just beginning.

II: Share and Share Alike

In January 2002, Oregon State University, the alma mater of Douglas Engelbart, decided to host a celebration of his life and work. I was fortunate enough to be invited to OSU to give a talk about hypertext and knowledge augmentation, an interest of mine and a persistent theme of my research. Not only did I get to meet the man himself (quite an honor), I got to meet some of the other researchers who were picking up where Engelbart had left off. After I walked off stage, following my presentation, one of the other researchers leaned over to me and asked, “Have you heard of Wikipedia?”

I had not. This is hardly surprising; in January 2002 Wikipedia was only about a year old, and had all of 14,000 articles – about the same number as a children’s encyclopedia. Encyclopedia Britannica, though it had put itself behind a “paywall,” had over a hundred thousand quality articles available online. Wikipedia wasn’t about to compete with Britannica. At least, that’s what I thought.

It turns out that I couldn’t have been more wrong. Over the next few months – as Wikipedia approached 30,000 articles in English – an inflection point was reached, and Wikipedia started to grow explosively. In retrospect, what happened was this: people would drop by Wikipedia, and if they liked what they saw, they’d tell others about Wikipedia, and perhaps make a contribution. But they first had to like what they saw, and that wouldn’t happen without a sufficient number of articles, a sort of “critical mass” of information. While Wikipedia stayed beneath that critical mass it remained a toy, a plaything; once it crossed that boundary it became a force of nature, gradually then rapidly sucking up the collected knowledge of the human species, putting it into a vast, transparent and freely accessible collection. Wikipedia thrived inside a virtuous cycle where more visitors meant more contributors, which meant more visitors, which meant more contributors, and so on, endlessly, until – as of this writing – there are 2.65 million English-language articles in Wikipedia.
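
A toy simulation makes that critical-mass dynamic easier to see. The numbers and the ‘critical mass’ parameter below are invented for illustration, not drawn from Wikipedia’s actual statistics: visitors arrive in proportion to the number of articles, and the share of visitors who like what they see enough to contribute rises as the collection grows.

    def simulate(articles, months=24, visitors_per_article=2.0,
                 contrib_rate=0.02, critical_mass=30_000):
        # More articles draw more visitors; visitors only contribute if they find
        # enough to like, so the contributing share grows with the collection.
        for _ in range(months):
            visitors = visitors_per_article * articles
            like_share = articles / (articles + critical_mass)
            articles += contrib_rate * visitors * like_share
        return round(articles)

    print(simulate(14_000))  # below critical mass: two years of slow growth
    print(simulate(60_000))  # above it: the same rules compound far faster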

Wikipedia’s biggest problem today isn’t attracting contributions, it’s winnowing the wheat from the chaff. Wikipedia has constant internal debates about whether a subject is important enough to deserve an entry in its own right; whether this person has achieved sufficient standards of notability to merit a biographical entry; whether this exploration of a fictional character in a fictional universe belongs in Wikipedia at all, or might be better situated within a dedicated fan wiki. Wikipedia’s success has been proven beyond all doubt; managing that success is the task of the day.

While we all rely upon Wikipedia more and more, we haven’t really given much thought as to what Wikipedia gives us. At its most basic level, Wikipedia gives us high-quality factual information. Within its major subject areas, Wikipedia’s veracity is unimpeachable, and has been put to the test by publications such as Nature. But what do these high-quality facts give us? The ability to make better decisions.

Given that we try to make decisions about our lives based on the best available information, the better that information is, the better our decisions will be. This seems obvious when spelled out like this, but it’s something we never credit Wikipedia with. We think about being able to answer trivia questions or research topics of current fascination, but we never think that every time we use Wikipedia to make a decision, we are improving our decision making ability. We are improving our own lives.

This is Engelbart’s final victory. When I met him in 2002, he seemed mostly depressed by the advent of the Web. At that time – pre-Wikipedia, pre-Web2.0 – the Web was mostly thought of as a publishing medium, not as something that would allow the multi-way exchange of ideas. Engelbart has known for forty years that sharing information is the cornerstone to intelligence augmentation. And in 2002 there wasn’t a whole lot of sharing going on.

It’s hard to imagine the Web of 2002 from our current vantage point. Today, when we think about the Web, we think about sharing, first and foremost. The web is a sharing medium. There’s still quite a bit of publishing going on, but that seems almost an afterthought, the appetizer before the main course. I’d have to imagine that this is pleasing Engelbart immensely, as we move ever closer to the models he pioneered forty years ago. It’s taken some time for the world to catch up with his vision, but now we seem to have a tool fit for knowledge augmentation. And Wikipedia is really only one example of the many tools we have available for knowledge augmentation. Every sharing tool – Digg, Flickr, YouTube, del.icio.us, Twitter, and so on – provides an equal opportunity to share and to learn from what others have shared. We can pool our resources more effectively than at any other time in history.

The question isn’t, “Can we do it?” The question is, “What do we want to do?” How do we want to increase our intelligence and effectiveness through sharing?

III: Crowdsource Yourself

Now we come to all of you, here together for three days, to teach and to learn, to practice and to preach. Most of you are the leaders in your particular schools and institutions. Most of you have gone way out on the digital limb, far ahead of your peers. Which means you’re alone. And it’s not easy being alone. Pioneers can always be identified by the arrows in their backs.

So I have a simple proposal to put to you: these three days aren’t simply an opportunity to bring yourselves up to speed on the latest digital wizardry, they’re a chance to increase your intelligence and effectiveness, through sharing.

All of you, here today, know a huge amount about what works and what doesn’t, about curricula and teaching standards, about administration and bureaucracy. This is hard-won knowledge, gained on the battlefields of your respective institutions. Now just imagine how much it could benefit all of us if we shared it, one with another. This is the sort of thing that happens naturally and casually at a forum like this: a group of people will get to talking, and, sooner or later, all of the battle stories come out. Like old Diggers talking about the war.

I’m asking you to think about this casual process a bit more formally: how can you use the tools on offer to capture and share everything you’ve learned? If you don’t capture it, it can’t be shared. If you don’t share it, it won’t add to our intelligence. So, as you’re learning how to podcast or blog or set up a wiki, give a thought to how these tools can be used to multiply our effectiveness.

I ask you to do this because we’re getting close to a critical point in the digital revolution – something I’ll cover in greater detail when I talk to you again on Thursday afternoon. Where we are right now is at an inflection point. Things are very fluid, and could go almost any direction. That’s why it’s so important we learn from each other: in that pooled knowledge is the kind of intelligence which can help us to make better decisions about the digital revolution in education. The kinds of decisions which will lead to better outcomes for kids, fewer headaches for administrators, and a growing confidence within the teaching staff.

Don’t get me wrong: these tools aren’t a panacea. Far from it. They’re simply the best tools we’ve got, right now, to help us confront the range of thorny issues raised by the transition to digital education. You can spend three days here and go back to your own schools none the wiser. Or you can share what you’ve learned and leave here with the best that everyone has to offer.

There’s a word for this process, a word which powers Wikipedia and a hundred thousand other websites: “crowdsourcing”. The basic idea is encapsulated in the old proverb “Many hands make light work.” The two hundred of you here today can all pitch in and make light work for yourselves. Or not.

Let me tell you another story, which may help seal your commitment to share what you know. In May of 1999, Silicon Valley software engineer John Swapceinski started a website called “Teacher Ratings.” Individuals could visit the site and fill in a brief form with details about their school, and their teacher. That done, they could rate the teacher’s capabilities as an instructor. The site started slowly, but, as is always the case with these sorts of “crowdsourced” ventures, as more ratings were added to the site, it became more useful to people, which meant more visitors, which meant more ratings, which meant it became even more useful, which meant more visitors, which meant more ratings, etc.

Somewhere in the middle of this virtuous cycle the site changed its name to “RateMyProfessors.com” and changed hands twice. For the last two years, RateMyProfessors.com has been owned by MTV, which knows a thing or two about youth markets, and can see one in a site that has nine million reviews of one million teachers, professors and instructors in the US, Canada and the UK.

Although the individual action of sharing some information about an instructor seems innocuous enough, in aggregate the effect is entirely revolutionary. A student about to attend university in the United States can check out all of her potential instructors before she signs up for a single class. She can choose to take classes only with those instructors who have received the best ratings – or, rather more perversely, only with those instructors known to be easy graders. The student is now wholly in control of her educational opportunities, going in eyes wide open, fully cognizant of what to expect before the first day of class.

Although RateMyProfessors.com has enlightened students, it has made the work of educational administrators exponentially more difficult. Students now talk, up and down the years, via the recorded ratings on the site. It isn’t possible for an institution of higher education to disguise an individual who happens to be a world-class researcher but a rather ordinary lecturer. In earlier times, schools could foist these instructors on students, who’d be stuck for a semester. This no longer happens, because RateMyProfessors.com effectively warns students away from the poor-quality teachers.

This one site has undone all of the neat work of tenure boards and department chairs throughout the entire world of academia. A bad lecturer is no longer a department’s private little secret, but publicly available information. And a great lecturer is no longer a carefully hoarded treasure, but a hot commodity on a very public market. The instructors with the highest ratings on RateMyProfessors.com find themselves in demand, receiving outstanding offers (with tenure) from other universities. All of this plotting, which used to be hidden from view, is now fully revealed. The battle for control over who stands in front of the classroom has now been decisively lost by the administration in favor of the students.

Whether it’s Wikipedia, or RateMyProfessors.com, or the promise of your own work over these next three days, Douglas Engelbart’s original vision of intelligence augmentation holds true: it is possible for us to pool our intellectual resources, and increase our problem-solving capacity. We do it every time we use Wikipedia; students do it every time they use RateMyProfessors.com; and I’m asking you to do it, starting right now. Good luck!

The Alexandrine Dilemma

I: Crash Through or Crash

We live in a time of wonders, and, more often than not, remain oblivious to them until they fail catastrophically. On the 19th of October, 1999 we saw such a failure. After years of preparation, on that day the web-accessible version of Encyclopedia Britannica went on-line. The online version of Britannica contained the complete, unexpurgated content of the many-volume print edition, and it was freely available, at no cost to its users.

I was not the only person who dropped by on the 19th to sample Britannica’s wares. Several million others joined me – all at once. The Encyclopedia’s few servers suddenly succumbed to the overload of traffic – the servers crashed, the network connections crashed, everything crashed. When the folks at Britannica conducted a forensic analysis of the failure, they learned something shocking: the site had crashed because, within its first hours, it had attracted nearly fifty million visitors.

The Web had never seen anything like that before. Yes, there were search engines such as Yahoo! and AltaVista (and even Google), but destination websites never attracted that kind of traffic. Britannica, it seemed, had tapped into a long-standing desire for high-quality factual information. As the gold-standard reference work in the English language, Britannica needed no advertising to bring traffic to its web servers – all it need do was open its doors. Suddenly, everyone doing research, or writing a paper, or just plain interested in learning more about something tried to force themselves through Britannica’s too narrow doorway.

Encyclopedia Britannica ordered some more servers, and installed a bigger pipe to the Internet, and within a few weeks was back in business. Immediately Britannica became one of the most-trafficked sites on the Web, as people came through in search of factual certainty. Yet for all of that traffic, Britannica somehow managed to lose money.

The specifics of this elude my understanding. The economics of the Web are very simple: eyeballs equals money. The more eyeballs you have, the more money you earn. That’s as true for Google as for Britannica. Yet, somehow, despite having one of the busiest websites in the world, Britannica lost money. For that reason, just a few months after it freely opened its doors to the public, Britannica hid itself behind a “paywall”, asking seven dollars a month as a fee to access its inner riches. Immediately, traffic to Britannica dropped to perhaps a hundredth of its former numbers. Britannica did not convert many of its visitors into paying customers: there may be a strong desire for factual information, but even so, most people did not consider it worth paying for. Instead, individuals continued to search for a freely available, high-quality source of factual information.

Into this vacuum Wikipedia was born. The encyclopedia that anyone can edit has always been freely available and, because of its free-content licensing, can be freely copied. Wikipedia was the modern birth of “crowdsourcing”, the idea that vast numbers of anonymous individuals can labor together (at a distance) on a common project. Wikipedia’s openness in every respect – transparent edits, transparent governance, transparent goals – encouraged participation. People were invited to come by and sample the high-quality factual information on offer – and were encouraged to leave their own offerings. The high-quality facts encouraged visitors; some visitors would leave their own contributions, high-quality facts which would encourage more visitors, and so, in a “virtuous cycle”, Wikipedia grew as large as, then far larger than, Encyclopedia Britannica.

Today, we don’t even give a thought to Britannica. It may be the gold-standard reference work in the English language, but no one cares. Wikipedia is good enough, accurate enough (although Wikipedia was never intended to be a competitor to Britannica, by 2005 Nature was doing comparative testing of article accuracy) and much more widely available. Britannica has had its market eaten up by Wikipedia, a market it dominated for two hundred years. It wasn’t the server crash that doomed Britannica; when the business minds at Britannica tried to crash through into profitability, that’s when they crashed into the paywall they themselves had established. Watch carefully: over the next decade we’ll see the somewhat drawn-out death of Britannica as it becomes ever less relevant in a Wikipedia-dominated landscape.

Just a few weeks ago, the European Union launched a new website, Europeana. Europeana is a repository, a collection of the cultural heritage of Europe, made freely available to everyone in the world via the Web. From Descartes to Darwin to Debussy, Europeana hopes to become the online cultural showcase of European thought.

The creators of Europeana scoured Europe’s cultural institutions for items to be digitized and placed within its own collection. Many of these institutions resisted their requests – they didn’t see any demand for these items coming from online communities. As it turns out, these institutions couldn’t have been more wrong. Europeana launched on the 20th of November, and, like Britannica before it, almost immediately crashed. The servers overloaded as visitors from throughout the EU came in to look at the collection. Europeana has been taken offline for a few months, as the EU buys more servers and fatter pipes to connect it all to the Internet. Sometime late in 2008 it will relaunch, and, if its brief popularity is any indication, we can expect Europeana to become another important online resource, like Wikipedia.

All three of these examples prove that there is an almost insatiable interest in factual information made available online, whether the dry articles of Wikipedia or the more bouncy cultural artifacts of Europeana. It’s also clear that arbitrarily restricting access to factual information simply redirects the flow around the institution imposing the restriction. Britannica could be earning over a hundred million dollars a year from advertising – that’s what Wikipedia is projected to be able to earn, just from banner advertisements, if it ever accepted advertising. But Britannica chose to lock itself away from its audience. That is the one unpardonable sin in the network era: under no circumstances do you take yourself off the network. We all have to sink or swim, crash through or crash, in this common sea of openness.

I only hope that the European museums which have donated works to Europeana don’t suddenly grow possessive when the true popularity of their works becomes a proven fact. That would be messy, and would only hurt the institutions. Perhaps they’ll heed the lesson of Britannica; but it seems as though many of our institutions are mired in older ways of thinking, where selfishness and protecting the collection are seen as cardinal virtues. There’s a new logic operating: the more something is shared, the more valuable it becomes.

II: The Universal Library

Just a few weeks ago, Google took this idea to new heights. In a landmark settlement of a long-running copyright dispute with book publishers in the United States, Google agreed to pay a license fee to those publishers for their copyrights – even for books out of print. In return, the publishers are allowing Google to index, search and display all of the books they hold under copyright. Google already provides the full text of many books whose copyright has expired – its efforts scanning whole libraries at Harvard and Stanford have given Google access to many such texts. Each of these texts is indexed and searchable – just as with the books under copyright – but, in this case, the full text is available through Google’s book reader tool. For works under copyright but out of print, Google is now acting as the sales agent, translating document searches into book sales for the publishers, who may now see huge “long tail” revenues generated from their catalogues.

Since Google is available from every computer connected to the Internet (given that it is available on most mobile handsets, it’s available to nearly every one of the four billion mobile subscribers on the planet), this new library – at least seven million volumes – has become available everywhere. The library has become coextensive with the Internet.

This was an early dream both of the pioneers of personal computing and, later, of the Web. When CD-ROM was introduced, twenty years ago, it was hailed as the “new papyrus,” capable of storing vast amounts of information in a richly hyperlinked format. As the limits of CD-ROM became apparent, the Web became the repository of the hopes of all the archivists and bibliophiles who dreamed of a new Library of Alexandria, a universal library with every text in every tongue freely available to all.

We have now gotten as close to that ideal as copyright law will allow; everything is becoming available, though perhaps not as freely as a librarian might like. (For libraries, Google has established subscription-based fees for access to books covered by copyright.) Within another few years, every book within arm’s length of Google (and Google has many, many arms) will be scanned, indexed and accessible through books.google.com. This library can be brought to bear everywhere anyone sits down before a networked screen. This library can serve billions, simultaneously, yet never exhaust its supply of texts.

What does this mean for the library as we have known it? Has Google suddenly obsolesced the idea of a library as a building stuffed with books? Is there any point in going into the stacks to find a book, when that same book is equally accessible from your laptop? Obviously, books are a better form factor than our laptops – five hundred years of human interface design have given us a format which is admirably well-adapted to our needs – but in most cases, accessibility trumps ease-of-use. If I can have all of the world’s books online, that easily bests the few I can access within any given library.

In a very real sense, Google is obsolescing the library, or rather, one of the features of the library, the feature we most identify with the library: book storage. Those books are now stored on servers, scattered in multiple, redundant copies throughout the world, and can be called up anywhere, at any time, from any screen. The library has been obsolesced because it has become universal; the stacks have gone virtual, sitting behind every screen. Because the idea of the library has become so successful, so universal, it no longer means anything at all. We are all within the library.

III: The Necessary Army

With the triumph of the universal library, we must now ask: What of the librarians? If librarians were simply the keepers-of-the-books, we would expect them to fade away into an obsolescence similar to the physical libraries. And though this is the popular perception of the librarian, in fact that is perhaps the least interesting of the tasks a librarian performs (although often the most visible).

The central task of the librarian – if I can be so bold as to state something categorically – is to bring order to chaos. The librarian takes a raw pile of information and makes it useful. How that happens differs from situation to situation, but all of it falls under the rubric of library science. At its most visible, the book cataloging systems used in all libraries represent the librarian’s best efforts to keep an overwhelming amount of information well-managed and well-ordered. A good cataloging system makes a library easy to use, whatever its size, however many volumes are available through its stacks.

It’s interesting to note that books.google.com uses Google’s text-search-based interface. Based on my own investigations, you can’t type in a Library of Congress catalog number and get a list of books under that subject area. Google seems to have abandoned – or ignored – library science in its own book project. I can’t tell you why this is; I can only tell you that it looks very foolish and naïve. It may be that Google’s army of PhDs does not include many library scientists. Otherwise, why would they have made such a beginner’s mistake? It smells of an amateur effort from a firm which is not known for amateurism.

It’s here that we can see the shape of the future, both in the immediate and the longer term. People believe that because we’re done with the library, we’re done with library science. They could not be more wrong. In fact, because the library is universal, library science now needs to be a universal skill set, more broadly taught than at any time previous to this. We have become a data-centric culture, and are presently drowning in data. It’s difficult enough for us to keep our collections of music and movies well organized; how can we propose to deal with collections that are a hundred thousand times larger?

This is not just idle speculation; we are rapidly becoming a data-generating species. Where just a few years ago we might have generated only a small amount of data on a given day or in a given week, these days we generate data almost continuously. Consider: every text message sent, every email received, every snap of a camera or camera phone, every slip of video shared amongst friends. It all adds up, and it all needs to be managed and stored and indexed and retrieved with some degree of ease. Otherwise, in a few years’ time the recent past will have disappeared into the fog of unsearchability. In order to keep a connection to our past data selves, we are all going to need to become library scientists.

All of which puts you in a key position for the transformation already underway. You get to be the “life coaches” for our digital lifestyle, because, as these digital artifacts start to weigh us down (like Jacob Marley’s lockboxes), you will provide the guidance that will free us from these weights. Now that we’ve got it, it’s up to you to tell us how we find it. Now that we’ve captured it, it’s up to you to tell us how we index it.

We have already taken some steps along this journey: much of the digital media we create can now be “tagged”, that is, assigned keywords which provide context and semantic value for the media. We each create “clouds” of our own tags which evolve into “folksonomies”, or home-made taxonomies of meaning. Folksonomies and tagging are useful, but we lack the common language needed to make our digital treasures universally useful. If I tag a photograph with my own tags, that means the photograph is more useful to me; but it is not necessarily more broadly useful. Without a common, public taxonomy (a cataloging system), tagging systems will not scale into universality. That universality has value, because it allows us to extend our searches, our view, and our capability.
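
A small sketch shows the gap. The tags, synonyms and file names below are my own invented examples; the point is only that two people’s private folksonomies describe the same things in incompatible words until a shared, public vocabulary maps between them.

    my_tags = {"photo_0412.jpg": ["bday", "mum", "melb"]}
    your_tags = {"IMG_2291.jpg": ["birthday", "mother", "melbourne"]}

    # A public taxonomy - a cataloging system - maps private tags onto shared concepts.
    taxonomy = {
        "bday": "birthday", "birthday": "birthday",
        "mum": "mother", "mother": "mother",
        "melb": "Melbourne", "melbourne": "Melbourne",
    }

    def normalise(tagged_items):
        return {item: sorted({taxonomy.get(tag, tag) for tag in tags})
                for item, tags in tagged_items.items()}

    print(normalise(my_tags))    # {'photo_0412.jpg': ['Melbourne', 'birthday', 'mother']}
    print(normalise(your_tags))  # {'IMG_2291.jpg': ['Melbourne', 'birthday', 'mother']}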

I could go on and on, but the basic point is this: wherever data is being created, that’s the opportunity for library science in the 21st century. Since data is being created almost absolutely everywhere, the opportunities for library science are similarly broad. It’s up to you to show us how it’s done, lest we drown in our own creations.

Some of this won’t come to pass until you move out of the libraries and into the streets. Library scientists have to prove their worth; most people don’t understand that they’re slowly drowning in a sea of their own information. This means you have to demonstrate other ways of working that are self-evident in their effectiveness. The proof of your value will be obvious. It’s up to you to throw the rest of us a life-preserver; once we’ve caught it, once we’ve caught on, your future will be assured.

The dilemma that confronts us is that for the next several years, people will be questioning the value of libraries; if books are available everywhere, why pay the upkeep on a building? Yet the value of a library is not the books inside, but the expertise in managing data. That can happen inside of a library; it has to happen somewhere. Libraries could well evolve into the resource the public uses to help manage their digital existence. Librarians will become partners in information management, indispensable and highly valued.

In a time of such radical and rapid change, it’s difficult to know exactly where things are headed. We know that books are headed online, and that libraries will follow. But we still don’t know the fate of librarians. I believe that the transition to a digital civilization will founder without a lot of fundamental input from librarians. We are each becoming archivists of our lives, but few of us have training in how to manage an archive. You are the ones who have that knowledge. Consider: the more something is shared, the more valuable it becomes. The more you share your knowledge, the more invaluable you become. That’s the future that waits for you.

Finally, consider the examples of Britannica and Europeana. The demand for those well-curated collections of information far exceeded even the wildest expectations of their creators. Something similar lies in store for you. When you announce yourselves to the broader public as the individuals empowered to help us manage our digital lives, you’ll doubtless find yourselves overwhelmed with individuals who are seeking to benefit from your expertise. What’s more, to deal with the demand, I expect Library Science to become one of the hot subjects of university curricula of the 21st century. We need you, and we need a lot more of you, if we ever hope to make sense of the wonderful wealth of data we’re creating.

This, That, and the Other

I. THIS.

If a picture paints a thousand words, you’ve just absorbed a million, the equivalent of one-and-a-half Bibles. That’s the way it is, these days. Nothing is small, nothing discrete, nothing bite-sized. Instead, we get the fire hose, 24 x 7, a world in which connection and community have become so colonized by intensity and amplification that nearly nothing feels average anymore.

Is this what we wanted? It’s become difficult to remember the before-time, how it was prior to an era of hyperconnectivity. We’ve spent the last fifteen years working out the most excellent ways to establish, strengthen and multiply the connections between ourselves. The job is nearly done, but now, as we put down our tools and pause to catch our breath, here comes the question we’ve dreaded all along…

Why. Why this?

I gave this question no thought at all as I blithely added friends on Twitter, shot past the limits of Dunbar’s Number, through the ridiculous, and then outward, approaching the sheer insanity of 1200 so-called “friends” whose tweets now scroll by so quickly that I can’t focus on anyone saying anything, because the motion blur is such that by the time I think to answer in reply, the tweet in question has scrolled off the end of the world.

This is ludicrous, and can not continue. But this is vital and can not be forgotten. And this is the paradox of the first decade of the 21st century: what we want – what we think we need – is making us crazy.

Some of this craziness is biological.

Eleven million years of evolution, back to Proconsul, the ancestor of all the hominids, have crafted us into quintessentially social creatures. We are human to the degree we are in relationship with our peers. We grew big forebrains, to hold banks of the chattering classes inside our own heads, so that we could engage these simulations of relationships in never-ending conversation. We never talk to ourselves, really. We engage these internal others in our thoughts, endlessly rehearsing and reliving all of the social moments which comprise the most memorable parts of life.

It’s crowded in there. It’s meant to be. And this has only made it worse.

No man is an island. Man is only man when he is part of a community. But we have limits. Homo sapiens sapiens spent two hundred thousand years exploring the resources afforded by a bit more than a liter of neural tissue. The brain has physical limits (we have to pass through the birth canal without killing our mothers), so our internal communities top out at Dunbar’s magic Number of 150, plus or minus a few.

Dunbar’s Number defines the crucial threshold between a community and a mob. Communities are made up of memorable and internalized individuals; mobs are unique in their lack of distinction. Communities can be held in one’s head, can be tended and soothed and encouraged and cajoled.

Four years ago, when I began my research into sharing and social networks, I asked a basic question: Will we find some way to transcend this biological limit, break free of the tyranny of cranial capacity, grow beyond the limits of Dunbar’s Number?

After all, we have the technology. We can hyperconnect in so many ways, through so many media, across the entire range of sensory modalities, that it is as if the material world, which we have fashioned in our own image, wants nothing more than to boost our capacity for relationship.

And now we have two forces in opposition, both originating in the mind. Our old mind hews closely to the community and Dunbar’s Number. Our new mind seeks the power of the mob, and the amplification of numbers beyond imagination. This is the central paradox of the early 21st century, this is the rift which will never close. On one side we are civil, and civilized. On the other we are awesome, terrible, and terrifying. And everything we’ve done in the last fifteen years has simply pushed us closer to the abyss of the awesome.

We can not reasonably put down these new weapons of communication, even as they grind communities beneath them like so many old and brittle bones. We can not turn the dial of history backward. We are what we are, and already we have a good sense of what we are becoming. It may not be pretty – it may not even feel human – but this is things as they are.

When the historians of this age write their stories, a hundred years from now, they will talk about amplification as the defining feature of this entire era, the three-hundred-year span from the industrial revolution to the emergence of the hyperconnected mob. In the beginning, the steam engine amplified the power of human muscle – making both human slavery and animal power redundant. In the end, our technologies of communication amplified our innate social capabilities, which eleven million years of natural selection have consistently favored. Above and beyond all of our other natural gifts, those humans who communicate most effectively stand the greatest chance of passing their genes along to subsequent generations. It’s as simple as that. We talk our partners into bed, and always have.

The steam engine transformed the natural world into a largely artificial environment; the amplification of our muscles made us masters of the physical world. Now, the technologies of hyperconnectivity are translating the natural world, ruled by Dunbar’s Number, into a world ruled by the dominating influence of the maddening crowd.

We are not prepared for this. We have no biological defense mechanism. We are all going to have to get used to a constant state of being which resembles nothing so much as a stack overflow, a consistent social incontinence, as we struggle to retain some aspects of selfhood amidst the constantly eroding pressure of the hyperconnected mob.

Given this, and given that many of us here today are already in the midst of this, it seems to me that the most useful tool any of us could have, moving forward into this future, is a social contextualizer. This prosthesis – which might live in our mobiles, or our nettops, or our Bluetooth headsets – will fill our limited minds with the details of our social interactions.

This tool will make explicit that long, Jacob Marley-like train of lockboxes that are our interactions in the techno-social sphere. Thus, when I introduce myself to you for the first or the fifteen hundredth time, you can be instantly brought up to date on why I am relevant, why I matter. When all else gets stripped away, each relationship has a core of salience which can be captured (roughly), and served up every time we might meet.
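
What might that look like in practice? Here is a deliberately rough sketch – every name and field in it is hypothetical – of the kind of record such a prosthesis would keep: the salient core of a relationship, retrievable at the moment of re-meeting.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Salience:
        name: str
        why_relevant: str                       # the core of the relationship, in one line
        last_contact: date
        shared_contexts: list = field(default_factory=list)

    contacts = {
        "alex_example": Salience(
            name="Alex Example",
            why_relevant="Asked about tagging standards after a talk; runs a school wiki",
            last_contact=date(2008, 11, 3),
            shared_contexts=["education", "folksonomies"],
        ),
    }

    def brief_me(handle):
        s = contacts.get(handle)
        if s is None:
            return "No salience on record - introduce yourself properly."
        return f"{s.name}: {s.why_relevant} (last contact {s.last_contact})"

    print(brief_me("alex_example"))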

I expect that this prosthesis will come along sooner rather than later, and that it will rival Google in importance. Google took too much data and made it roughly searchable. This prosthesis will take too much connectivity and make it roughly serviceable. Given that we are primarily social beings, I expect it to be a greater innovation, and more broadly disruptive.

And this prosthesis has precedents; at Xerox PARC they have been looking into a ‘human memory prosthesis’ for sufferers from senile dementia, a device which constantly jogs human memories as to task, place, and people. The world that we’re making for ourselves, every time we connect, is a place where we are all (in some relative sense) demented. Without this tool we will be entirely lost. We’re already slipping beneath the waves. We need this soon. We need this now.

I hope you’ll get inventive.

II. THAT.

Now that we have comfortably settled into the central paradox of our current era, with a world that is working through every available means to increase our connectivity, and a brain that is suddenly overloaded and sinking beneath the demands of the sum total of these connections, we need to ask that question: Exactly what is hyperconnectivity good for? What new thing does that bring us?

The easy answer is the obvious one: crowdsourcing. The action of a few million hyperconnected individuals resulted in a massive and massively influential work: Wikipedia. But the examples only begin there. They range much further afield.

Uni students have been sharing their unvarnished assessments of their instructors and lecturers. RateMyProfessors.com has become the bête noire of the academy, because researchers who can’t teach find they have no one signing up for their courses, while the best lecturers, with the highest ratings, suddenly find themselves swarmed with offers for better teaching positions at more prestigious universities. A simple and easily implemented system of crowdsourced reviews has carefully undone all of the work of the tenure boards of the academy.

It won’t be long until everything else follows. Restaurant reviews – that’s done. What about reviews of doctors? Lawyers? Indian chiefs? Politicians? ISPs? (Oh, wait, we have that with Whirlpool.) Anything you can think of. Anything you might need. All of it will have been so extensively reviewed by such a large mob that you will know nearly everything that can be known before you sign on that dotted line.

All of this means that every time we gather together in our hyperconnected mobs to crowdsource some particular task, we become better informed, we become more powerful. Which means it becomes more likely that the hyperconnected mob will come together again around some other task suited to crowdsourcing, and will become even more powerful. That system of positive feedbacks – which we are already quite in the midst of – is fashioning a new polity, a rewritten social contract, which is making the institutions of the 19th and 20th centuries – that is, the industrial era – seem as antiquated and quaint as the feudal systems which they replaced.

It is not that these institutions are dying, but rather that they now face worthy competitors. Democracy, as an example, works well in communities, but can fail epically when it scales to mobs. Crowdsourced knowledge requires a mob, but that knowledge, once it has been collected, can be shared within a community, to hyperempower that community. This tug-of-war between communities and crowds is setting all of our institutions, old and new, vibrating like taut strings.

We already have a name for this small-pieces-loosely-joined form of social organization: it’s known as anarcho-syndicalism. Anarcho-Syndicalism emerged from the labor movements that grew in numbers and power toward the end of the 19th century. Its basic idea is simply that people will choose to cooperate more often than they choose to compete, and this cooperation can form the basis for a social, political and economic contract wherein the people manage themselves.

A system with no hierarchy, no bosses, no secrets, no politics. (Well, maybe that last one is asking too much.) Anarcho-syndicalism takes as a given that all men are created equal, and that each therefore has a say in what they choose to do.

Somewhere back before Australia became a nation, anarcho-syndicalist trade unions like the Industrial Workers of the World (or, more commonly, the ‘Wobblies’) fought armies of mercenaries in the streets of the major industrial cities of the world, trying to get the upper hand in the battle between labor and capital. They failed because capital could outmaneuver labor in the 19th century. Today the situation is precisely reversed. Capital is slow. Knowledge is fast, the quicksilver that enlivens all our activities.

I come before you today wearing my true political colors – literally. I did not pick a red jumper and black pants by some accident or wardrobe malfunction. These are the colors of anarcho-syndicalism. And that is the new System of the World.

You don’t have to believe me. You can dismiss my political posturing as sheer radicalism. But I ask you to cast your mind further than this stage this afternoon, and look out on a world which is permanently and instantaneously hyperconnected, and I ask you – how could things go any other way? Every day one of us invents a new way to tie us together or share what we know; as that invention is used, it is copied by those who see it being used.

When we imitate the successful behaviors of our hyperconnected peers, this ‘hypermimesis’ means that we are all already in a giant collective. It’s not a hive mind, and it’s not an overmind. It’s something weirdly in-between. Connected we are smarter by far than we are as individuals, but this connection conditions and constrains us, even as it liberates us. No gift comes for free.

I assert, on the weight of a growing mountain of evidence, that anarcho-syndicalism is the place where the community meets the crowd; it is the environment where this social prosthesis meets that radical hyperempowerment of capabilities.

Let me give you one example, happening right now. The classroom walls are disintegrating (and thank heaven for that), punctured by hyperconnectivity, as the outside world comes rushing in to meet the student, and the student leaves the classroom behind for the school of the world. The student doesn’t need to be in the classroom anymore, nor does the false rigor of the classroom need to be drilled into the student. There is such a hyperabundance of instruction and information available that students need a mentor more than a teacher, a guide through the wilderness, not a penitentiary to prevent their journey.

Now the students, and their parents – and the teachers and instructors and administrators – need to find a new way to work together, a communion of needs married to a community of gifts. The school is transforming into an anarcho-syndicalist collective, where everyone works together as peers, comes together in a “more perfect union”, to educate. There is no more school-as-a-place-you-go-to-get-your-book-learning. School is a state of being, an act of communion.

If this is happening to education, can medicine, and law, and politics be so very far behind? Of course not. But, unlike the elites of education, these other forces will resist and resist and resist all change, until such time as they have no choice but to surrender to mobs which are smarter, faster and more flexible than they are. In twenty years’ time, all of these institutions will be all but unrecognizable.

All of this is light-years away from how our institutions have been designed. Those institutions – all institutions – are feeling the strain of informational overload. More than that, they’re now suffering the death of a thousand cuts, as the various polities serviced by each of these institutions actually outperform them.

You walk into your doctor’s office knowing more about your condition than your doctor. You understand the implications of your contract better than your lawyer. You know more about a subject than your instructor. That’s just the way it is, in the era of hyperconnectivity.

So we must band together. And we already have. We have come together, drawn by our interests, put our shoulders to the wheel, and moved the Earth upon its axis. Most specifically, those of you in this theatre with me this arvo have made the world move, because the Web is the fulcrum for this entire transformation. In less than two decades we’ve gone from a physicists’ plaything to rewriting the rules of civilization.

But try not to think about that too much. It could go to your head.

III. THE OTHER.

Back in July, just after Vodafone had announced its meager data plans for iPhone 3G, I wrote a short essay for Ross Dawson’s Future of Media blog. I griped and bitched and spat the dummy, summing things up with this line:

“It’s time to show the carriers we can do this ourselves.”

I recommended that we start the ‘Future Australian Carrier’, or FAUC, and proceeded to invite all of my readers to get FAUCed. A harmless little incitement to action. What could possibly go wrong?

Within a day’s time a FAUC Facebook group had been started – without my input – and I was invited to join. Over the next two weeks about four hundred people joined that group, individuals who had simply had enough grief from their carriers and were looking for something better. After that, although there was some lively discussion about a possible logo, and some research into how MVNOs actually worked, nothing happened.

About a month later, individuals began to ping me, both on Facebook and via Twitter, asking, “What happened with that carrier you were going to start, Mark? Hmm?” As if somehow, I had signed on the dotted line to be chief executive, cheerleader, nose-wiper and bottle-washer for FAUC.

All of this caught me by surprise, because I certainly hadn’t signed up to create anything. I’d floated an idea, nothing more. Yet everyone was looking to me to somehow bring this new thing into being.

After I’d been hit up a few times, I started to understand where the epic !FAIL! had occurred. And the failure wasn’t really mine. You see, I’ve come to realize a sad and disgusting little fact about all of us: We need and we need and we need.

We need others to gather the news we read. We need others to provide the broadband we so greedily lap up. We need others to govern us. And god forbid we should be asked to shoulder some of the burden. We’ll fire off a thousand excuses about how we’re so time-poor even the cat hasn’t been fed in a week.

So, sure, four hundred people might sign up to a Facebook group to indicate their need for a better mobile carrier, but would any of them think of stepping forward to spearhead its organization, its cash-raising, or its leasing agreements? No. That’s all too much hard work. All any of these people needed was cheap mobile broadband.

Well, cheap don’t come cheaply.

Of course, this happens everywhere up and down the commercial chain of being. QANTAS and Telstra outsource work to southern Asia because they can’t be bothered to pay for local help, because their stockholders can’t be bothered to take a small cut in their quarterly dividends.

There’s no difference in the act itself, just in its scale. And this isn’t even raw economics. This is a case of being penny-wise and pound-foolish. Carve some profit today, spend a fortune tomorrow to recover. We see it over and over and over again (most recently and most expensively on Wall Street), but somehow the point never makes it through our thick skulls. It’s probably because we human beings find it much easier to imagine three months into the future than three years. That’s a cognitive feature which helps if you’re on the African savannah, but sucks if you’re sitting in an Australian boardroom.

So this is the other thing. The ugly thing that no one wants to look at, because to look at it involves an admission of laziness. Well folks, let me be the first one here to admit it: I’m lazy. I’m too lazy to administer my damn Qmail server, so I use Gmail. I’m too lazy to set up WebDAV, so I use Google Docs. I’m too lazy to keep my devices synced, so I use MobileMe. And I’m too lazy to start my own carrier, so instead I pay a small fortune each month to Vodafone, for lousy service.

And yes, we’re all so very, very busy. I understand this. Every investment of time is a tradeoff. Yet we seem to defer, every time, to let someone else do it for us.

And is this wise? The more I see of cloud computing, the more I am convinced that it has become a single point of failure for data communications. The decade-and-a-half that I spent as a network engineer tells me that. Don’t trust the cloud. Don’t trust its promises of redundancy. Trust no one. Keep your data in the cloud if you must, but for goodness’ sake, keep another copy locally. And another copy on the other side of the world. And another under your mattress.
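For the unconvinced, here is the whole discipline reduced to a rough Python sketch: mirror a file out of the cloud-synced folder into a second directory and check the copy against the original. The paths are hypothetical; the point is how little effort "another copy locally" actually costs.

```python
# A minimal sketch of "keep another copy locally": mirror a file out of the
# cloud-synced folder into a second directory and verify the copy by checksum.
# The paths in the example call are hypothetical; swap in your own.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def mirror(source: Path, backup_dir: Path) -> Path:
    backup_dir.mkdir(parents=True, exist_ok=True)
    target = backup_dir / source.name
    shutil.copy2(source, target)            # copy the file and its metadata
    assert sha256(source) == sha256(target), "backup does not match original"
    return target

# e.g. mirror(Path("~/GoogleDocs/talk.txt").expanduser(), Path("~/LocalBackup").expanduser())
```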

I’m telling you things I shouldn’t have to tell you. I’m telling you things that you already know. But the other, this laziness, it’s built into our culture. Socially, we have two states of being: community and crowd. A community can collaborate to bring a new mobile carrier into being. A crowd can only gripe about their carrier. And now, as the strict lines between community and crowd get increasingly confused because of the upswing in hyperconnectivity, we behave like crowds when we really ought to be organizing like a community.

And this, at last, is the other thing: the message I really want to leave you with. You people, here in this auditorium today, you are the masters of the world. Not your bosses, not your shareholders, not your users. You. You folks, right here and right now. The keys to the kingdom of hyperconnectivity have been given to you. You can contour, shape and control that chaotic meeting point between community and crowd. That is what you do every time you craft an interface, or write a script. Your work helps people self-organize. Your work can engage us at our laziest, and turn us into happy worker bees. It can be done. Wikipedia has shown the way.

And now, as everything hierarchical and well-ordered dissolves into the grey goo which is the other thing, you have to ask yourself, “Who does this serve?”

At the end of the day, you’re answerable to yourself. No one else is going to do the heavy lifting for you. So when you think up an idea or dream up a design, consider this: Will it help people think for themselves? Will it help people meet their own needs? Or will it simply continue to infantilize us, until we become a planet of dummy-spitting, whinging wankers?

It’s a question I ask myself, too, a question that’s shaping the decisions I make for myself. I want to make things that empower people, so I’ve decided to take some time to work with Andy Coffey, and re-think the book for the 21st century. Yes, that sounds ridiculous and ambitious and quixotic, but it’s also a development whose time is long overdue. If it succeeds at all, we will provide a publishing platform for people to share their long-form ideas. Everything about it will be open source and freely available to use, to copy, and to hack, because I already know that my community is smarter than I am.

And it’s a question I have answered for myself in another way. This is my third annual appearance before you at Web Directions South. It will be the last time for some time. You people are my community: where I knew none of you back in 2006, I consider many of you friends in 2008. Yet, when I talk to you like this, I get the uncomfortable feeling that my community has become a crowd. So, for the next few years, let’s have someone else do the closing keynote. I want to be with my peeps, in the audience, and on the Twitter backchannel, taking the piss and trading ideas.

The future – for all of us – is the battle over the boundary between the community and the crowd. I am choosing to embrace the community. It seems the right thing to do. And as I walk off-stage here, this afternoon, I want you to remember that each of you holds the keys to the kingdom. Our community is yours to shape as you will. Everything that you do is translated into how we operate as a culture, as a society, as a civilization. It can be a coming together, or it can be a breaking apart. And it’s up to you.

Not that there’s any pressure.

Everywhere

I.

Sydney looks very little different from the city of Gough Whitlam’s day. Although almost forty years have passed, we see most of the same concrete monstrosities at the Big End of town, the same terrace houses in Surry Hills and Paddington, the same mile after mile of brick dwellings in the outer suburbs. Sydney has grown a bit around the edges, bumping up against the natural frontiers of our national parks, but, for a time-traveler, most things would appear almost exactly the same.

That said, the life of the city is completely different. This is not because a different generation of Australians, from all corners of the world, inhabits the city. Rather, the city has acquired a rich inner life, an interiority which, though invisible to the eye, has become entirely pervasive, and completely dominates our perceptions. We walk the streets of the city, but we swim through an invisible ether of information. Just a decade ago we might have been said to have jumped through puddles of data, hopping from one to another as a five-year-old might in a summer rainstorm. But the levels have constantly risen, in a curious echo of global warming, until, today, we must swim hard to stay afloat.

The individuals in our present-day Sydney stride the streets with divided attention, one eye scanning the scene before them, the other almost invariably on a mobile phone: sending a text, returning a call, using the GPS satellites to locate an address. Where, four decades ago, we might have kept a wary eye on passers-by, today we focus our attention into the palms of our hands, playing with our toys. The least significant of these toys are the stand-alone entertainment devices: the iPods and their ilk, which provide a continuous soundtrack for our lives, and which insulate us from the undesired interruptions of the city. These are pleasant, but unimportant.

The devices which allow us to peer into and sail the etheric sea of data which surrounds us, these are the important toys. It’s already become an accepted fact that a man leaves the house with three things in his possession: his wallet, his keys, and his mobile. I have a particular pat-down I practice as the door to my flat closes behind me, a ritual of reassurance that tells me that yes, I am truly ready for the world. This behavioral transformation was already well underway when I first visited Sydney in 1997, and learned, from my friends’ actions, that mobile phones acted as a social lubricant. Dates could be made, rescheduled, or broken on the fly, effortlessly, without the painful social costs associated with standing someone up.

This was not a unique moment; it was simply the first in an ever-increasing series of transformations of human behavior, as the social accelerator of continuous communication became a broadly-accepted feature of civilization. The transition to frictionless social intercourse was quickly followed by a series of innovations which removed much of the friction from business and government. As individuals we must work with institutions and bureaucracies, but we have more ways to reach into them – and they, into us – than ever before. Businesses, in particular, realized that they could achieve both productivity gains and cost savings by leveraging the new facilities of communication. This relationship between commerce and the consumer produced an accelerating set of feedbacks which translated the very physical world of commerce into an enormous virtual edifice, one which sought every possible advantage of virtualization, striving to reach its customers through every conceivable mechanism.

Now, as we head into the winter of 2008, we live in a world where a seemingly stable physical environment is entirely overlaid and overweighed by a virtual world of connection and communication. The physical world has, in large part, lost its significance. It’s not that we’ve turned away from the physical world, but rather that the meaning of the physical world is now derived from our interactions within the virtual world. The conversations we have, among ourselves and with the institutions which serve us, frame the world around us. A bank is no longer an imposing edifice with marble columns, but an EFTPOS swipe or a statement displayed in a web browser. The city is no longer streets and buildings, but flows of people and information, each invisibly connected through pervasive wireless networks.

It is already a wireless world. That battle was fought and won years ago; truly, before anyone knew the battle had been joined, it was effectively over. We are as wedded to this world as to the physical world – perhaps even more so. The frontlines of development no longer concern themselves with the deployment of wireless communications, but rather with their increasing utility.

II.

Utility has a value. How much is it worth to me to be able to tell a mate that I’m delayed in traffic and can’t make dinner on time? Is it worth a fifty-cent voice call, or a twenty-five cent text (which may go through several iterations, and, in the end, cost me more)? Clearly it is; we are willing to pay a steep price to keep our social relationships on an even keel. What about our business relationships? How much is it worth to be able to take a look at the sales brochure for a store before we enter it? How much is it worth to find it on a map, or get directions from where we are? How much is it worth to send an absolutely vital email to a business client?

These are the economics that have ruled the tariff structures of wireless communications, both here in Australia and in the rest of the world. Bandwidth, commonly thought of as a limited resource, must be paid for. Infrastructure must be paid for. Shareholders must receive a fair return on their investments. All of these points, while valid, do not tell the whole story. The tariff structure acts as a barrier to communication, a barrier which can only be crossed if the perceived value is greater than the costs incurred. In the situations outlined above, this is often the case, and is thus the basis for the wireless telecoms industry. But there are other economics at work, and these economics dictate a revision to this monolithic ordering of business affairs.

Chris Anderson, the editor of WIRED magazine, has been writing a series of essays in preparation for the publication of his next book, Free: Why $0.00 is the Future of Business. In his first essay – published in WIRED magazine, of course – Anderson takes a look at Moore’s Law, which promises that the cost of a transistor will halve roughly every eighteen months, a rule that has held since Intel co-founder Gordon Moore proposed it back in 1965. Somewhere around 1973, Anderson notes, Carver Mead, the father of VLSI, realized that individual transistors were becoming so small and so cheap as to be essentially free. Yes, in aggregates of hundreds of millions, transistors cost a few tens of dollars. But at the level of single circuits, these transistors are free, and can be “wasted” to provide some additional functionality at essentially zero additional cost. When, toward the end of the 1970s, the semiconductor industry embraced Mead’s design methodology, the silicon revolution began in earnest, powered by ever-cheaper transistors that could, as far as the designer was concerned, be considered entirely expendable.
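The arithmetic of that halving curve is worth seeing once. Here is a rough sketch in Python; the 1965 starting price is arbitrary, and the eighteen-month halving period is simply the figure quoted above, so only the shape of the curve should be taken seriously.

```python
# Back-of-the-envelope: if the cost of a transistor halves every eighteen
# months, what happens over a few decades? The 1965 starting cost is
# arbitrary; only the shape of the curve matters.
def transistor_cost(year: float, start_year: int = 1965, start_cost: float = 1.0) -> float:
    halvings = (year - start_year) / 1.5        # one halving per 18 months
    return start_cost * 0.5 ** halvings

for year in (1965, 1973, 1980, 1990, 2000, 2008):
    print(f"{year}: {transistor_cost(year):.2e} of the 1965 price")
```

By 1973 the price has already dropped to a few percent of its 1965 level, which is roughly where Mead's "essentially free" intuition comes from.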

Google has followed a similar approach to profitability. Pouring hundreds of millions of dollars into a distributed, networked architecture which crawls and indexes the Web, Google provides its search engine for free, in the now-substantiated belief that something made freely available can still generate a very decent profit. Google designed its own, cheap computers, its own, cheap operating system, and fit these into its own, expensive data centers, linked together with relatively inexpensive bandwidth. Yahoo! and Microsoft – and Baidu and Facebook and MySpace – have followed similar paths to profitability. Make it free, and make money.

This seems counterintuitive, but herein is the difference between the physical and virtual worlds; the virtual world, insubstantial and pervasive, has its own economies of scale, which function very differently from the physical world. In the virtual world, the more a resource is shared, the more valuable it becomes, so ubiquity is the pathway to profitability.

We do not think of bandwidth as a virtual resource, one that can simply be burned. In Australia, we think of bandwidth as being an expensive and scarce resource. This is not true, and has never been particularly true. Over the time I’ve lived in this country (four and a half years) I’ve paid the same fixed amount for my internet bandwidth, yet today I have roughly six times the bandwidth, and seven times the download cap. Bandwidth is following the same curve as the transistor, because the cost of bandwidth is directly correlated to the cost of transistors.
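Run the same arithmetic on the figures I just quoted – a flat monthly bill, roughly six times the bandwidth after four and a half years – and the curve becomes obvious. The dollar amount below is a placeholder; only the ratios matter.

```python
# The same arithmetic applied to the figures above: a flat monthly bill,
# roughly six times the bandwidth after four and a half years. The $60 bill
# is a placeholder; only the ratio matters.
bill = 60.0                           # dollars per month, unchanged over the period
speed_then, speed_now = 1.0, 6.0      # relative bandwidth, then and now
years = 4.5

price_then = bill / speed_then
price_now = bill / speed_now
annual_decline = 1 - (price_now / price_then) ** (1 / years)

print(f"effective price per unit of bandwidth fell {100 * (1 - price_now / price_then):.0f}%")
print(f"that is roughly {100 * annual_decline:.0f}% cheaper every year")
```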

Last year I upgraded to a 3G mobile handset, the Nokia N95, and immediately moved from GPRS speeds to HSDPA speeds – roughly 100x faster – but I am still spending the same amount for my mobile, on a monthly basis. I know that some Australian telcos see Vodafone’s tariff policy as sheer lunacy. But I reckon that Vodafone understands the economics of bandwidth. Vodafone understands that bandwidth is becoming free; the only way they can continue to benefit from my custom is if they continuously upgrade my service – just like my ISP.

Telco tariffs are predicated on the basic idea that spectrum is a limited resource. But spectrum is not a limited resource. Allocations are limited, yes, and licensed from the regulatory authorities for many millions of dollars a year. But spectrum itself is not in any wise limited. The 2.4 GHz band is proof positive of this. Just that tiny slice of spectrum is responsible for more revenue than any other slice of spectrum, outside of the GSM and 3G bands. Why is this? Because the 2.4 GHz band is unlicensed, engineers and designers have had to teach their varied devices to play well with one another, even in hostile environments. I can use a Bluetooth headset right next to my WiFi-enabled MacBook and never experience any problems, because these devices use spread-spectrum and frequency-hopping techniques to behave politely. My N95 can use WiFi and Bluetooth networking simultaneously – yet there’s never interference.
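A back-of-the-envelope calculation shows why this politeness works: classic Bluetooth hops across 79 one-megahertz channels, sixteen hundred times a second, while an 802.11b/g WiFi channel occupies roughly 22 MHz of the band, so even a collision touches only a sliver of airtime, and adaptive frequency hopping shrinks it further. A rough sketch:

```python
# Back-of-the-envelope on why Bluetooth and WiFi coexist in the 2.4 GHz band.
# Classic Bluetooth hops across 79 one-megahertz channels, 1600 times a second;
# an 802.11b/g WiFi channel occupies roughly 22 MHz of that band.
bluetooth_channels = 79          # hop channels, 1 MHz each
wifi_channel_width = 22          # MHz occupied by one WiFi channel
hops_per_second = 1600

collision_fraction = wifi_channel_width / bluetooth_channels
print(f"~{100 * collision_fraction:.0f}% of hops land inside the WiFi channel")
print(f"and each such collision lasts only 1/{hops_per_second} of a second")
# Adaptive frequency hopping (Bluetooth 1.2 and later) goes further still,
# steering hops away from channels it hears WiFi already using.
```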

Unlicensed spectrum is not anarchy. It is an invitation to innovate. It is an open door to the creative engines of the economy. It is the most vital part of the entire wireless world, because it is the corner of the wireless world where bandwidth already is free.

III.

And so back to the city outside the convention center walls, crowded with four million people, each eagerly engaged in their own acts of communication. Yet these moments are bounded by an awareness of the costs of this communication. These tariffs act as a fundamental brake on the productivity of the Australian economy. They fetter the means of production. And so they must go.

I do not mean that we should nationalize the telcos – we’ve already been there – but rather that we must engage in creating a new generation of untariffed networks. The technology is already in place. We have cheap and durable mesh routers, such as the Open-Mesh and the Meraki, which can be dropped almost anywhere, powered by sun or by mains, and which can create a network spanning nearly a quarter of a square kilometer. We can connect these access points to our wired networks, and share some small portion of our ever-increasing bandwidth wealth with the public at large, so that no matter where they are in this city – or in this nation – they can access the wireless world. And we can secure these networks to prevent fraud and abuse.
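The coverage arithmetic is encouraging. Taking the quarter-of-a-square-kilometre figure above at face value, and padding it because real suburbs have walls and terrain, a handful of routers blankets an inner suburb. The suburb areas below are placeholders, not measurements.

```python
# Rough coverage arithmetic using the figure above: one mesh router covering
# about a quarter of a square kilometre. The suburb areas are placeholders;
# real coverage depends on terrain, walls and antenna placement.
import math

def routers_needed(area_km2: float, coverage_per_node_km2: float = 0.25,
                   overlap: float = 1.3) -> int:
    """Estimate nodes required, padding by ~30% so cells overlap enough to mesh."""
    return math.ceil(area_km2 / coverage_per_node_km2 * overlap)

for suburb, area in [("a 1 km^2 inner suburb", 1.0), ("a 5 km^2 suburb", 5.0)]:
    print(f"{suburb}: ~{routers_needed(area)} routers")
```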

Such systems already exist. In the past eight months, Meraki has given their $50 WiFi mesh routers to any San Franciscan willing to donate some of their ever-cheaper bandwidth to a freely available municipal network. When I started tracking the network, it had barely five thousand users. Today, it has over seventy thousand – that’s about one-tenth of the city. San Francisco is a city of hills and low buildings – it’s hard to get real reach from a wireless signal. In Sydney, Melbourne, Adelaide, Brisbane and Perth – which are all built on flats – a little signal goes a long, long way. From my flat in Surry Hills I can cover my entire neighborhood. If another of my neighbors decides to contribute, we can create a mesh which reaches further into the neighborhood, where it can link up with yet another volunteer, and so on, and so on, until the entirety of my suburb is bathed in freely available wireless connectivity.

While this may sound like a noble idea, that is not the reason it is a good idea. Free wireless is a good idea because it enables an entirely new level of services which, because of tariffs, would not otherwise make economic sense. These services trade in information that has value – perhaps great value, to some – but no direct economic value. This is where the true strength of free wireless shows itself: it enables a broad participation in the electronic life of the city by all participants – individuals, businesses, and institutions – without the restraint of economic trade-offs.

This unlicensed participation has no form as yet, because we haven’t deployed the free wireless network beyond a few select spots in Australia’s cities. But, once the network has been deployed, some enterprising person will develop the “killer app” for this network, something so unexpected, yet so useful, that it immediately becomes apparent that the network is an incredibly valuable resource, one which will improve human connectivity, business productivity, and the delivery of services. Something that, once established, will be seen as an absolutely necessary feature in the life of the city.

Businessmen hate to deal in intangibles, or wild-eyed “science projects.” So instead, let me present you with a fait accompli: this is happening. We’re reaching a critical mass of WiFi devices in our dense urban cores. Translating these devices into nodes within city-spanning mesh networks requires only a simple software upgrade. It doesn’t require a hardware build-out. The transformation, when it comes, will happen suddenly and completely, and it will change the way we view the city.

The question then, is simple: are you going to wait for this day, or are you going to help it along? It could be slowed down, fettered by lawsuits and regulation. Or it could be accelerated into inevitability. We’re at a transition point now, between the tariffed networks we have lived with for the last decade, and the new, free networks, which are organically popping up in Australia and throughout the world. Both networks will co-exist; a free network actually increases the utility of a tariffed mobile network.

So, do you want to fight it? Or do you want to switch it on?