The Soul of Web 2.0

Introduction: In The Beginning

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus Pagemaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful.  But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’.  Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century.  Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web.  Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real.  I was one of them.  From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu.  This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web.  It wasn’t as functional as Xanadu – copyright management was a solved problem with Xanadu, whereas on the Web it continues to bedevil us – and in Xanadu, links were two-way affairs; you could follow the destination of a link back to its source.  But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died.  The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released.  ‘The Perfect is the Enemy of the Good’, and nowhere is that clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s.  When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits per second to 56 kilobits per second.  That’s the line for all of the traffic heading from one coast to the other.  I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, a million-fold increase.  And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand.  And I wasn’t impressed.  In July 1993 very little content existed for the Web – just a handful of sites, mostly academic.  Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense.  I walked away from the computer that July afternoon wanting more.  Hypertext systems I’d seen before.  What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy.  Instead of a handful of sites, there were now hundreds.  There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site in the list.  By Friday evening I was finished.  I had surfed the entire Web.  It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993.  Then things began to explode.

From October on I became a Web evangelist.  My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic.  That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities.  As someone familiar walked in the door at the Salon, I walked up to them and took them over to my computer.  “What’s something you’re interested in?” I’d ask.  They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica” and I’d go to the newly-created category index of the Web, known as Yahoo!, and still running out of a small lab on the Stanford University campus, type in their interest, and up would come at least a few hits.  I’d click on one, watch the page load, and let them read.  “Wow!” they’d say.  “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos.  All I did was hook people by their own interests.  This, in January 1994 in San Francisco, is what would happen throughout the world in January 1995 and January 1996, and is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented.  The individual, with their needs, their passions, their opinions, their desires and their goals is always paramount.  We tend to forget this, or overlook it, or just plain ignore it.  We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate.  It’s not that we should ignore these considerations, but they are always secondary.  The Web is a ground for being.  Individuals do not present themselves as receptacles to be filled.  They are souls looking to be fulfilled.  This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web.  I’ll now break some of these lessons down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication.  I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late night missives from insomniac friends, that sort of thing.  As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared.  The sharing instinct is innate and immediate.  We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it.  We’ve always known this; it’s part of being a human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another.  It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby.  Everyone carries that hundred and fifty around inside of them.  Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with.  It’s automatic, requires no thought.  We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing.  Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten.  We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us.  It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does that work with creations that live on the Web, or within similarly closed environments?  We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter.  You have to do more than request sharing.  You have to think through the entire goal of sharing, from the user’s perspective.  Are they sharing this because it’s interesting?  Are they sharing this because they want company?  Are they sharing this because it’s a competition or a contest or collaborative?  Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project.  What is it about the design of your work that excites them to share it with others?  Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential?  In other words, is there space only for one, or is there room to spread the word?  Why would anyone want to share your work?  You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question.  How will your work be shared?

Your works do not exist in isolation.  They are part of a continuum of other works.  Where does your work fit into that continuum?  How do the instructor and student approach that work?  Is it a top-down mandate?  Or is it something that filters up from below as word-of-mouth spreads?  How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected.  Is it simply via email – do all the students have email addresses?  Do they know the email addresses of their friends?  Or do you want your work shared via SMS?  A QR code, perhaps?  Or Facebook or Twitter or, well, who knows?  And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today.  It becomes painfully obvious when it’s been overlooked.  For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend.  There was simply no way to do that.  (I don’t know if this has changed recently.)  That made the iPad version of The Australian significantly less functional than its website version – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes.  The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts.  Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd.  Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this.  Everyone’s there, but no one is wholly aware of anyone else’s presence.  You might see an edit to a page, or a new post on the discussion for a particular topic, but that’s as close as people come to one another.  Most of the connecting for the Wikipedians – the folks who, behind the scenes, make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose.  These are the social networks: Facebook, MySpace, LinkedIn, and so on.  In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication.  But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless.  Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting.  There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths.  Where you can poll your friends on Facebook, on Twitter you can poll a planet.  How do I solve this problem?  Where should I eat dinner tonight?  What’s going on over there?  These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles.  It is not Facebook-versus-Twitter; it is not tight connections versus loose connections.  It’s a bit of both.  Where does your work benefit from a tight collective of connected individuals?  Is it some sort of group problem-solving?  A creative activity that really comes into its own when a whole band of people play together?  Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms?  When you find yourself constantly thinking of your friends as you work, that’s the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog, or playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’ – that’s the kind of task which benefits from loose connectivity.  Not every project will need both kinds of connecting, but almost every one will benefit from one or the other.  We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed.  (We are also more likely to reinforce each others’ prejudices and preconceptions, but that’s another matter of longstanding which technology cannot help but amplify.)  Life is meaningful because we, together, give it meaning.  Life is bearable because we, together, bear the load for one another.  Human life is human connection.

The Web today is all about connecting.  That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it.  So how do your projects allow your users to connect?  Does your work leave them alone, helpless, friendless, and lonely?  Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic?  Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica.  That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun.  It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute.  There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy.  (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.)  By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of all of what its contributors have to offer.  For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook.  This is less because of Wikipedia-as-a-resource than it is because we all share a sense of pride-in-ownership of Wikipedia.  Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles.  Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us.  This is a powerful logic, an attraction which transcends the rational.  People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine that a time will come when Wikipedia is complete.  If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia.  Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than in 2005.  With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights.  Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth.  It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought.  It will mean that we have captured the better part of human knowledge in a form accessible to all.  That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’.  It is a work-in-progress.  Google understands this and releases interminable beta versions of every product.  More than this, it means that nothing needs to offer all the answers.  I would suggest that nothing should offer all the answers.  Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you.  It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility.  There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done.  This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system.  The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up.  TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments.  RateMyProfessors.com is the holy terror of the academy in the United States.  Each of these websites has had to design systems which allow for users to self-regulate peer contributions.  In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags it for later moderation.  Wikipedia promulgated a directive that strongly encouraged contributors to provide a footnote linking to supporting material.  TripAdvisor gives anonymous reviewers a lower ranking.  eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade.  Each of these are social solutions to social problems.
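The ‘report this post’ pattern is simple enough to sketch in a few lines of code.  This is a hypothetical illustration (the class, method names, and threshold are all invented for the example), but it shows the essential shape of these social solutions: the crowd supplies the flags, and a human moderator makes the final call.

```python
# Hypothetical sketch of a 'report this post' self-regulation mechanism.
# The names (ModerationQueue, FLAG_THRESHOLD) are invented for illustration.

FLAG_THRESHOLD = 3  # distinct users who must flag a post before it is hidden


class ModerationQueue:
    def __init__(self):
        self.flags = {}      # post_id -> set of user_ids who flagged it
        self.hidden = set()  # posts hidden, pending human review

    def report(self, post_id, user_id):
        """Record a flag; each user counts only once per post."""
        self.flags.setdefault(post_id, set()).add(user_id)
        if len(self.flags[post_id]) >= FLAG_THRESHOLD:
            self.hidden.add(post_id)

    def is_visible(self, post_id):
        return post_id not in self.hidden

    def moderate(self, post_id, keep):
        """A human moderator makes the final decision on a flagged post."""
        self.flags.pop(post_id, None)
        self.hidden.discard(post_id)
        return keep  # the social system only escalates; it doesn't judge
```

Note the design choice: the software never decides whether a post is actually bad, it only escalates when enough independent flags accumulate, which is exactly the division of labor between crowd and moderator described above.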

Web2.0 is not a technology.  It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil.  It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention.  Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a Web2.0 concept that should guide you in your design work is iteration.  Nothing is ever complete, nor ever perfect.  The perfect is the enemy of the good, so if you wait for perfection, you will never release.  Instead, watch your users, see if they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work.  In their more uncharitable moments, do they abuse the freedoms you have given them?  If so, how can you redesign your work, and ‘nudge’ them into better behavior?  It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem.  And although Moses came down from Mount Sinai with all ten commandments, you cannot and should not expect to get it right on a first pass.  Instead, release, observe, adapt, and re-release.  All releases are soft releases, everything is provisional, and nothing is quite perfect.  That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter.  Although they seem to be similar, they couldn’t be more different.  Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself.  If you want to have access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page.  Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut down your application if they don’t like it, or perceive it as somehow competitive with Facebook.  Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach.  From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users.  Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data.  Twitter provided very clear (and remarkably straightforward) instruction on how to access their data, and threw the gates open wide.

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks.  People have developed computer programs that send Tweets when the program is about to crash, created vast art projects which allow the public to participate from anywhere around the world, or even a little belt worn by a pregnant woman which sends out a Tweet every time the baby kicks!  It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself.  Twitter has become a building block: when you write a program which needs to send a message, you use Twitter.  Facebook isn’t a building block.  It’s a monolith.

How do you build for openness?  Consider: another position the user might occupy is someone trying to use your work as a building block within their own project.  Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again?  Or is it opaque, seamless, and closed?  What about the data you collect, data the user has generated?  Where does that live?  Can it be exported and put to work in another application, or on another website?  Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful).  The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them.  If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
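As one concrete (and entirely hypothetical) illustration of that point, here is a minimal sketch of publishing user-contributed items as an RSS 2.0 feed, using nothing but Python’s standard library.  The feed title, URLs, and items are invented for the example; a real service would draw them from its own data, but the principle is the same: once your data is exposed in a standard format, anyone can re-use it.

```python
# Minimal sketch: exposing user-contributed items as an RSS 2.0 feed
# using only the standard library. All feed data here is invented.
from xml.etree import ElementTree as ET


def build_rss(title, link, description, items):
    """Build an RSS 2.0 document from a list of (title, link) pairs."""
    rss = ET.Element('rss', version='2.0')
    channel = ET.SubElement(rss, 'channel')
    ET.SubElement(channel, 'title').text = title
    ET.SubElement(channel, 'link').text = link
    ET.SubElement(channel, 'description').text = description
    for item_title, item_link in items:
        item = ET.SubElement(channel, 'item')
        ET.SubElement(item, 'title').text = item_title
        ET.SubElement(item, 'link').text = item_link
    return ET.tostring(rss, encoding='unicode')


feed = build_rss(
    'Class Contributions',
    'http://example.org/contributions',
    'Work our students have shared',
    [('A student project', 'http://example.org/contributions/1')],
)
```

Because the output is plain RSS, any feed reader, aggregator, or third-party site can consume it without asking your permission or learning your internal data model: you have become a brick rather than a brick wall.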

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’  You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user.  These are not precisely the same Web2.0 domains others might identify.  That’s because Web2.0 has become a very ill-defined term.  It can mean whatever we want it to mean.  But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another.  In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create.  We need to make room for them.  If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

Inflection Points

I: The Universal Solvent

I have to admit that I am in awe of iTunes University. It’s just amazing that so many well-respected universities – Stanford, MIT, Yale, and Uni Melbourne – are willing to put their crown jewels – their lectures – online for everyone to download. It’s outstanding when even one school provides a wealth of material, but as other schools provide their own material, then we get to see some of the virtues of crowdsourcing. First, you have a virtuous cycle: as more material is shared, more material will be made available to share. After the virtuous cycle gets going, it’s all about a flight to quality.

When you have half a dozen, or a hundred, lectures on calculus, which one do you choose? The one featuring the best lecturer with the best presentation skills, the best examples, and the best math jokes – of course. This is my only complaint with iTunes University – you can’t rate the various lectures on offer. You can know which ones have been downloaded most often, but that’s not precisely the same thing as which calculus seminar or which sociology lecture is the best. So as much as I love iTunes University, I see it as halfway there. Perhaps Apple didn’t want to turn iTunes U into a popularity contest, but, without that vital bit of feedback, it’s nearly impossible for us to winnow out the wheat from the educational chaff.

This is something that has to happen inside the system; it could happen across a thousand educational blogs spread out across the Web, but then it’s too diffuse to be really helpful. The reviews have to be coordinated and collated – just as with RateMyProfessors.com.

Say, that’s an interesting point. Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University? If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings. And as iTunes University gets bigger and bigger, RateMyLectures.com becomes ever more important, the ultimate guide to the ultimate source of educational multimedia on the Internet. One needs the other to be wholly useful; without ratings iTunes U is just an undifferentiated pile of possibilities. But with ratings, iTunes U becomes a highly focused and effective tool for digital education.

Now let’s cast our minds ahead a few semesters: iTunes U is bigger and better than ever, and RateMyLectures.com has benefited from the hundreds of thousands of contributed reviews. Those reviews extend beyond the content in iTunes U, out into YouTube and Google Video and Vimeo and Blip.tv and wherever people are creating lectures and putting them online. Now anyone can come by the site and discover the absolute best lecture on almost any subject they care to research. The net is now cast globally; I can search for the best lecture on Earth, so long as it’s been captured and uploaded somewhere, and someone’s rated it on RateMyLectures.com.

All of a sudden we’ve imploded the boundaries of the classroom. The lecture can come from the US, or the UK, or Canada, or New Zealand, or any other country. Location doesn’t matter – only its rating as ‘best’ matters. This means that every student, every time they sit down at a computer, already has, or soon will have, the absolute best lectures available, globally. That’s just a mind-blowing fact. It grows very naturally out of our desire to share and our desire to share ratings about what we have shared. Nothing extraordinary needed to happen to produce this entirely extraordinary state of affairs.

The network is acting like a universal solvent, dissolving all of the boundaries that have kept things separate. It’s not just dissolving the boundaries of distance – though it is doing that – it’s also dissolving the boundaries of preference. Although there will always be differences in taste and delivery, some instructors are simply better lecturers – in better command of their material – than others. Those instructors will rise to the top. Just as RateMyProfessors.com has created a global market for the lecturers with the highest ratings, RateMyLectures.com will create a global market for the best performances, the best material, the best lessons.

That RateMyLectures.com is only a hypothetical shouldn’t put you off. Part of what’s happening at this inflection point is that we’re all collectively learning how to harness the network for intelligence augmentation – Engelbart’s final triumph. All we need do is identify an area which could benefit from knowledge sharing and, sooner rather than later, someone will come along with a solution. I’d actually be very surprised if a service a lot like RateMyLectures.com doesn’t already exist. It may be small and unimpressive now. But Wikipedia was once small and unimpressive. If it’s useful, it will likely grow large enough to be successful.

Of course, lectures alone do not an education make. Lectures are necessary but are only one part of the educational process. Mentoring and problem solving and answering questions: all of these take place in the very real, very physical classroom. The best lectures in the world are only part of the story. The network is also transforming the classroom, from inside out, melting it down, and forging it into something that looks quite a bit different from the classroom we’ve grown familiar with over the last 50 years.

II: Fluid Dynamics

If we take the examples of RateMyProfessors.com and RateMyLectures.com and push them out a little bit, we can see the shape of things to come. Spearheaded by Stanford University and the Massachusetts Institute of Technology, both of which have placed their entire set of lectures online through iTunes University, these educational institutions assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, but it discounts the possibility that some individuals or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open” school, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.

In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In an age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them to make sense of it, contextualizing and informing their understanding, even as the students continue to work their way through the ever-growing set of information. The instructor can not know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. The instructors facilitate and mentor, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.

The administration has gone, the instructor’s role has evolved, now what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If it can happen entirely online, that will be the classroom. If it requires substantial presence with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode, vanishing online, and explode: the world will become the classroom.

This, then, can already be predicted from current trends; as the network begins to destabilize the institutional hierarchies in education, everything else becomes inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by these students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.

III: Digital Citizenship

Obviously, much of what I’ve described here in the “melting down” of the educational process applies first and foremost to university students. That’s where most of the activity is taking place. But I would argue that it only begins with university students. From there – just like Facebook – it spreads across the gap between tertiary and secondary education, and into the high schools and colleges.

This is significant and interesting because it’s at this point that we, within Australia, run headlong into the Government’s plan to provide laptops for all year 9 through year 12 students. Some schools will start earlier; there’s a general consensus among educators that year 7 is the earliest time a student should be trusted to behave responsibly with their “own” computer. Either way, the students will be fully equipped and capable of using all of the tools at hand to manage their own education.

But will they? Some of this is a simple question of discipline: will the students be disciplined enough to take an ever-more-active role in the co-production of their education? As ever, the question is neither black nor white; some students will demonstrate the qualities of discipline needed to allow them to assume responsibility for their education, while others will not.

But, somewhere along here, there’s the presumption of some magical moment during the secondary school years, when the student suddenly learns how to behave online. And we already know this isn’t happening. We see too many incidents where students make mistakes, behaving badly without fully understanding that the whole world really is watching.

In the early part of this year I did a speaking tour with the Australian Council of Educational Researchers; during the tour I did a lot of listening. One thing I heard loud and clear from the educators is that giving a year 7 student a laptop is the functional equivalent of giving them a loaded gun. And we shouldn’t be surprised, when we do this, when there are a few accidental – or volitional – shootings.

I mentioned this in a talk to TAFE educators last week, and one of the attendees suggested that we needed to teach “Digital Citizenship”. I’d never heard the phrase before, but I’ve taken quite a liking to it. Of course, by the time a student gets to TAFE, the damage is done. We shouldn’t start talking about digital citizenship in TAFE. We should be talking about it from the first days of secondary education. And it’s not something that should be confined to the school: parents are on the hook for this, too. Even when the parents are not digitally literate, they can impart the moral and ethical lessons of good behavior to their children, lessons which will transfer to online behavior.

Make no mistake, without a firm grounding in digital citizenship, a secondary student can’t hope to make sense of the incredibly rich and impossibly distracting world afforded by the network. Unless we turn down the internet connection – which always seems like the first option taken by administrators – students will find themselves overwhelmed. That’s not surprising: we’ve taught them few skills to help them harness the incredible wealth available. In part that’s because we’re only just learning those skills ourselves. But in part it’s because we would have to relinquish control. We’re reluctant to do that. A course in digital citizenship would help both students and teachers feel more at ease with one another when confronted by the noise online.

Make no mistake, this inflection point in education is inevitably going to cross the gap between tertiary and secondary schools and students. Students will be able to do for themselves in ways that were never possible before. None of this means that the teacher or even the administrator has necessarily become obsolete. But the secondary school of the mid-21st century may look a lot more like a website than a campus. The classroom will have a fluid look, driven by the teacher, the students and the subject material.

Have we prepared students for this world? Have we given them the ability to make wise decisions about their own education? Or are we like those university administrators who mutter about how RateMyProfessors.com has ruined all their carefully-laid plans? The world where students were simply the passive consumers of an educational product is coming to an end. There are other products out there, clamoring for attention – you can thank Apple for that. And YouTube.

Once we get through this inflection point in the digital revolution in education, we arrive in a landscape that’s literally mind-blowing. We will each have access to educational resources far beyond anything on offer at any other time in human history. The dream of life-long learning will be simply a few clicks away for most of the billion people on the Internet, and many of the four billion who use mobiles. It will not be an easy transition, nor will it be perfect on the other side. But it will be incredible, a validation of everything Douglas Engelbart demonstrated forty years ago, and an opportunity to create a truly global educational culture, focused on excellence, and dedicated to serving all students, everywhere.

This, That, and the Other

I. THIS.

If a picture paints a thousand words, you’ve just absorbed a million, the equivalent of one-and-a-half Bibles. That’s the way it is, these days. Nothing is small, nothing discrete, nothing bite-sized. Instead, we get the fire hose, 24 x 7, a world in which connection and community have become so colonized by intensity and amplification that nearly nothing feels average anymore.

Is this what we wanted? It’s become difficult to remember the before-time, how it was prior to an era of hyperconnectivity. We’ve spent the last fifteen years working out the most excellent ways to establish, strengthen and multiply the connections between ourselves. The job is nearly done, but now, as we put down our tools and pause to catch our breath, here comes the question we’ve dreaded all along…

Why. Why this?

I gave this question no thought at all as I blithely added friends to Twitter, shot past the limits of Dunbar’s Number, through the ridiculous, and then outward, approaching the sheer insanity of 1200 so-called-“friends” whose tweets now scroll by so quickly that I can’t focus on any one saying any thing because this motion blur is such that by the time I think to answer in reply, the tweet in question has scrolled off the end of the world.

This is ludicrous, and can not continue. But this is vital and can not be forgotten. And this is the paradox of the first decade of the 21st century: what we want – what we think we need – is making us crazy.

Some of this craziness is biological.

Eleven million years of evolution, back to Proconsul, the ancestor of all the hominids, have crafted us into quintessentially social creatures. We are human to the degree we are in relationship with our peers. We grew big forebrains, to hold banks of the chattering classes inside our own heads, so that we could engage these simulations of relationships in never-ending conversation. We never talk to ourselves, really. We engage these internal others in our thoughts, endlessly rehearsing and reliving all of the social moments which comprise the most memorable parts of life.

It’s crowded in there. It’s meant to be. And this has only made it worse.

No man is an island. Man is only man when he is part of a community. But we have limits. Homo Sapiens Sapiens spent two hundred thousand years exploring the resources afforded by a bit more than a liter of neural tissue. The brain has physical limits (we have to pass through the birth canal without killing our mothers) so our internal communities top out at Dunbar’s magic Number of 150, plus or minus a few.

Dunbar’s Number defines the crucial threshold between a community and a mob. Communities are made up of memorable and internalized individuals; mobs are unique in their lack of distinction. Communities can be held in one’s head, can be tended and soothed and encouraged and cajoled.

Four years ago, when I began my research into sharing and social networks, I asked a basic question: Will we find some way to transcend this biological limit, break free of the tyranny of cranial capacity, grow beyond the limits of Dunbar’s Number?

After all, we have the technology. We can hyperconnect in so many ways, through so many media, across the entire range of sensory modalities, it is as if the material world, which we have fashioned into our own image, wants nothing more than to boost our capacity for relationship.

And now we have two forces in opposition, both originating in the mind. Our old mind hews closely to the community and Dunbar’s Number. Our new mind seeks the power of the mob, and the amplification of numbers beyond imagination. This is the central paradox of the early 21st century, this is the rift which will never close. On one side we are civil, and civilized. On the other we are awesome, terrible, and terrifying. And everything we’ve done in the last fifteen years has simply pushed us closer to the abyss of the awesome.

We can not reasonably put down these new weapons of communication, even as they grind communities beneath them like so many old and brittle bones. We can not turn the dial of history backward. We are what we are, and already we have a good sense of what we are becoming. It may not be pretty – it may not even feel human – but this is things as they are.

When the historians of this age write their stories, a hundred years from now, they will talk about amplification as the defining feature of this entire era, the three hundred year span from industrial revolution to the emergence of the hyperconnected mob. In the beginning, the steam engine amplified the power of human muscle – making both human slavery and animal power redundant. In the end, our technologies of communication amplified our innate social capabilities, which eleven million years of natural selection have consistently selected for. Above and beyond all of our other natural gifts, those humans who communicate most effectively stand the greatest chance of passing their genes along to subsequent generations. It’s as simple as that. We talk our partners into bed, and always have.

The steam engine transformed the natural world into a largely artificial environment; the amplification of our muscles made us masters of the physical world. Now, the technologies of hyperconnectivity are translating the natural world, ruled by Dunbar’s Number, into the dominating influence of the maddening crowd.

We are not prepared for this. We have no biological defense mechanism. We are all going to have to get used to a constant state of being which resembles nothing so much as a stack overflow, a consistent social incontinence, as we struggle to retain some aspects of selfhood amidst the constantly eroding pressure of the hyperconnected mob.

Given this, and given that many of us here today are already in the midst of this, it seems to me that the most useful tool any of us could have, moving forward into this future, is a social contextualizer. This prosthesis – which might live in our mobiles, or our nettops, or our Bluetooth headsets – will fill our limited minds with the details of our social interactions.

This tool will make explicit that long, Jacob Marley-like train of lockboxes that are our interactions in the techno-social sphere. Thus, when I introduce myself to you for the first or the fifteen hundredth time, you can be instantly brought up to date on why I am relevant, why I matter. When all else gets stripped away, each relationship has a core of salience which can be captured (roughly), and served up every time we might meet.

I expect that this prosthesis will come along sooner rather than later, and that it will rival Google in importance. Google took too much data and made it roughly searchable. This prosthesis will take too much connectivity and make it roughly serviceable. Given that we are primarily social beings, I expect it to be a greater innovation, and more broadly disruptive.

And this prosthesis has precedents; at Xerox PARC they have been looking into a ‘human memory prosthesis’ for sufferers from senile dementia, a device which constantly jogs human memories as to task, place, and people. The world that we’re making for ourselves, every time we connect, is a place where we are all (in some relative sense) demented. Without this tool we will be entirely lost. We’re already slipping beneath the waves. We need this soon. We need this now.

I hope you’ll get inventive.

II. THAT.

Now that we have comfortably settled into the central paradox of our current era, with a world that is working through every available means to increase our connectivity, and a brain that is suddenly overloaded and sinking beneath the demands of the sum total of these connections, we need to ask that question: Exactly what is hyperconnectivity good for? What new thing does that bring us?

The easy answer is the obvious one: crowdsourcing. The action of a few million hyperconnected individuals resulted in a massive and massively influential work: Wikipedia. But the examples only begin there. They range much further afield.

Uni students have been sharing their unvarnished assessments of their instructors and lecturers. RateMyProfessors.com has become the bête noire of the academy, because researchers who can’t teach find they have no one signing up for their courses, while the best lecturers, with the highest ratings, suddenly find themselves swarmed with offers for better teaching positions at more prestigious universities. A simple and easily implemented system of crowdsourced reviews has carefully undone all of the work of the tenure boards of the academy.

It won’t be long until everything else follows. Restaurant reviews – that’s done. What about reviews of doctors? Lawyers? Indian chiefs? Politicians? ISPs? (Oh, wait, we have that with Whirlpool.) Anything you can think of. Anything you might need. All of it will have been so extensively reviewed by such a large mob that you will know nearly everything that can be known before you sign on that dotted line.

All of this means that every time we gather together in our hyperconnected mobs to crowdsource some particular task, we become better informed, we become more powerful. Which means it becomes more likely that the hyperconnected mob will come together again around some other task suited to crowdsourcing, and will become even more powerful. That system of positive feedbacks – which we are already quite in the midst of – is fashioning a new polity, a rewritten social contract, which is making the institutions of the 19th and 20th centuries – that is, the industrial era – seem as antiquated and quaint as the feudal systems which they replaced.

It is not that these institutions are dying, but rather, they now face worthy competitors. Democracy, as an example, works well in communities, but can fail epically when it scales to mobs. Crowdsourced knowledge requires a mob, but that knowledge, once it has been collected, can be shared within a community, to hyperempower that community. This tug-of-war between communities and crowds is setting all of our institutions, old and new, vibrating like taut strings.

We already have a name for this small-pieces-loosely-joined form of social organization: it’s known as anarcho-syndicalism. Anarcho-Syndicalism emerged from the labor movements that grew in numbers and power toward the end of the 19th century. Its basic idea is simply that people will choose to cooperate more often than they choose to compete, and this cooperation can form the basis for a social, political and economic contract wherein the people manage themselves.

A system with no hierarchy, no bosses, no secrets, no politics. (Well, maybe that last one is asking too much.) Anarcho-syndicalism takes as a given that all men are created equal, and that each therefore has a say in what they choose to do.

Somewhere back before Australia became a nation, anarcho-syndicalist trade unions like the Industrial Workers of the World (or, more commonly, the ‘Wobblies’) fought armies of mercenaries in the streets of the major industrial cities of the world, trying to get the upper hand in the battle between labor and capital. They failed because capital could outmaneuver labor in the 19th century. Today the situation is precisely reversed. Capital is slow. Knowledge is fast, the quicksilver that enlivens all our activities.

I come before you today wearing my true political colors – literally. I did not pick a red jumper and black pants by some accident or wardrobe malfunction. These are the colors of anarcho-syndicalism. And that is the new System of the World.

You don’t have to believe me. You can dismiss my political posturing as sheer radicalism. But I ask you to cast your mind further than this stage this afternoon, and look out on a world which is permanently and instantaneously hyperconnected, and I ask you – how could things go any other way? Every day one of us invents a new way to tie us together or share what we know; as that invention is used, it is copied by those who see it being used.

When we imitate the successful behaviors of our hyperconnected peers, this ‘hypermimesis’ means that we are all already in a giant collective. It’s not a hive mind, and it’s not an overmind. It’s something weirdly in-between. Connected we are smarter by far than we are as individuals, but this connection conditions and constrains us, even as it liberates us. No gift comes for free.

I assert, on the weight of a growing mountain of evidence, that anarcho-syndicalism is the place where the community meets the crowd; it is the environment where this social prosthesis meets that radical hyperempowerment of capabilities.

Let me give you one example, happening right now. The classroom walls are disintegrating (and thank heaven for that), punctured by hyperconnectivity, as the outside world comes rushing in to meet the student, and the student leaves the classroom behind for the school of the world. The student doesn’t need to be in the classroom anymore, nor does the false rigor of the classroom need to be drilled into the student. There is such a hyperabundance of instruction and information available that students need a mentor more than a teacher, a guide through the wilderness, and not a penitentiary to prevent their journey.

Now the students, and their parents – and the teachers and instructors and administrators – need to find a new way to work together, a communion of needs married to a community of gifts. The school is transforming into an anarcho-syndicalist collective, where everyone works together as peers, comes together in a “more perfect union”, to educate. There is no more school-as-a-place-you-go-to-get-your-book-learning. School is a state of being, an act of communion.

If this is happening to education, can medicine, and law, and politics be so very far behind? Of course not. But, unlike the elites of education, these other forces will resist and resist and resist all change, until such time as they have no choice but to surrender to mobs which are smarter, faster and more flexible than they are. In twenty years’ time all these institutions will be all but unrecognizable.

All of this is light-years away from how our institutions have been designed. Those institutions – all institutions – are feeling the strain of informational overload. More than that, they’re now suffering the death of a thousand cuts, as the various polities serviced by each of these institutions actually outperform them.

You walk into your doctor’s office knowing more about your condition than your doctor. You understand the implications of your contract better than your lawyer. You know more about a subject than your instructor. That’s just the way it is, in the era of hyperconnectivity.

So we must band together. And we already have. We have come together, drawn by our interests, put our shoulders to the wheel, and moved the Earth upon its axis. Most specifically, those of you in this theatre with me this arvo have made the world move, because the Web is the fulcrum for this entire transformation. In less than two decades we’ve gone from a physicist’s plaything to rewriting the rules of civilization.

But try not to think about that too much. It could go to your head.

III. THE OTHER.

Back in July, just after Vodafone had announced its meager data plans for iPhone 3G, I wrote a short essay for Ross Dawson’s Future of Media blog. I griped and bitched and spat the dummy, summing things up with this line:

“It’s time to show the carriers we can do this ourselves.”

I recommended that we start the ‘Future Australian Carrier’, or FAUC, and proceeded to invite all of my readers to get FAUCed. A harmless little incitement to action. What could possibly go wrong?

Within a day’s time a FAUC Facebook group had been started – without my input – and I was invited to join. Over the next two weeks about four hundred people joined that group, individuals who had simply had enough grief from their carriers and were looking for something better. After that, although there was some lively discussion about a possible logo, and some research into how MVNOs actually worked, nothing happened.

About a month later, individuals began to ping me, both on Facebook and via Twitter, asking, “What happened with that carrier you were going to start, Mark? Hmm?” As if somehow, I had signed on the dotted line to be chief executive, cheerleader, nose-wiper and bottle-washer for FAUC.

All of this caught me by surprise, because I certainly hadn’t signed up to create anything. I’d floated an idea, nothing more. Yet everyone was looking to me to somehow bring this new thing into being.

After I’d been hit up a few times, I started to understand where the epic !FAIL! had occurred. And the failure wasn’t really mine. You see, I’ve come to realize a sad and disgusting little fact about all of us: We need and we need and we need.

We need others to gather the news we read. We need others to provide the broadband we so greedily lap up. We need others to govern us. And god forbid we should be asked to shoulder some of the burden. We’ll fire off a thousand excuses about how we’re so time poor even the cat hasn’t been fed in a week.

So, sure, four hundred people might sign up to a Facebook group to indicate their need for a better mobile carrier, but would any of them think of stepping forward to spearhead its organization, its cash-raising, or its leasing agreements? No. That’s all too much hard work. All any of these people needed was cheap mobile broadband.

Well, cheap don’t come cheaply.

Of course, this happens everywhere up and down the commercial chain of being. QANTAS and Telstra outsource work to southern Asia because they can’t be bothered to pay for local help, because their stockholders can’t be bothered to take a small cut in their quarterly dividends.

There’s no difference in the act itself, just in its scale. And this isn’t even raw economics. This is a case of being penny-wise and pound-foolish. Carve some profit today, spend a fortune tomorrow to recover. We see it over and over and over again (most recently and most expensively on Wall Street), but somehow the point never makes it through our thick skulls. It’s probably because we human beings find it much easier to imagine three months into the future than three years. That’s a cognitive feature which helps if you’re on the African savannah, but sucks if you’re sitting in an Australian boardroom.

So this is the other thing. The ugly thing that no one wants to look at, because to look at it involves an admission of laziness. Well folks, let me be the first one here to admit it: I’m lazy. I’m too lazy to administer my damn Qmail server, so I use Gmail. I’m too lazy to setup WebDAV, so I use Google Docs. I’m too lazy to keep my devices synced, so I use MobileMe. And I’m too lazy to start my own carrier, so instead I pay a small fortune each month to Vodafone, for lousy service.

And yes, we’re all so very, very busy. I understand this. Every investment of time is a tradeoff. Yet we seem to defer, every time, to let someone else do it for us.

And is this wise? The more I see of cloud computing, the more I am convinced that it has become a single-point-of-failure for data communications. The decade-and-a-half that I spent as a network engineer tells me that. Don’t trust the cloud. Don’t trust redundancy. Trust no one. Keep your data in the cloud if you must, but for goodness’ sake, keep another copy locally. And another copy on the other side of the world. And another under your mattress.

I’m telling you things I shouldn’t have to tell you. I’m telling you things that you already know. But the other, this laziness, it’s built into our culture. Socially, we have two states of being: community and crowd. A community can collaborate to bring a new mobile carrier into being. A crowd can only gripe about their carrier. And now, as the strict lines between community and crowd get increasingly confused because of the upswing in hyperconnectivity, we behave like crowds when we really ought to be organizing like a community.

And this, at last, is the other thing: the message I really want to leave you with. You people, here in this auditorium today, you are the masters of the world. Not your bosses, not your shareholders, not your users. You. You folks, right here and right now. The keys to the kingdom of hyperconnectivity have been given to you. You can contour, shape and control that chaotic meeting point between community and crowd. That is what you do every time you craft an interface, or write a script. Your work helps people self-organize. Your work can engage us at our laziest, and turn us into happy worker bees. It can be done. Wikipedia has shown the way.

And now, as everything hierarchical and well-ordered dissolves into the grey goo which is the other thing, you have to ask yourself, “Who does this serve?”

At the end of the day, you’re answerable to yourself. No one else is going to do the heavy lifting for you. So when you think up an idea or dream up a design, consider this: Will it help people think for themselves? Will it help people meet their own needs? Or will it simply continue to infantilize us, until we become a planet of dummy-spitting, whinging, wankers?

It’s a question I ask myself, too, a question that’s shaping the decisions I make for myself. I want to make things that empower people, so I’ve decided to take some time to work with Andy Coffey, and re-think the book for the 21st century. Yes, that sounds ridiculous and ambitious and quixotic, but it’s also a development whose time is long overdue. If it succeeds at all, we will provide a publishing platform for people to share their long-form ideas. Everything about it will be open source and freely available to use, to copy, and to hack, because I already know that my community is smarter than I am.

And it’s a question I have answered for myself in another way. This is my third annual appearance before you at Web Directions South. It will be the last time for some time. You people are my community; where I knew none of you back in 2006, I consider many of you friends in 2008. Yet, when I talk to you like this, I get the uncomfortable feeling that my community has become a crowd. So, for the next few years, let’s have someone else do the closing keynote. I want to be with my peeps, in the audience, and on the Twitter backchannel, taking the piss and trading ideas.

The future – for all of us – is the battle over the boundary between the community and the crowd. I am choosing to embrace the community. It seems the right thing to do. And as I walk off-stage here, this afternoon, I want you to remember that each of you holds the keys to the kingdom. Our community is yours to shape as you will. Everything that you do is translated into how we operate as a culture, as a society, as a civilization. It can be a coming together, or it can be a breaking apart. And it’s up to you.

Not that there’s any pressure.