The Soul of Web 2.0

Introduction: In The Beginning

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus PageMaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful.  But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’.  Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century.  Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web.  Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real.  I was one of them.  From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu.  This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web.  It wasn’t as functional as Xanadu – copyright management was a solved problem within Xanadu, whereas on the Web it continues to bedevil us – and Xanadu’s links were two-way affairs: you could follow a link from its destination back to its source.  But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died.  The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released.  ‘The Perfect is the Enemy of the Good’, and nowhere is that clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s.  When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits per second to 56 kilobits per second.  That was the line for all of the traffic heading from one coast to the other.  I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, a million-fold increase.  And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand.  And I wasn’t impressed.  In July 1993 very little content existed for the Web – just a handful of sites, mostly academic.  Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense.  I walked away from the computer that July afternoon wanting more.  Hypertext systems I’d seen before.  What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy.  Instead of a handful of sites, there were now hundreds.  There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site on the list.  By Friday evening I was finished.  I had surfed the entire Web.  It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993.  Then things began to explode.

From October on I became a Web evangelist.  My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic.  That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities.  As someone familiar walked in the door at the Salon, I’d go up to them and lead them over to my computer.  “What’s something you’re interested in?” I’d ask.  They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica”, and I’d go to the newly created category index of the Web, known as Yahoo! – then still running out of a small lab on the Stanford University campus – type in their interest, and up would come at least a few hits.  I’d click on one, watch the page load, and let them read.  “Wow!” they’d say.  “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos.  All I did was hook people by their own interests.  What happened in January 1994 in San Francisco is what would happen throughout the world in January 1995 and January 1996, and it is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented.  The individual, with their needs, their passions, their opinions, their desires and their goals, is always paramount.  We tend to forget this, or overlook it, or just plain ignore it.  We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate.  It’s not that we should ignore these considerations, but they are always secondary.  The Web is a ground for being.  Individuals do not present themselves as receptacles to be filled.  They are souls looking to be fulfilled.  This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web.  I’ll now break some of these lessons down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication.  I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late night missives from insomniac friends, that sort of thing.  As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared.  The sharing instinct is innate and immediate.  We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it.  We’ve always known this; it’s part of being human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another.  It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby.  Everyone carries that hundred and fifty around inside of them.  Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with.  It’s automatic, requires no thought.  We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing.  Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten.  We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us.  It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does it work with creations that are Web-based, or similarly constrained?  We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter.  You have to do more than request sharing.  You have to think through the entire goal of sharing, from the user’s perspective.  Are they sharing this because it’s interesting?  Are they sharing this because they want company?  Are they sharing this because it’s a competition, a contest, or a collaboration?  Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project.  What is it about the design of your work that excites them to share it with others?  Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential?  In other words, is there space only for one, or is there room to spread the word?  Why would anyone want to share your work?  You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question.  How will your work be shared?

Your works do not exist in isolation.  They are part of a continuum of other works.  Where does your work fit into that continuum?  How do the instructor and student approach that work?  Is it a top-down mandate?  Or is it something that filters up from below as word-of-mouth spreads?  How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected.  Is it simply via email – do all the students have email addresses?  Do they know the email addresses of their friends?  Or do you want your work shared via SMS?  A QR code, perhaps?  Or Facebook or Twitter or, well, who knows?  And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today.  It becomes painfully obvious when it’s been overlooked.  For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend.  There was simply no way to do that.  (I don’t know if this has changed recently.)  That made the iPad version of The Australian significantly less functional than its website – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes.  The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts.  Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd.  Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this.  Everyone’s there, but no one is wholly aware of anyone else’s presence.  You might see an edit to a page, or a new post on the discussion for a particular topic, but that’s as close as people come to one another.  Most of the connecting for the Wikipedians – the folks who, behind the scenes, make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose.  These are the social networks: Facebook, MySpace, LinkedIn, and so on.  In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication.  But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless.  Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting.  There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths.  Where you can poll your friends on Facebook, on Twitter you can poll a planet.  How do I solve this problem?  Where should I eat dinner tonight?  What’s going on over there?  These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles.  It is not Facebook-versus-Twitter; it is not tight connections versus loose connections.  It’s a bit of both.  Where does your work benefit from a tight collective of connected individuals?  Is it some sort of group problem-solving?  A creative activity that really comes into its own when a whole band of people play together?  Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms?  If, as you work, you find yourself constantly thinking of your friends, that’s the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog, playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’ – that’s the kind of task which benefits from loose connectivity.  Not every project will need both kinds of connecting, but almost every one will benefit from one or the other.  We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed.  (We are also more likely to reinforce each others’ prejudices and preconceptions, but that’s a longstanding problem which technology cannot help but amplify.)  Life is meaningful because we, together, give it meaning.  Life is bearable because we, together, bear the load for one another.  Human life is human connection.

The Web today is all about connecting.  That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it.  So how do your projects allow your users to connect?  Does your work leave them alone, helpless, friendless, and lonely?  Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic?  Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica.  That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun.  It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute.  There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy.  (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.)  By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of what its contributors have to offer.  For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook.  This is less because of Wikipedia-as-a-resource than because we all share a sense of pride-in-ownership of Wikipedia.  Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles.  Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us.  This is a powerful logic, an attraction which transcends the rational.  People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine a time when Wikipedia will be complete.  If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia.  Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than it did in 2005.  With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights.  Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth.  It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought.  It will mean that we have captured the better part of human knowledge in a form accessible to all.  That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’.  It is a work-in-progress.  Google understands this and releases interminable beta versions of every product.  More than this, it means that nothing needs to offer all the answers.  I would suggest that nothing should offer all the answers.  Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you.  It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility.  There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done.  This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system.  The biggest problem Wikipedia faces is not vandalism but the more pernicious type of contribution which looks factual but is wholly made up.  TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments.  RateMyProfessors.com is the holy terror of the academy in the United States.  Each of these websites has had to design systems which allow users to self-regulate peer contributions.  In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags a post for later moderation.  Wikipedia promulgated a directive that strongly encourages contributors to provide a footnote linking to supporting material.  TripAdvisor gives anonymous reviewers a lower ranking.  eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade.  Each of these is a social solution to a social problem.
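To make the simplest of these mechanisms concrete, here is a minimal sketch of the ‘report this post’ pattern: user reports accumulate against a post, and once enough independent flags arrive the post is hidden and queued for a human moderator.  The class names and the threshold are illustrative assumptions, not the workings of any particular site.

```python
# Minimal sketch of the "report this post" pattern: user flags feed a
# moderation queue, and heavily flagged items are hidden pending review.
# The names and the threshold are illustrative assumptions.

from dataclasses import dataclass, field

FLAG_THRESHOLD = 3  # hide after this many independent reports


@dataclass
class Post:
    post_id: int
    author: str
    body: str
    flags: set = field(default_factory=set)  # ids of users who reported it
    hidden: bool = False


class ModerationQueue:
    def __init__(self):
        self.pending = []  # posts awaiting a human moderator

    def report(self, post: Post, reporter_id: str) -> None:
        post.flags.add(reporter_id)            # one report counted per user
        if len(post.flags) >= FLAG_THRESHOLD and not post.hidden:
            post.hidden = True                 # crowd does the watching...
            self.pending.append(post)          # ...a person does the judging

    def review(self, post: Post, keep: bool) -> None:
        post.hidden = not keep
        post.flags.clear()
        self.pending.remove(post)
```

The point of the design is social, not technical: the crowd does the watching, and a person does the judging, with just enough code to support the hand-off.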

Web2.0 is not a technology.  It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil.  It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention.  Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a core Web2.0 concept that should guide you in your design work is iteration.  Nothing is ever complete, nor ever perfect.  The perfect is the enemy of the good, so if you wait for perfection, you will never release.  Instead, watch your users, and see whether they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work.  In their more uncharitable moments, do they abuse the freedoms you have given them?  If so, how can you redesign your work and ‘nudge’ them into better behavior?  It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem.  And although Moses came down from Mount Sinai with all ten commandments, you cannot and should not expect to get it right on the first pass.  Instead, release, observe, adapt, and re-release.  All releases are soft releases, everything is provisional, and nothing is quite perfect.  That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter.  Although they seem to be similar, they couldn’t be more different.  Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself.  If you want to have access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page.  Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut down your application if they don’t like it, or perceive it as somehow competitive with Facebook.  Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach.  From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users.  Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data.  Twitter provided very clear (and remarkably straightforward) instructions on how to access their data, and threw the gates open wide.
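To give a sense of just how wide those gates were thrown, here is a minimal sketch of fetching a user’s public Tweets with nothing more than an HTTP GET returning JSON.  It assumes the unauthenticated v1 timeline endpoint Twitter offered around 2010 (long since retired), so treat it as a historical illustration rather than a working recipe.

```python
# Sketch of "the gates thrown open": one HTTP GET, one JSON document.
# The endpoint shown is the unauthenticated v1 timeline of circa 2010,
# which no longer exists; this is a historical illustration only.

import json
import urllib.request


def fetch_public_tweets(screen_name: str, count: int = 20):
    url = ("http://api.twitter.com/1/statuses/user_timeline.json"
           f"?screen_name={screen_name}&count={count}")
    with urllib.request.urlopen(url) as response:
        tweets = json.load(response)          # a plain list of Tweet objects
    return [t["text"] for t in tweets]        # do whatever you like with them
```

That was the whole barrier to entry: one URL, one request, and the data was yours to graph, archive, or mash up into something Twitter never imagined.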

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks.  People have written programs that send Tweets when they are about to crash, created vast art projects which allow the public to participate from anywhere around the world, and even built a little belt, worn by a pregnant woman, which sends out a Tweet every time the baby kicks!  It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself.  Twitter has become a building block: when you write a program which needs to send a message, you use Twitter.  Facebook isn’t a building block.  It’s a monolith.

How do you build for openness?  Consider: another position the user might occupy is someone trying to use your work as a building block within their own project.  Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again?  Or is it opaque, seamless, and closed?  What about the data you collect, data the user has generated?  Where does that live?  Can it be exported and put to work in another application, or on another website?  Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful).  The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them.  If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
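As a concrete illustration of that last point, here is a minimal sketch of one way to leave the door open: publishing the items your users contribute as a plain RSS 2.0 feed, which any other site, reader, or program can consume.  The item fields and URLs are illustrative assumptions, not any particular product’s schema.

```python
# Minimal sketch of "leaving the door open": expose user-generated items
# as a plain RSS 2.0 feed so other sites and programs can re-use them.
# The item structure and URLs are illustrative assumptions.

from xml.etree import ElementTree as ET
from email.utils import formatdate


def build_rss(title, link, items):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = f"Latest items from {title}"
    for item in items:
        node = ET.SubElement(channel, "item")
        ET.SubElement(node, "title").text = item["title"]
        ET.SubElement(node, "link").text = item["url"]
        ET.SubElement(node, "pubDate").text = formatdate(item["timestamp"])
    return ET.tostring(rss, encoding="unicode")


feed = build_rss("Student contributions", "http://example.edu/work",
                 [{"title": "A new essay",
                   "url": "http://example.edu/work/42",
                   "timestamp": 1288000000}])
print(feed)
```

A feed like this (or a simple REST endpoint returning the same data as JSON) costs very little to add, and it is precisely what turns your work from a brick wall into a brick.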

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’  You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user.  These are not precisely the same Web2.0 domains others might identify.  That’s because Web2.0 has become a very ill-defined term.  It can mean whatever we want it to mean.  But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another.  In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create.  We need to make room for them.  If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

Mothers of Innovation

Introduction:  Olden Days

In February 1984, seeking a reprieve from the very cold and windy streets of Boston, Massachusetts, I ducked inside a computer store.  I spied the normal array of IBM PCs and peripherals, the Apple ][, probably even an Atari system.  Prominently displayed at the front of the store was my first Macintosh.  It wasn’t known as a Mac 128K or anything like that.  It was simply Macintosh.  I walked up to it, intrigued – already, the Reality Distortion Field was capable of luring geeks like me to their doom – and saw the unfamiliar graphical desktop and the cute little mouse.  Sitting down at the chair before the machine, I grasped the mouse, and moved the cursor across the screen.  But how do I get it to do anything? I wondered.  Click.  Nothing.  Click, drag – oh look, some of these things changed color!  But now what?  Gah.  This is too hard.

That’s when I gave up, pushed myself away from that first Macintosh, and pronounced this experiment in ‘intuitive’ computing a failure.  Graphical computing isn’t intuitive, that’s a bit of a marketing fib.  It’s a metaphor, and you need to grasp the metaphor – need to be taught what it means – to work fluidly within the environment.  The metaphor is easy to apprehend if it has become the dominant technique for working with computers – as it has in 2010.  Twenty-six years ago, it was a different story.  You can’t assume that people will intuit what to do with your abstract representations of data or your arcane interface methods.  Intuition isn’t always intuitively obvious.

A few months later I had a job at a firm which designed bar code readers.  (That, btw, was the most boring job I’ve ever had, the only one I got fired from for insubordination.)  We were designing a bar code reader for Macintosh, so we had one in-house, a unit with a nice carrying case so that I could ‘borrow’ it on weekends.  Which I did.  Every weekend.  The first weekend I got it home, unpacked it, plugged it in, popped in the system disk, booted it, ejected the system disk, popped in the applications disk, and worked my way through MacPaint and MacWrite and on to my favorite application of all – Hendrix.

Hendrix took advantage of the advanced sound synthesis capabilities of Macintosh.  Presented with a perfectly white screen, you dragged the mouse along the display.  The position, velocity, and acceleration of the pointer determined what kind of heavily altered but unmistakably guitar-like sounds came out of the speaker.  For someone who had lived with the bleeps and blurps of the 8-bit world, it was a revelation.  It was, in the vernacular of Boston, ‘wicked’.  I couldn’t stop playing with Hendrix.  I invited friends over, showed them, and they couldn’t stop playing with Hendrix.  Hendrix was the first interactive computer program that I gave a damn about, the first one that really showed me what a computer could be used for.  Not just pushing paper or pixels around, but an instrument, and an essential tool for human creativity.

Everything that’s followed in all the years since has been interesting to me only when it pushes the boundaries of our creativity.  I grew entranced by virtual reality in the early 1990s, because of the possibilities it offered up for an entirely new playing field for creativity.  When I first saw the Web, in the middle of 1993, I quickly realized that it, too, would become a cornerstone of creativity.  That roughly brings us forward from the ‘olden days’, to today.

This morning I want to explore creativity along the axis of three classes of devices, as represented by the three Apple devices that I own: the desktop (my 17” MacBook Pro Core i7), the mobile (my iPhone 3GS 32Gb), and the tablet (my iPad 16GB 3G).  I will draw from my own experience as both a user and developer for these devices, using that experience to illuminate a path before us.  So much is in play right now, so much is possible, all we need do is shine a light to see the incredible opportunities all around.

I:  The Power of Babel

I love OSX, and have used it more or less exclusively since 2003, when it truly became a usable operating system.  I’m running Snow Leopard on my MacBook Pro, and so far have suffered only one Grey Screen Of Death.  (And, if I know how to read a stack trace, that was probably caused by Flash.  Go figure.)  OSX is solid, it’s modestly secure, and it has plenty of eye candy.  My favorite bit of that is Spaces, which allows me to segregate my workspace into separate virtual screens.

Upper left hand space has Mail.app, upper right hand has Safari, lower right hand has TweetDeck and Skype, while the lower left hand is reserved for the task at hand – in this case, writing these words.  Each of the apps, except Microsoft Word, is inherently Internet-oriented, an application designed to facilitate human communication.  This is the logical and inexorable outcome of a process that began back in 1969, when the first nodes began exchanging packets on the ARPANET.  Phase one: build the network.  Phase two: connect everything to the network.  Phase three: PROFIT!

That seems to have worked out pretty much according to plan.  Our computers have morphed from document processors – that’s what most computers of any stripe were used for until about 1995 – into communication machines, handling the hard work of managing a world that grows increasingly connected.  All of this communication is amazing and wonderful and has provided the fertile ground for innovations like Wikipedia and Twitter and Skype, but it also feels like too much of a good thing.  Connection has its own gravitational quality – the more connected we become, the more we feel the demand to remain connected continuously.

We salivate like Pavlov’s dogs every time our email application rewards us with the ‘bing’ of an incoming message, and we keep one eye on Twitter all day long, just in case something interesting – or at least diverting – crosses the transom.  Blame our brains.  They’re primed to release the pleasure neurotransmitter dopamine at the slightest hint of a reward; connecting with another person is (under most circumstances) a guaranteed hit of pleasure.

That’s turned us into connection junkies.  We pile connection upon connection upon connection until we numb ourselves into a zombie-like overconnectivity, then collapse and withdraw, feeling the spiral of depression as we realize we can’t handle the weight of all the connections that we want so desperately to maintain.

Not a pretty picture, is it?   Yet the computer is doing an incredible job, acting as a shield between what our brains are prepared to handle and the immensity of information and connectivity out there.  Just as consciousness is primarily the filtering of signal from the noise of the universe, our computers are the filters between the roaring insanity of the Internet and the tidy little gardens of our thoughts.  They take chaos and organize it.  Email clients are excellent illustrations of this; the best of them allow us to sort and order our correspondence based on need, desire, and goals.  They prevent us from seeing the deluge of spam which makes up more than 90% of all SMTP traffic, and help us to stay focused on the task at hand.
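To make the idea of filtering concrete, here is a rough sketch of the kind of rule-based sorting an email client does on our behalf, routing each incoming message toward attention, the archive, or the bin.  The rules, addresses, and folder names are illustrative assumptions.

```python
# A rough sketch of the filtering an email client does for us: simple rules
# that route each incoming message toward attention, the archive, or the bin.
# All rules, addresses, and folder names are illustrative assumptions.

def classify(message: dict) -> str:
    sender = message["from"].lower()
    subject = message["subject"].lower()
    if sender.endswith("@spam.example") or "act now" in subject:
        return "junk"                      # the 90% we never want to see
    if sender in {"boss@work.example", "editor@work.example"}:
        return "needs-reply"               # attention and a reply
    if "newsletter" in subject or "digest" in subject:
        return "read-later"                # the 3 AM mailing lists
    return "inbox"                         # everything else, sorted by hand


for msg in [{"from": "editor@work.example", "subject": "Deadline moved"},
            {"from": "noreply@list.example", "subject": "Weekly digest"}]:
    print(classify(msg))
```

Rules this crude work tolerably well for email; the trouble, as the next few paragraphs argue, is that the flood no longer arrives only as email.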

Electronic mail was just the beginning of the revolution in social messaging; today we have Tweets and instant messages and Foursquare checkins and Flickr photos and YouTube videos and Delicious links and Tumblr blogs and endless, almost countless feeds.  All of it recommended by someone, somewhere, and all of it worthy of at least some of our attention.  We’re burdened by too many web sites and apps needed to manage all of this opportunity for connectivity.  The problem has become most acute on our mobiles, where we need a separate app for every social messaging service.

This is fine in 2010, but what happens in 2012, when there are ten times as many services on offer, all of them delivering interesting and useful things?  All these services, all these websites, and all these little apps threaten to drown us with their own popularity.

Does this mean that our computers are destined to become like our television tuners, which may have hundreds of channels on offer, but never see us watch more than a handful of them?  Do we have some sort of upper boundary on the amount of connectivity we can handle before we overload?  Clay Shirky has rightly pointed out that there is no such thing as information overload, only filter failure.  If we find ourselves overwhelmed by our social messaging, we’ve got to build some better filters.

This is the great growth opportunity for the desktop, the place where the action will be happening – when it isn’t happening in the browser.  Since the desktop is the nexus of the full power of the Internet and the full set of your own data (even the data stored in the cloud is accessed primarily from your desktop), it is the logical place to create some insanely great next-generation filtering software.

That’s precisely what I’ve been working on.  This past May I got hit by a massive brainwave – one so big I couldn’t ignore it, couldn’t put it down, couldn’t do anything but think about it obsessively.

I wanted to create a tool that could aggregate all of my social messaging – email, Twitter, RSS and Atom feeds, Delicious, Flickr, Foursquare, and on and on and on.  I also wanted the tool to be able to distribute my own social messages, in whatever format I wanted to transmit, through whatever social message channel I cared to use.

Then I wouldn’t need to go hither and yon, using Foursquare for this, and Flickr for that and Twitter for something else.  I also wouldn’t have to worry about which friends used which services; I’d be able to maintain that list digitally, and this tool would adjust my transmissions appropriately, sending messages to each as they want to receive them, allowing me to receive messages from each as they care to send them.

That’s not a complicated idea.  Individuals and companies have been nibbling around the edges of it for a while.

I am going the rest of the way, creating a tool that functions as the last ‘social message manager’ that anyone will need.  It’s called Plexus, and it functions as middleware – sitting between the Internet and whatever interface you might want to cook up to view and compose all of your social messaging.
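Since Plexus is described here only in outline, what follows is a hypothetical sketch – not the real project’s API – of what a social message manager’s middleware layer might look like: a common message shape, pluggable channels, and an aggregator that fans messages in and out.  Every name in it is an assumption made for illustration.

```python
# Hypothetical sketch of a "social message manager" middleware layer.
# None of these class names come from the actual Plexus project; they
# illustrate the idea of one common message shape over many channels.

from dataclasses import dataclass


@dataclass
class Message:
    author: str
    body: str
    channel: str       # "twitter", "email", "flickr", ...
    timestamp: float


class Channel:
    """One social-messaging service, wrapped behind a common interface."""
    name = "abstract"

    def fetch(self) -> list:
        raise NotImplementedError

    def post(self, body: str) -> None:
        raise NotImplementedError


class Aggregator:
    def __init__(self, channels):
        self.channels = channels

    def inbox(self):
        # Pull from every service and merge into one time-ordered stream.
        merged = [m for ch in self.channels for m in ch.fetch()]
        return sorted(merged, key=lambda m: m.timestamp)

    def broadcast(self, body, recipients_prefer):
        # Send one message out through whichever channel each friend prefers.
        wanted = set(recipients_prefer.values())
        for ch in self.channels:
            if ch.name in wanted:
                ch.post(body)
```

The design choice that matters is the common message shape: once every service is reduced to it, filters can be written once and applied everywhere, which is the whole point of putting middleware between the Internet and the interface.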

Now were I devious, I’d coyly suggest that a lot of opportunity lies in building front-end tools for Plexus, ways to bring some order to the increasing flow of social messaging.  But I’m not coy.  I’ll come right out and say it: Plexus is an open-source project, and I need some help here.  That’s a reflection of the fact that we all need some help here.  We’re being clubbed into submission by our connectivity.  I’m trying to develop a tool which will allow us to create better filters, flexible filters, social filters, all sorts of ways of slicing and dicing our digital social selves.  That’s got to happen as we invent ever more ways to connect, and as we do all of this inventing, the need for such a tool becomes more and more clear.

We see people throwing their hands up, declaring ‘email bankruptcy’, quitting Twitter, or committing ‘Facebookicide’, because they can’t handle the consequences of connectivity.

We secretly yearn for that moment after the door to the aircraft closes, and we’re forced to turn our devices off for an hour or two or twelve.  Finally, some time to think.  Some time to be.  Science backs this up; the measurable consequence of over-connectivity is that we don’t have the mental room to roam with our thoughts, to ruminate, to explore and play within our own minds.  We’re too busy attending to the next message.  We need to disconnect periodically, and focus on the real.  We desperately need tools which allow us to manage our social connectivity better than we can today.

Once we can do that, we can filter the noise and listen to the music of others.  We will be able to move so much more quickly – together – that it will be another electronic renaissance: just like 1994, with Web 1.0, and 2004, with Web2.0.

That’s my hope, that’s my vision, and it’s what I’m directing my energies toward.  It’s not the only direction for the desktop, but it does represent the natural evolution of what the desktop has become.  The desktop has been shaped not just by technology, but by the social forces stirred up by our technology.

It is not an accident that our desktops act as social filters; they are the right tool at the right time for the most important job before us – how we communicate with one another.  We need to bring all of our creativity to bear on this task, or we’ll find ourselves speechless, shouted down, lost at another Tower of Babel.

II: The Axis of Me-ville

Three and a half weeks ago, I received a call from my rental agent.  My unit was going on the auction block – would I mind moving out?  Immediately?  I’ve lived in the same flat since I first moved to Sydney, seven years ago, so this news came as quite a shock.

I spent a week going through the five stages of mourning: denial, anger, bargaining, depression, and acceptance.  The day I reached acceptance, I took matters in hand, the old-fashioned way: I went online, to domain.com.au, and looked for rental units in my neighborhood.

Within two minutes I learned that there were two units for rent within my own building!

When you stop to think about it, that’s a bit weird.  There were no signs posted in my building, no indication that either of the units were for rent.  I’d heard nothing from the few neighbors I know well enough to chat with.  They didn’t know either.  Something happening right underneath our noses – something of immediate relevance to me – and none of us knew about it.  Why?  Because we don’t know our neighbors.

For city dwellers this is not an unusual state of affairs.  One of the pleasures of the city is its anonymity.  That’s also one of its great dangers.  The two go hand-in-hand.  Yet the world of 2010 does not offer up this kind of anonymity easily.  Consider: we can re-establish a connection with someone we went to high school with thirty years ago – and really never thought about in all the years that followed – but still not know the names of the people in the unit next door, names we might utter with bitter anger after they’ve turned up the music again.  How can we claim that there’s any social revolution if we can’t be connected to the people we’re physically close to?  Emotional closeness is important, and financial closeness (your coworkers) is also salient, but both should be trumped by the people who breathe the same air as you.

It is almost impossible to bridge the barriers that separate us from one another, even when we’re living on top of each other.

This is where the mobile becomes important, because the mobile is the singular social device.  It is the place where all of our human relationships reside.  (Plexus is eventually bound for the mobile, but in a few years’ time, when the devices are nimble enough to support it.)  Yet the mobile is more than just the social crossroads.  It is the landing point for all of the real-time information you need to manage your life.

On the home page of my iPhone, two apps stand out as the aids to the real-time management of my life: RainRadar AU and TripView.  I am a pedestrian in Sydney, so it’s always good to know when it’s about to rain, how hard, and how long.  As a pedestrian, I make frequent use of public transport, so I need to know when the next train, bus or ferry is due, wherever I happen to be.  The mobile is my networked, location-aware sensor.  It gathers up all of the information I need to ease my path through life.  This demonstrates one of the unstated truisms of the 21st century: the better my access to data, the more effective I will be, moment to moment.  The mobile has become that instantaneous access point, simply because it’s always at hand, or in the pocket or pocketbook or backpack.  It’s always with us.

In February I gave a keynote at a small Melbourne science fiction convention.  After I finished speaking a young woman approached me and told me she couldn’t wait until she could have some implants, so her mobile would be with her all the time.  I asked her, “When is your mobile ever more than a few meters away from you?  How much difference would it make?  What do you gain by sticking it underneath your skin?”  I didn’t even bother to mention the danger from all that subcutaneous microwave radiation.  It’s silly, and although our children or grandchildren might have some interesting implants, we need to accept the fact that the mobile is already a part of us.

We’re as Borg-ed up as we need to be.  Probably we’re more Borg-ed up than we can handle.

It’s not just that our mobiles have become essential.  It’s getting so that we can’t put them down, even in situations when we need to focus on the task at hand – driving, or having dinner with a partner, or trying to push a stroller across an intersection.  We’re addicted, and the first step to treating that addiction is to admit we have a problem.  But here’s the dilemma: we’re working hard to invent new ways to make our mobiles even more useful, indispensable and alluring.

We are the crack dealers.  And I’m encouraging you to make better crack.  Truth be told, I don’t see this ‘addiction’ as a bad thing, though goodness knows the tabloid newspapers and cultural moralists will make whatever they can of it.  It’s an accommodation we will need to make, a give-and-take.  We gain an instantaneous connection to one another, a kind of cultural ‘telepathy’ that would have made Alexander Graham Bell weep for joy.

But there’s more: we also gain a window into the hitherto hidden world of data that is all around us, a shadow and double of the real world.

For example, I can now build an app that allows me to wander the aisles of my local supermarket, bringing all of the intelligence of the network with me as I shop.  I hold the mobile out in front of me, its camera capturing everything it sees, which it passes along to the cloud, so that Google Goggles can do some image processing on it, and pick out the identifiable products on the shelves.

This information can then be fed back into a shopping list – created by me, or by my doctor, or by my bank account, because I might be trying to optimize for my own palate, my blood pressure, or my budget – and as I come across the items I should purchase, my mobile might give a small vibration.  When I look at the screen, I see the shelves, but the items I should purchase are glowing and blinking.
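The decision step at the heart of that scenario is simple enough to sketch: take the product labels an image-recognition service returns for the current camera frame, match them against a shopping list, and flag the ones worth buying.  Everything here – the label strings, the list, the notification – is an illustrative assumption, not a description of any existing app.

```python
# Sketch of the matching step: recognized shelf labels meet a shopping list,
# and the overlaps are the items to highlight. All data here is illustrative.

def items_to_highlight(recognized_labels, shopping_list):
    """shopping_list maps item name -> why it's on the list
    ("my palate", "blood pressure", "budget")."""
    on_shelf = {label.lower() for label in recognized_labels}
    return [item for item in shopping_list if item.lower() in on_shelf]


frame_labels = ["Rolled Oats 1kg", "Chocolate Biscuits", "Lentils 500g"]
wanted = {"rolled oats 1kg": "blood pressure", "lentils 500g": "budget"}

matches = items_to_highlight(frame_labels, wanted)
if matches:
    # In the imagined app: buzz the phone and overlay a glow on these items.
    print("Highlight:", matches)
```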

The technology to realize this – augmented reality with a few extra bells and whistles – is already in place.  This is the sort of thing that could be done today, by someone enterprising enough to knit all these separate threads into a seamless whole.  There’s clearly a need for it, but that’s just the beginning.  This is automated, computational decision making.  It gets more interesting when you throw people into the mix.

Consider: in December I was on a road trip to Canberra.  When I arrived there, at 6 pm, I wondered where to have dinner.  Canberra is not known for its scintillating nightlife – I had no idea where to dine.  I threw the question out to my 7000 Twitter followers, and in the space of time that it took to shower, I had enough responses that I could pick and choose among them, and ended up having the best bowl of seafood laksa that I’d had since I moved to Australia!

That’s the kind of power that we have in our hands, but don’t yet know how to use.

We are all well connected, instantaneously and pervasively, but how do we connect without confusing ourselves and one another with constant requests?  Can we manage that kind of connectivity as a background task, with our mobiles acting as the arbiters?  The mobile is the crossroads, between our social lives, our real-time lives, and our data-driven selves.  All of it comes together in our hands.  The device is nearly full to exploding with the potentials unleashed as we bring these separate streams together.  It becomes hypnotizing and formidable, though it rings less and less.  Voice traffic is falling nearly everywhere in the developed world, but mobile usage continues to skyrocket.  Our mobiles are too important to use for talking.

Let’s tie all of this together: I get evicted, and immediately tell my mobile, which alerts my neighbors and friends, and everyone sets to work finding me a new place to live.  When I check out their recommendations, I get an in-depth view of my new potential neighborhoods, delivered through a marriage of augmented reality and the cloud computing power located throughout the network.  Finally, when I’m about to make a decision, I throw it open for the people who care enough about me to ring in with their own opinions, experiences, and observations.  I make an informed decision, quickly, and am happier as a result, for all the years I live in my new home.

That’s what’s coming.  That’s the potential that we hold in the palms of our hands.  That’s the world you can bring to life.

III:  Through the Looking Glass

Finally, we turn to the newest and most exciting of Apple’s inventions.  There seemed to be nothing new to say about the tablet – after all, Bill Gates declared ‘The Year of the Tablet’ way back in 2001.  But it never happened.  Tablets were too weird, too constrained by battery life and weight and, most significantly, the user experience.  It’s not as though you can take a laptop computer, rip away the keyboard and slap on a touchscreen to create a tablet computer, though this is what many people tried for many years.  It never really worked out for them.

Instead, Apple leveraged what they learned from the iPhone’s touch interface.  Yet that alone was not enough.  I was told by sources well-placed in Apple that the hardware for a tablet was ready a few years ago; designing a user experience appropriate to the form factor took a lot longer than anyone had anticipated.  But the proof of the pudding is in the eating: iPad is the most successful new product in Apple’s history, with Apple set to manufacture around thirty million of them over the next twelve months.  That success is due to the hard work and extensive testing performed upon the iPad’s particular version of iOS.

It feels wonderfully fluid, well adapted to the device, although quite different from the iOS running on iPhone.  iPad is not simply a gargantuan iPod Touch.  The devices are used very differently, because the form-factor of the device frames our expectations and experience of the device.

Let me illustrate with an example from my own experience:  I had a consulting job drop on me at the start of June, one which required that I go through and assess eighty-eight separate project proposals, all of which ran to 15 pages apiece.  I had about 48 hours to do the work.  I was a thousand kilometers from these proposals, so they had to be sent to me electronically, so that I could then print them before reading through them.  Doing all of that took 24 of the 48 hours I had for review, and left me with a ten-kilo box of papers that I’d have to carry, a thousand kilometers, to the assessment meeting.  Ugh.

Immediately before I left for the airport with this paper ball-and-chain, I realized I could simply drag the electronic versions of these files into my Dropbox account.  Once uploaded, I could access those files from my iPad – all thousand or so pages.  Working on iPad made the process much faster than having to fiddle through all of those papers; I finished my work on the flight to my meeting, and was the envy of all attending – they wrestled with multiple fat paper binders, while I simply swiped my way to the next proposal.

This was when I realized that iPad is becoming the indispensable appliance for the information worker.

You can now hold something in your hand that has every document you’ve written; via the cloud, it can hold every document anyone has ever written.  This has been true for desktops since the advent of the Internet, but it hasn’t been as immediate.  iPad is the page, reinvented, not just because it has roughly the same dimensions as a page, but because you interact with it as if it were a piece of paper.  That’s something no desktop has ever been able to provide.

We don’t really have a sense yet for all the things we can do with this ‘magical’ (to steal a word from Steve Jobs) device.

Paper transformed the world two thousand years ago. Moveable type transformed the world five hundred years ago.  The tablet, whatever it is becoming – whatever you make of it – will similarly reshape the world.  It’s not just printed materials; the tablet is the lightbox for every photograph ever taken anywhere by anyone.  The tablet is the screen for every video created, a theatre for every film produced, a tuner to every radio station that offers up a digital stream, and a player for every sound recording that can be downloaded.

All of this is here, all of this is simultaneously present in a device with so much capability that it very nearly pulses with power.

iPad is like a Formula One Ferrari, and we haven’t even gotten it out of first gear.  So stretch your mind further than the idea of the app.  Apps are good and important, but to unlock the potential of iPad it needs lots of interesting data pouring into it and through it.  That data might be provided via an application, but it probably doesn’t live within the application – there’s not enough room in there.  Any way you look at it, iPad is a creature of the network; it is a surface, a looking glass, which presents you a view from within the network.

What happens when the network looks back at you?

At the moment iPad has no camera, though everyone expects a forward-facing camera to be in next year’s model.  That will come so that Apple can enable FaceTime.  (With luck, we’ll also see a Retina Display, so that documents can be seen in their natural resolution.)  Once the iPad can see you, it can respond to you.  It can acknowledge your presence in an authentic manner.  We’re starting to see just what this looks like with the recently announced Xbox Kinect.

This is the sort of technology which points all the way back to the infamous ‘Knowledge Navigator’ video that John Sculley used to create his own Reality Distortion Field around the disaster that was the Newton. Decades ahead of its time, the Knowledge Navigator pointed toward Google and Wikipedia and Milo, with just a touch of Facebook thrown in.  We’re only just getting there, to the place where this becomes possible.

These are no longer dreams, these are now quantifiable engineering problems.

This sort of thing won’t happen on Xbox, though Microsoft or a partner developer could easily write an app for it.  But that’s not where they’re looking; this is not about keeping you entertained.  The iPad can entertain you, but that’s not its main design focus.  It is designed to engage you, today with your fingers, and soon with your voice and your face and your gestures.  At that point it is no longer a mirror; it is an entity on its own.  It might not pass the Turing Test, but we’ll anthropomorphize it nonetheless, just as we did with Tamagotchi and Furby.  It will become our constant companion, helping us through every situation.  And it will move seamlessly between our devices, from iPad to iPhone to desktop.  But it will begin on iPad.

Because we are just starting out with tablets, anything is possible.  We haven’t established expectations which guide us into a particular way of thinking about the device.  We’ve had mobiles for nearly twenty years, and desktops for thirty.  We understand both well, and with that understanding comes a narrowing of possibilities.  The tablet is the undiscovered country, virgin, green, waiting to be explored.  This is the desktop revolution, all over again.  This is the mobile revolution, all over again.  We’re in the right place at the right time to give birth to the applications that will seem commonplace in ten or fifteen years.

I remember VisiCalc, the first spreadsheet.  I remember how revolutionary it seemed, how it changed everyone’s expectations for the personal computer.  I also remember that it was written for an Apple ][.

You have the chance to do it all again, to become the ‘mothers of innovation’, and reinvent computing.  So think big.  This is the time for it.  In another few years it will be difficult to aim for the stars.  The platform will be carrying too much baggage.  Right now we all get to be rocket scientists.  Right now we get to play, and dream, and make it all real.

Fluid Learning

I: Out of Control

Our greatest fear, in bringing computers into the classroom, is that we teachers and instructors and lecturers will lose control of the classroom, lose touch with the students, lose the ability to make a difference. The computer is ultimately disruptive. It offers greater authority than any instructor, greater resources than any lecturer, and greater reach than any teacher. The computer is not perfect, but it is indefatigable. The computer is not omniscient, but it is comprehensive. The computer is not instantaneous, but it is faster than any other tool we’ve ever used.

All of this puts the human being at a disadvantage; in a classroom full of machines, the human factor in education is bound to be overlooked. Even though we know that everyone learns more effectively when there’s a teacher or mentor present, we want to believe that everything can be done with the computer. We want the machines to distract, and we hope that in that distraction some education might happen. But distraction is not enough. There must be a point to the exercise, some reason that makes all the technology worthwhile. That search for a point – a search we are still mostly engaged in – will determine whether these computers are meaningful to the educational process, or if they are an impediment to learning.

It’s all about control.

What’s most interesting about the computer is how it puts paid to all of our cherished fantasies of control. The computer – or, most specifically, the global Internet connected to it – is ultimately disruptive, not just to the classroom learning experience, but to the entire rationale of the classroom, the school, the institution of learning. And if you believe this to be hyperbolic, this story will help to convince you.

In May of 1999, Silicon Valley software engineer John Swapceinski started a website called “Teacher Ratings.” Individuals could visit the site and fill in a brief form with details about their school, and their teacher. That done, they could rate the teacher’s capabilities as an instructor. The site started slowly, but, as is always the case with these sorts of “crowdsourced” ventures, as more ratings were added to the site, it became more useful to people, which meant more visitors, which meant more ratings, which meant it became even more useful, which meant more visitors, which meant more ratings, etc. Somewhere in the middle of this virtuous cycle the site changed its name to “Rate My Professors.com” and changed hands twice. For the last two years, RateMyProfessors.com has been owned by MTV, which knows a thing or two about youth markets, and can see one in a site that has nine million reviews of one million teachers, professors and instructors in the US, Canada and the UK.

Although the individual action of sharing some information about an instructor seems innocuous enough, in aggregate the effect is entirely revolutionary. A student about to attend university in the United States can check out all of her potential instructors before she signs up for a single class. She can choose to take classes only with those instructors who have received the best ratings – or, rather more perversely, only with those instructors known to be easy graders. The student is now wholly in control of her educational opportunities, going in eyes wide open, fully cognizant of what to expect before the first day of class.

Although RateMyProfessors.com has enlightened students, it has made the work of educational administrators exponentially more difficult. Students now talk, up and down the years, via the recorded ratings on the site. It isn’t possible for an institution of higher education to disguise an individual who happens to be a world-class researcher but a rather ordinary lecturer. In earlier times, schools could foist these instructors on students, who’d be stuck for a semester. This no longer happens, because RateMyProfessors.com effectively warns students away from the poor-quality teachers.

This one site has undone all of the neat work of tenure boards and department chairs throughout the entire world of academia. A bad lecturer is no longer a department’s private little secret, but publicly available information. And a great lecturer is no longer a carefully hoarded treasure, but a hot commodity on a very public market. The instructors with the highest ratings on RateMyProfessors.com find themselves in demand, receiving outstanding offers (with tenure) from other universities. All of this plotting, which used to be hidden from view, is now fully revealed. The battle for control over who stands in front of the classroom has now been decisively lost by the administration in favor of the students.

This is not something that anyone expected; it certainly wasn’t what John Swapceinski had in mind when he founded Teacher Ratings. He wasn’t trying to overturn the prerogatives of heads of school around the world. He was simply offering up a place for people to pool their knowledge. That knowledge, once pooled, takes on a life of its own, and finds itself in places where it has uses that its makers never intended.

This rating system serves as an archetype for what is about to happen to education in general. If we are smart enough, we can learn a lesson here and now that we will eventually learn – rather more expensively – if we wait. The lesson is simple: control is over. This is not about control anymore. This is about finding a way to survive and thrive in chaos.

The chaos is not something we should be afraid of. Like King Canute, we can’t roll back the tide of chaos that’s rolling over us. We can’t roll back the clock to an earlier age without computers, without the Internet, without the subtle but profound distraction of text messaging. The school is of its time, not out of it. Which means we must play the hand we’ve been dealt. That’s actually a good thing, because we hold a lot of powerful cards, or can, if we choose to face the chaos head on.

II: Do It Ourselves

If we take the example of RateMyProfessors.com and push it out a little bit, we can see the shape of things to come. But there are some other trends which are also becoming visible. The first and most significant of these is the trend toward sharing lecture material online, so that it reaches a very large audience. Spearheaded by Stanford University and the Massachusetts Institute of Technology, both of which have placed their entire set of lectures online through iTunes University, these educational institutions assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, in some sense, but it discounts the possibility that some individual or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open University”, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.

In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In an age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them to make sense of it, contextualizing and informing their understanding, even as the students continue to work their way through the ever-growing set of information. The instructor cannot know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. The instructor facilitates and mentors, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.

The administration has gone, the instructor’s role has evolved, now what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If it can happen entirely online, that will be the classroom. If it requires substantial darshan with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode – vanishing online – and explode – the world will become the classroom.

This, then, can already be predicted from current trends; once RateMyProfessors.com succeeded in destabilizing the institutional hierarchies in education, everything else became inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by these students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.

III: All and Everything

Flexibility and fluidity are the hallmark qualities of the 21st century educational institution. An analysis of the atomic features of the educational process shows that the course is a series of readings, assignments and lectures that happen in a given room on a given schedule over a specific duration. In our drive toward flexibility, how can we reduce the class to its essential, indivisible elements? How can we capture those elements? Once captured, how can we get these elements to the students? And how can the students share elements which they’ve found in their own studies?

Recommendation #1: Capture Everything

I am constantly amazed that we simply do not record almost everything that occurs in public forums as a matter of course. This talk is being recorded for a later podcast – and so it should be. Not because my words are particularly worthy of preservation, but rather because this should now be standard operating procedure for education at all levels, for all subject areas. It simply makes no sense to waste my words – literally, pouring them away – when with very little infrastructure an audio recording can be made, and, with just a bit more infrastructure, a video recording can be made.

This is the basic idea that’s guiding Stanford and MIT: recording is cheap, lecturers are expensive, and students are forgetful. Somewhere in the middle these three trends meet around recorded media. Yes, a student at Stanford who misses a lecture can download and watch it later, and that’s a good thing. But it also means that any student, anywhere, can download the same lecture.

Yes, recording everything means you end up with a wealth of media that must be tracked, stored, archived, referenced and so forth. But that’s all to the good. Every one of these recordings has value, and the more recordings you have, the larger the hoard you’re sitting upon. If you think of it like that – banking your work – the logic of capturing everything becomes immediately clear.

Recommendation #2: Share Everything

While education definitely has value – teachers are paid for the work – that does not mean that resources, once captured, should be tightly restricted to authorized users only. In fact, the opposite is the case: the resources you capture should be shared as broadly as can possibly be managed. More than just posting them onto a website (or YouTube or iTunes), you should trumpet their existence from the highest tower. These resources are your calling card, these resources are your recruiting tool. If someone comes across one of your lectures (or other resources) and is favorably impressed by it, how much more likely will they be to attend a class?

The center of this argument is simple, though subtle: the more something is shared, the more valuable it becomes. You extend your brand with every resource you share. You extend the knowledge of your institution throughout the Internet. Whatever you have – if it’s good enough – will bring people to your front door, first virtually, then physically.

If universities as illustrious (and expensive) as Stanford and MIT could both share their full courseware online, without worrying that it would dilute the value of the education they offer, how can any other institution hope to resist their example? Both voted with their feet, and both show a different way to value education – as experience. You can’t download experience. You can’t bottle it. Experience has to be lived, and that requires a teacher.

Recommendation #3: Open Everything

You will be approached by many vendors promising all sorts of wonderful things that will make the educational processes seamless and nearly magical for both educators and students. Don’t believe a word of it. (If I had a dollar for every gripe I’ve heard about Blackboard and WebCT, I’d be a very wealthy man.) There is no off-the-shelf tool that is perfectly equipped for every situation. Each tool tries to shoehorn an infinity of possibilities into a rather limited palette.

Rather than going for a commercial solution, I would advise you to look at the open-source solutions. Rather than buying a solution, use Moodle, the open-source, Australian answer to digital courseware. Going open means that as your needs change, the software can change to meet those needs. Given the extraordinary pressures education will be under over the next few years, openness is a necessary component of flexibility.

Openness is also about achieving a certain level of device-independence. Education happens everywhere, not just with your nose down in a book, or stuck into a computer screen. There are many screens today, and while the laptop screen may be the most familiar to educators, the mobile handset has a screen which is, in many ways, more vital. Many students will never be very computer literate, but every single one of them has a mobile handset, and every single one of them sends text messages. It’s the bit of computer technology we nearly always overlook – because it is so commonplace. Consider every screen when you capture, and when you share; dealing with them all as equals will help you find audiences you never suspected you’d have.

There is a third aspect of openness: open networks. Educators of every stripe throughout Australia are under enormous pressure to “clean” the network feeds available to students. This is as true for adult students as it is for educators who have a duty-of-care relationship with their students. Age makes no difference, apparently. The Web is big, bad, evil and must be tamed.

Yet net filtering throws the baby out with the bathwater. Services like Twitter get filtered out because they could potentially be disruptive, cutting students off from the amazing learning potential of social messaging. Facebook and MySpace are seen as time-wasters, rather than tools for organizing busy schedules. The list goes on: media sites are blocked because the schools don’t have enough bandwidth to support them; Wikipedia is blocked because teachers don’t want students cheating.

All of this has got to stop. The classroom does not exist in isolation, nor can it continue to exist in opposition to the Internet. Filtering, while providing a stopgap, only leaves students painfully aware of how disconnected the classroom is from the real world. Filtering makes the classroom less flexible and less responsive. Filtering is lazy.

Recommendation #4: Only Connect

Mind the maxim of the 21st century: connection is king. Students must be free to connect with instructors, almost at whim. This becomes difficult for instructors to manage, but it is vital. Mentorship has exploded out of the classroom and, through connectivity, entered everyday life. Students should also be able to freely connect with educational administration; a fruitful relationship will keep students actively engaged in the mechanics of their education.

Finally, students must be free to (and encouraged to) connect with their peers. Part of the reason we worry about lecturers being overburdened by all this connectivity is because we have yet to realize that this is a multi-lateral, multi-way affair. It’s not as though all questions and issues immediately rise to the instructor’s attention. This should happen if and only if another student can’t be found to address the issue. Students can instruct one another, can mentor one another, can teach one another. All of this happens already in every classroom; it’s long past time to provide the tools to accelerate this natural and effective form of education. Again, look to RateMyProfessors.com – it shows the value of “crowdsourced” learning.

Connection is expensive, not in dollars, but in time. But for all its drawbacks, connection enriches us enormously. It allows us to multiply our reach, and learn from the best. The challenge of connectivity is nowhere near as daunting as the capabilities it delivers. Yet we know already that everyone will be looking to maintain control and stability, even as everything everywhere becomes progressively reshaped by all this connectivity. We need to let go, we need to trust ourselves enough to recognize that what we have now, though it worked for a while, is no longer fit for the times. If we can do that, we can make this transition seamless and pleasant. So we must embrace sharing and openness and connectivity; in these there’s the fluidity we need for the future.

Synopsis: Sharing :: Hyperconnectivity

The Day TV Died

On the 18th of October in 2004, a UK cable channel, SkyOne, broadcast the premiere episode of Battlestar Galactica, writer-producer Ron Moore’s inspired revisioning of the decidedly campy 70s television series. SkyOne broadcast the episode as soon as it came off the production line, but its US production partner, the SciFi Channel, decided to hold off until January – a slow month for television – before airing the episodes. The audience for Battlestar Galactica, young and technically adept, made digital recordings of the broadcasts as they went to air, cut out the commercial breaks, then posted them to the Internet.

For an hour-long television programme, a lot of data needs to be dragged across the Internet, enough to clog up even the fastest connection. But these young science fiction fans used a new tool, BitTorrent, to speed the bits on their way. BitTorrent allows a large number of computers (in this case, over 10,000 computers were involved) to share the heavy lifting. Each of the computers downloaded pieces of Battlestar Galactica, and as each got a piece, they offered it up to any other computer which wanted a copy of that piece. Like a forest of hands each trading puzzle pieces, each computer quickly assembled a complete copy of the show.
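
For the technically-minded, the piece-trading at the heart of this can be sketched in a few lines of Python. This is a toy simulation, not the real BitTorrent protocol – the swarm size, piece count and peer names are invented for illustration – but it shows how a single complete copy, traded piecewise among many hands, becomes a complete copy for everyone:

```python
import random

PIECES = set(range(8))  # pretend the episode is split into 8 numbered pieces

class Peer:
    """A toy swarm member: holds some pieces, trades with partners."""
    def __init__(self, name, pieces=None):
        self.name = name
        self.pieces = set(pieces or [])

    def exchange(self, other):
        """Each side hands over one piece the other is missing, if it can."""
        give = self.pieces - other.pieces
        take = other.pieces - self.pieces
        if give:
            other.pieces.add(random.choice(sorted(give)))
        if take:
            self.pieces.add(random.choice(sorted(take)))

# One 'seeder' holds the whole episode; ten fans start with nothing.
seeder = Peer("seeder", PIECES)
swarm = [Peer(f"fan{i}") for i in range(10)]

rounds = 0
while any(peer.pieces != PIECES for peer in swarm):
    rounds += 1
    for peer in swarm:
        partner = random.choice([seeder] + [p for p in swarm if p is not peer])
        peer.exchange(partner)

print(f"Every fan assembled a complete copy after {rounds} rounds of trading.")
```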

All of this happened within a few hours of Battlestar Galactica going to air. That same evening, on the other side of the Atlantic, American fans watched the very same episode that their fellow fans in the UK had just viewed. They liked what they saw, and told their friends, who also downloaded the episode, using BitTorrent. Within just a few days, perhaps a hundred thousand Americans had watched the show.

US cable networks regularly count their audience in hundreds of thousands. A million would be considered incredibly good. Executives for SciFi Channel ran the numbers and assumed that the audience for this new and very expensive TV series had been seriously undercut by this international trafficking in television. They couldn’t have been more wrong. When Battlestar Galactica finally aired, it garnered the biggest audiences SciFi Channel had ever seen – well over 3 million viewers.

How did this happen? Word of mouth. The people who had the chops to download Battlestar Galactica liked what they saw, and told their friends, most of whom were content to wait for SciFi Channel to broadcast the series. The boost given the series by its core constituency of fans helped it over the threshold from cult classic into a genuine cultural phenomenon. Battlestar Galactica has become one of the most widely-viewed cable TV series in history; critics regularly lavish praise on it, and yes, fans still download it, all over the world.

Although it might seem counterintuitive, the widespread “piracy” of Battlestar Galactica was instrumental to its ratings success. This isn’t the only example. BBC’s Dr. Who, leaked to BitTorrent by a (quickly fired) Canadian editor, drummed up another huge audience. It seems, in fact, that “piracy” is good. Why? We live in an age of fantastic media oversupply: there are always too many choices of things to watch, or listen to, or play with. But, if one of our friends recommends something, something they loved enough to spend the time and effort downloading, that carries a lot of weight.

All of this sharing of media means that the media titans – the corporations which produce and broadcast most of the television we watch – have lost control over their own content. Anything broadcast anywhere, even just once, becomes available everywhere, almost instantaneously. While that’s a revolutionary development, it’s merely the tip of the iceberg. The audience now has the ability to share anything they like – whether produced by a media behemoth, or made by themselves. YouTube has allowed individuals (some talented, some less so) to reach audiences numbering in the hundreds of millions. The attention of the audience, increasingly focused on what the audience makes for itself, has been draining ratings away from broadcasters, a drain which accelerates every time someone posts something funny, or poignant, or instructive to YouTube.

The mass media hasn’t collapsed, but it has been hollowed out. The audience occasionally tunes in – especially to watch something newsworthy, in real-time – but they’ve moved on. It’s all about what we’re saying directly to one another. The individual – every individual – has become a broadcaster in his or her own right. The mechanics of this person-to-person sharing, and the architecture of these “New Networks”, are driven by the oldest instincts of humankind.

The New Networks

Human beings are social animals. Long before we became human – or even recognizably close – we became social. For at least 11 million years, before our ancestors broke off from the gorillas and chimpanzees, we cultivated social characteristics. In social groups, these distant forbears could share the tasks of survival: finding food, raising young, and self-defense. Human babies, in particular, take many years to mature, requiring constantly attentive parenting – time stolen away from other vital activities. Living in social groups helped ensure that these defenseless members of the group grew to adulthood. The adults who best expressed social qualities bore more and healthier children. The day-to-day pressures of survival on the African savannahs drove us to be ever more adept with our social skills.

We learned to communicate with gestures, then (no one knows just how long ago) we learned to speak. Each step forward in communication reinforced our social relationships; each moment of conversation reaffirms our commitment to one another, every spoken word an unspoken promise to support, defend and extend the group. As we communicate, whether in gestures or in words, we build models of one another’s behavior. (This is why we can judge a friend’s reaction to some bit of news, or a joke, long before it comes out of our mouths.) We have always walked around with our heads full of other people, a tidy little “social network,” the first and original human network. We can hold about 150 other people in our heads (chimpanzees can manage about 30, gorillas about 15; we’ve got extra brainpower they lack), so, for 90% of human history, we lived in tribes of no more than about 150 individuals, each of us in constant contact, a consistent communication building and reinforcing bonds which would make us the most successful animals on Earth. We learned from one another, and shared whatever we learned; a continuity of knowledge passed down seamlessly, generation upon generation, a chain of transmission that still survives within the world’s indigenous communities. Social networks are the gentle strings which connect us to our origins.

This is the old network. But it’s also the new network. A few years ago, researcher Mizuko Ito studied teenagers in Japan, to find that these kids – all of whom owned mobile telephones – sent as many as a few hundred text messages, every single day, to the same small circle of friends. These messages could be intensely meaningful (the trials and tribulations of adolescent relationships), or just pure silliness; the content mattered much less than that constant reminder and reinforcement of the relationship. This “co-presence,” as she named it, represents the modern version of an incredibly ancient human behavior, a behavior that had been unshackled by technology, to span vast distances. These teens could send a message next door, or halfway across the country. Distance mattered not: the connection was all.

In 2001, when Ito published her work, many dismissed her findings as a by-product of those “wacky Japanese” and their technophile lust for new toys. But now, teenagers everywhere in the developed world do the same thing, sending tens to hundreds of text messages a day. When they run out of money to send texts (which they do, unless they have very wealthy parents), they simply move online, using instant messaging and MySpace and other techniques to continue the never-ending conversation.

We adults do it too, though we don’t recognize it. Most of us who live some of our lives online receive a daily dose of email: we flush the spam, answer the requests and queries of our co-workers, deal with any family complaints. What’s left over, from our friends, more and more consists of nothing other than a link to something – a video, a website, a joke – somewhere on the Internet. This new behavior, actually as old as we are, dates from the time when sharing information ensured our survival. Each time we find something that piques our interest, we immediately think, “hmm, I bet so-and-so would really like this.” That’s the social network in our heads, grinding away, filtering our experience against our sense of our friends’ interests. We then hit the “forward” button, sending the tidbit along, reinforcing that relationship, reminding them that we’re still here – and still care. These “Three Fs” – find, filter and forward – have become the cornerstone of our new networks, information flowing freely from person-to-person, in weird and unpredictable ways, unbounded by geography or simultaneity (a friend can read an email weeks after you send it), but always according to long-established human behaviors.

One thing is different about the new networks: we are no longer bounded by the number of individuals we can hold in our heads. Although we’ll never know more than 150 people well enough for them to take up some space between our ears (unless we grow huge, Spock-like minds) our new tools allow us to reach out and connect with casual acquaintances, or even people we don’t know. Our connectivity has grown into “hyperconnectivity”, and a single individual, with the right message, at the right time, can reach millions, almost instantaneously.
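
The arithmetic behind “reach millions” is simple enough to sketch. Assume, purely for illustration, that each person who finds a message salient forwards it to a Dunbar-sized circle of roughly 150 contacts, and that those contacts do the same. Real networks overlap heavily, so the true figures are smaller, but the shape of the growth is the point:

```python
# Back-of-the-envelope reach of a forwarded message, assuming each
# recipient passes it along to a fixed circle of contacts.  This is a
# deliberate simplification: real social networks overlap, so actual
# reach grows more slowly than this.
CONTACTS_PER_PERSON = 150  # roughly the Dunbar-sized circle noted above

reach = 1
for hop in range(1, 4):
    reach *= CONTACTS_PER_PERSON
    print(f"after {hop} hop(s): about {reach:,} people")

# after 1 hop(s): about 150 people
# after 2 hop(s): about 22,500 people
# after 3 hop(s): about 3,375,000 people
```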

This simple, sudden, subtle change in culture has changed everything.

The Nuclear Option

On the 12th of May in 2008, a severe earthquake shook a vast area of central and western China, centered in the Chinese province of Sichuan. Once the shaking stopped – in some places, it lasted as long as three minutes – people got up (when they could, as many lay under collapsed buildings), dusted themselves off, and surveyed the damage. Those who still had power turned to their computers to find out what had happened, and share what had happened to them. Some of these people used so-called “social messaging services”, which allowed them to share a short message – similar to a text message – with hundreds or thousands of acquaintances in their hyperconnected social networks.

Within a few minutes, people in every corner of the planet knew about the earthquake – well in advance of any reports from Associated Press, the BBC, or CNN. This network of individuals, sharing information with each other through their densely hyperconnected networks, spread the news faster, more effectively, and more comprehensively than any global broadcaster.

This had happened before. On 7 July 2005, the first pictures of the wreckage caused by bombs detonated within London’s subway system found their way onto Flickr, an Internet photo-sharing service, long before being broadcast by the BBC. A survivor, walking past one of the destroyed subway cars, took snaps from her mobile and sent them directly on to Flickr, where everyone on the planet could have a peek. One person can reach everyone else, if what they have to say (or show) merits such attention, because that message, even if seen by only one other person, will be forwarded on and on, through our hyperconnected networks, until it has been received by everyone for whom that message has salience. Just a few years ago, it might have taken hours (or even days) for a message to traverse the Human Network. Now it happens in a few seconds.

Most messages don’t have a global reach, nor do they need one. It is enough that messages reach interested parties, transmitted via the Human Network, because just that alone has rewritten the rules of culture. An intemperate CEO screams at a consultant, who shares the story through his network: suddenly, no one wants to work for the CEO’s firm. A well-connected blogger gripes about problems with his cable TV provider, a story forwarded along until – just a half-hour later – he receives a call from a vice-president of that company, contrite with apologies and promises of an immediate repair. An American college student, arrested in Egypt for snapping some photos in the wrong place at the wrong time, text messages a single word – “ARRESTED” – to his social network, and 24 hours later, finds himself free, escorted from jail by a lawyer and the American consul, because his network forwarded this news along to those who could do something about his imprisonment.

Each of us, thoroughly hyperconnected, brings the eyes and ears of all of humanity with us, wherever we go. Nothing is hidden anymore, no secret safe. We each possess a ‘nuclear option’ – the capability to go wide, instantaneously, bringing the hyperconnected attention of the Human Network to a single point. This dramatically empowers each of us, a situation we are not at all prepared for. A single text message, forwarded perhaps a million times, organized the population of Xiamen, a coastal city in southern China, against a proposed chemical plant – despite the best efforts of the Chinese government to censor the message as it passed through the state-run mobile telephone network. Another message, forwarded around a community of white supremacists in Sydney’s southern suburbs, led directly to the Cronulla Riots, two days of rampage and attacks against Sydney’s Lebanese community, in December 2005.

When we watch or read stories about the technologies of sharing, they almost always center on recording companies and film studios crying poverty, of billions of dollars lost to ‘piracy’. That’s a sideshow, a distraction. The media companies have been hurt by the Human Network, but that’s only a minor side-effect of the huge cultural transformation underway. As we plug into the Human Network, and begin to share that which is important to us with others who will deem it significant, as we learn to “find the others”, reinforcing the bonds to those others every time we forward something to them, we dissolve the monolithic ties of mass media and mass culture. Broadcasters, who spoke to millions, are replaced by the Human Network: each of us, networks in our own right, conversing with a few hundred well-chosen others. The cultural consensus, driven by the mass media, which bound 20th-century nations together in a collective vision, collapses into a Babel-like configuration of social networks which know no cultural or political boundaries.

The bomb has already dropped. The nuclear option has been exercised. The Human Network brought us together, and broke us apart. But in these fragments and shards of culture we find an immense vitality, the protean shape of the civilization rising to replace the world we have always known. It all hinges on the transition from sharing to knowing.

The Nuclear Option

I.

One of the things I find the most exhilarating about Australia is the relative shallowness of its social networks. Where we’re accustomed to hearing about the “six degrees of separation” which connect any two individuals on Earth, in Australia we live with social networks which are, in general, about two levels deep. If I don’t know someone, I know someone who knows that someone.

While this may be slightly less true across the population as a whole (I may not know a random individual living in Kalgoorlie, and might not know someone who knows them) it is specifically quite true within any particular professional domain. After four years living in Sydney, attending and speaking at conferences throughout the nation, I’ve met most everyone involved in the so-called “new” media, and a great majority of the individuals involved in film and television production.

The most consequential of these connections sit in my address book, my endless trail of email, and my ever-growing list of Facebook friends. These connections evolve into relationships as we bat messages back and forth: emails and text messages, and links to the various interesting tidbits we find, filter and forward to those we imagine will gain the most from this informational hunting & gathering. Each transmission reinforces the bond between us – or, if I’ve badly misjudged you, ruptures that bond. The more we share with each other, the stronger the bond becomes. It becomes a covert network; invisible to the casual observer, but resilient and increasingly important to each of us. This is the network that carries gossip – Australians are great gossipers – as well as insights, opportunities, and news of the most personal sort.

In a small country, even one as geographically dispersed as Australia, this means that news travels fast. This is interesting to watch, and terrifying to participate in, because someone’s outrageous behavior is shared very quickly through these networks. Consider Roy Greenslade’s comments about Andrew Jaspan, at Friday’s “Future of Journalism” conference, which made their way throughout the nation in just a few minutes, via “live” blogs and texts, getting star billing in Friday’s Crikey. While Greenslade damned Jaspan, I was trapped in studio 21 at ABC Ultimo, shooting The New Inventors, yet I found out about his comments almost the moment I walked off set. Indeed, connected as I am to individuals such as Margaret Simmons and Rosanne Bersten (both of whom were at the conference) it would have been more surprising if I hadn’t learned about it.

All of this means that we Australians are under tremendous pressure to play nice – at least in public. Bad behavior (or, in this case, a terrifyingly honest assessment of a colleague’s qualifications) so excites the network of connections that it propagates immediately. And, within our tight little professional social networks, we’re so well connected that it propagates ubiquitously. Everyone to whom Greenslade’s comments were salient heard about them within a few minutes after he uttered them. There was a perfect meeting between the message and its intended audience.

That is a new thing.

II.

Over the past few months, I have grown increasingly enamoured with one of the newest of the “Web2.0” toys, a site known as “Twitter”. Twitter originally billed itself as a “micro-blogging” site: you can post messages (“tweets”, in Twitter parlance) of no more than 140 characters to Twitter, and these tweets are distributed to a list of “followers”. Conversely, you are sent the tweets created by all of the individuals whom you “follow”. One of the beauties of Twitter is that it is multi-modal; you can send a tweet via text message, through a web page, or from an ever-growing range of third-party applications. Twitter makes it very easy for a bright young programmer to access Twitter’s servers – which means people are now doing all sorts of interesting things with Twitter.
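
To give a sense of how easy that access was, here is a rough sketch of what a third-party client’s ‘post a tweet’ call might look like. The endpoint URL, credentials and function name are placeholders I’ve invented for illustration, not documentation of Twitter’s actual API, though the early API really did accept simple authenticated HTTP requests:

```python
import requests  # third-party HTTP library: pip install requests

# Placeholder endpoint and credentials, shown only to illustrate the shape
# of a status-update call from a hypothetical third-party Twitter client.
API_URL = "https://example.invalid/statuses/update.json"
USERNAME = "my_account"
PASSWORD = "my_password"

def post_tweet(text: str) -> None:
    """Send a short status update (140 characters or fewer)."""
    if len(text) > 140:
        raise ValueError("tweets are limited to 140 characters")
    response = requests.post(
        API_URL,
        data={"status": text},
        auth=(USERNAME, PASSWORD),  # early clients authenticated this simply
    )
    response.raise_for_status()

# Example (commented out: the placeholder host above will not resolve):
# post_tweet("Question: Does anyone know how many Twitter users there are at present?")
```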

At the moment, Twitter is still in the domain of the early-adopters. Worldwide, there are only about a million Twitter users, with about 200,000 active in any week – and these folks are sending an average of three million tweets a day. That may not sound like many people, but these 200,000 “Twitteratti” are among the thought-leaders in new media. Their influence is disproportionate. They may not include the CIOs of the largest institutions in the world, but they do include the folks whom those CIOs turn to for advice. And whom do these thought-leaders turn to for advice? Twitter.

A simple example: When I sat down to write this, I had no idea how many Twitter users there are at present, so I posted the following tweet:

Question: Does anyone know how many Twitter users (roughly) there are at present? Thanks!

Within a few minutes, Stilgherrian (who writes for Crikey) responded with the following:

There are 1M+ Twitter users, with 200,000 active in any week.

Stilgherrian also passed along a link to his blog where he discusses Twitter’s statistics, and muses upon his increasing reliance on the service.

Before I asked the Twitteratti my question, I did the logical thing: I searched Google. But Google didn’t have any reasonably recent results – the most recent dated from about a year ago. No love from Google. Instead, I turned to my 250-or-so Twitter followers, and asked them. Given my own connectedness in the new media community in Australia, I have, through Twitter, access to an enormous reservoir of expertise. If I don’t know the answer to a question – and I can’t find an answer online – I do know someone, somewhere, who has an answer.

Twitter, gossipy, noisy, inane and frequently meaningless, acts as my 21st-century brain trust. With Twitter I have immediate access to a broad range of very intelligent people, whose interests and capabilities overlap mine enough that we can have an interesting conversation, but not so completely that we have nothing to share with one another. Twitter extends my native capability by giving me a high degree of continuous connectivity with individuals who complement those capabilities.

That’s a new thing, too.

William Gibson, the science fiction author and keen social observer, once wrote, “The street finds its own use for things, uses the manufacturers never intended.” The true test of the value of any technology is, “Does the street care?” In the case of Twitter, the answer is a resounding “Yes!”. This personal capacity enhancement – or, as I phrase it, “hyperempowerment” – is not at all what Twitter was designed to do. It was designed to facilitate the posting of short, factual messages. The harvesting of the expertise of my oh-so-expert social network is a behavior that grew out of my continued interactions with Twitter. It wasn’t planned for, either by Twitter’s creators, or by me. It just happened. And not every Twitter user puts Twitter to this use. But some people, who see what I’m doing, will copy my behavior (which probably didn’t originate with me, though I experienced a penny-drop moment when I realized I could harvest expertise from my social network using Twitter), because it is successful. This behavior will quickly replicate, until it’s a bog-standard expectation of all Twitter users.

III.

On Monday morning, before I sat down to write, I checked the morning’s email. Several had come in from individuals in the US, including one from my friend GregoryP, who had spent the last week sweating through the creation of a presentation on the value of social media. As many of you know, companies often hire outside consultants, like GregoryP, when the boss needs to hear something that his or her underlings are too afraid to say themselves. Such was the situation that GregoryP walked into, with sadly familiar results. From his blog:

As for that “secret” company – it seems fairly certain to me that I won’t be working for any dot-com pure plays in the near future. As I touched on in my Twitter account, my presentation went well but the response to it was something more than awful. As far as I could tell, the generally-absent Director of the Company wasn’t briefed on who I was or why I was there, exactly – she took the opportunity to impugn my credibility and credentials and more or less acted as if I’d tried to rip her company off.

I immediately read GregoryP’s Twitter stream, to find that he had been used, abused and insulted by the MD in question.

Which was a big, big mistake.

GregoryP is not very well connected on Twitter. He’s only just started using it. A fun little website, TweetWheel, shows all nine of his connections. But two of his connections – to Raven Zachary and myself – open into a much, much wider world of Twitteratti. Raven has over 600 people following his tweets, and I have over 250 followers. Both of us are widely-known, well-connected individuals. Both of us are good friends with GregoryP. And both of us are really upset at the bad treatment he received.

Here’s how GregoryP finished off that blog post:

Let’s just say it’ll be a cold day in hell before I offer any help, friendly advice or contacts to these people. I’d be more specific about who they are but I wouldn’t want to give them any more promotion than I already have.

What’s odd here – and a sign that the penny hasn’t really dropped – is that GregoryP doesn’t really understand that “promotion” isn’t so much a beneficial influence as a chilling threat to lay waste to this company’s business prospects. This MD saw GregoryP standing before her, alone and defenseless, bearing a message that she was of no mind to receive, despite the fact that her own staff set this meeting up, for her own edification.

What this unfortunate MD did not see – because she does not “get” social media – was Raven and myself, directly connected to GregoryP. Nor did she see the hundreds of people we connect directly to, nor the tens of thousands connected directly to them. She thought she was throwing her weight around. She was wrong. She was making an ass out of herself, behaving very badly in a world where bad behavior is very, very hard to hide.

All GregoryP need do, to deliver the coup de grace, is reveal the name of the company in question. As word spread – that is, nearly instantaneously – that company would find it increasingly difficult to recruit good technology consultants, programmers, and technology marketers, because we all share our experiences. Sharing our experiences improves our effectiveness, and prevents us from making bad decisions. Such as working with this as-yet-unnamed company.

The MD walked into this meeting believing she held all the cards; in fact, GregoryP is the one with his finger poised over the launch button. With just a word, he could completely ruin her business. This utter transformation in power politics – “hyperconnectivity” leading to hyperempowerment – is another brand new thing. This brand new thing is going to change everything it touches, every institution and every relationship any individual brings to those institutions. Many of those institutions will not survive, because their reputations will not be able to withstand the glare of hyperconnectivity backed by the force of hyperempowerment.

The question before us today is not, “Who is the audience?”, but rather, “Is there anyone who isn’t in the audience?” As you can now see, a single individual – anywhere – is the entire audience. Every single person is now so well-connected that anything which happens to them or in front of them reaches everyone it needs to reach, almost instantaneously.

This newest of new things has only just started to rise up and flex its muscles. The street, ever watchful, will find new uses for it, uses that corporations, governments and institutions of every stripe will find incredibly distasteful, chaotic, and impossible to manage.