Mothers of Innovation

Introduction:  Olden Days

In February 1984, seeking a reprieve from the very cold and windy streets of Boston, Massachusetts, I ducked inside a computer store.  I saw the usual array of IBM PCs and peripherals, the Apple ][, probably even an Atari system.  Prominently displayed at the front of the store was my first Macintosh.  It wasn’t known as a Mac 128K or anything like that.  It was simply Macintosh.  I walked up to it, intrigued – already, the Reality Distortion Field was capable of luring geeks like me to their doom – and took in the unfamiliar graphical desktop and the cute little mouse.  Sitting down in the chair before the machine, I grasped the mouse and moved the cursor across the screen.  But how do I get it to do anything? I wondered.  Click.  Nothing.  Click, drag – oh look, some of these things changed color!  But now what?  Gah.  This is too hard.

That’s when I gave up, pushed myself away from that first Macintosh, and pronounced this experiment in ‘intuitive’ computing a failure.  Graphical computing isn’t intuitive; that’s a bit of a marketing fib.  It’s a metaphor, and you need to grasp the metaphor – need to be taught what it means – to work fluidly within the environment.  The metaphor is easy to apprehend once it has become the dominant technique for working with computers – as it has in 2010.  Twenty-six years ago, it was a different story.  You can’t assume that people will intuit what to do with your abstract representations of data or your arcane interface methods.  Intuition isn’t always intuitively obvious.

A few months later I had a job at a firm which designed bar code readers.  (That, btw, was the most boring job I’ve ever had, the only one I got fired from for insubordination.)  We were designing a bar code reader for Macintosh, so we had one in-house, a unit with a nice carrying case so that I could ‘borrow’ it on weekends.  Which I did.  Every weekend.  The first weekend I got it home, unpacked it, plugged it in, popped in the system disk, booted it, ejected the system disk, popped in the applications disk, and worked my way through MacPaint and MacWrite and on to my favorite application of all – Hendrix.

Hendrix took advantage of the advanced sound synthesis capabilities of Macintosh.  Presented with a perfectly white screen, you dragged the mouse along the display.  The position, velocity, and acceleration of the pointer determined what kind of heavily altered but unmistakably guitar-like sounds came out of the speaker.  For someone who had lived with the bleeps and blurps of the 8-bit world, it was a revelation.  It was, in the vernacular of Boston, ‘wicked’.  I couldn’t stop playing with Hendrix.  I invited friends over, showed them, and they couldn’t stop playing with Hendrix.  Hendrix was the first interactive computer program that I gave a damn about, the first one that really showed me what a computer could be used for.  Not just pushing paper or pixels around, but an instrument, and an essential tool for human creativity.

Everything that’s followed in all the years since has been interesting to me only when it pushes the boundaries of our creativity.  I grew entranced by virtual reality in the early 1990s, because of the possibilities it offered up for an entirely new playing field for creativity.  When I first saw the Web, in the middle of 1993, I quickly realized that it, too, would become a cornerstone of creativity.  That roughly brings us forward from the ‘olden days’ to today.

This morning I want to explore creativity along the axis of three classes of devices, as represented by the three Apple devices that I own: the desktop (my 17” MacBook Pro Core i7), the mobile (my iPhone 3GS 32GB), and the tablet (my iPad 16GB 3G).  I will draw from my own experience as both a user and developer for these devices, using that experience to illuminate a path before us.  So much is in play right now, so much is possible, all we need do is shine a light to see the incredible opportunities all around.

I:  The Power of Babel

I love OSX, and have used it more or less exclusively since 2003, when it truly became a usable operating system.  I’m running Snow Leopard on my MacBook Pro, and so far have suffered only one Grey Screen Of Death.  (And, if I know how to read a stack trace, that was probably caused by Flash.  Go figure.)  OSX is solid, it’s modestly secure, and it has plenty of eye candy.  My favorite bit of that is Spaces, which allows me to segregate my workspace into separate virtual screens.

Upper left hand space has Mail.app, upper right hand has Safari, lower right hand has TweetDeck and Skype, while the lower left hand is reserved for the task at hand – in this case, writing these words.  Each of the apps, except Microsoft Word, is inherently Internet-oriented, an application designed to facilitate human communication.  This is the logical and inexorable outcome of a process that began back in 1969, when the first nodes began exchanging packets on the ARPANET.  Phase one: build the network.  Phase two: connect everything to the network.  Phase three: PROFIT!

That seems to have worked out pretty much according to plan.  Our computers have morphed from document processors – that’s what most computers of any stripe were used for until about 1995 – into communication machines, handling the hard work of managing a world that grows increasingly connected.  All of this communication is amazing and wonderful and has provided the fertile ground for innovations like Wikipedia and Twitter and Skype, but it also feels like too much of a good thing.  Connection has its own gravitational quality – the more connected we become, the more we feel the demand to remain connected continuously.

We salivate like Pavlov’s dogs every time our email application rewards us with the ‘bing’ of an incoming message, and we keep one eye on Twitter all day long, just in case something interesting – or at least diverting – crosses the transom.  Blame our brains.  They’re primed to release the pleasure neurotransmitter dopamine at the slightest hint of a reward; connecting with another person is (under most circumstances) a guaranteed hit of pleasure.

That’s turned us into connection junkies.  We pile connection upon connection upon connection until we numb ourselves into a zombie-like overconnectivity, then collapse and withdraw, feeling the spiral of depression as we realize we can’t handle the weight of all the connections that we want so desperately to maintain.

Not a pretty picture, is it?   Yet the computer is doing an incredible job, acting as a shield between what our brains are prepared to handle and the immensity of information and connectivity out there.  Just as consciousness is primarily the filtering of signal from the noise of the universe, our computers are the filters between the roaring insanity of the Internet and the tidy little gardens of our thoughts.  They take chaos and organize it.  Email clients are excellent illustrations of this; the best of them allow us to sort and order our correspondence based on need, desire, and goals.  They prevent us from seeing the deluge of spam which makes up more than 90% of all SMTP traffic, and help us to stay focused on the task at hand.

Electronic mail was just the beginning of the revolution in social messaging; today we have Tweets and instant messages and Foursquare checkins and Flickr photos and YouTube videos and Delicious links and Tumblr blogs and endless, almost countless feeds.  All of it recommended by someone, somewhere, and all of it worthy of at least some of our attention.  We’re burdened by too many web sites and apps needed to manage all of this opportunity for connectivity.  The problem has become most acute on our mobiles, where we need a separate app for every social messaging service.

This is fine in 2010, but what happens in 2012, when there are ten times as many services on offer, all of them delivering interesting and useful things?  All these services, all these websites, and all these little apps threaten to drown us with their own popularity.

Does this mean that our computers are destined to become like our television tuners, which may have hundreds of channels on offer, but never see us watch more than a handful of them?  Do we have some sort of upper boundary on the amount of connectivity we can handle before we overload?  Clay Shirky has rightly pointed out that there is no such thing as information overload, only filter failure.  If we find ourselves overwhelmed by our social messaging, we’ve got to build some better filters.

This is the great growth opportunity for the desktop, the place where the action will be happening – when it isn’t happening in the browser.  Since the desktop is the nexus of the full power of the Internet and the full set of your own data (even the data stored in the cloud is accessed primarily from your desktop), it is the logical place to create some insanely great next-generation filtering software.

That’s precisely what I’ve been working on.  This past May I got hit by a massive brainwave – one so big I couldn’t ignore it, couldn’t put it down, couldn’t do anything but think about it obsessively.

I wanted to create a tool that could aggregate all of my social messaging – email, Twitter, RSS and Atom feeds, Delicious, Flickr, Foursquare, and on and on and on.  I also wanted the tool to be able to distribute my own social messages, in whatever format I wanted to transmit, through whatever social message channel I cared to use.

Then I wouldn’t need to go hither and yon, using Foursquare for this, and Flickr for that and Twitter for something else.  I also wouldn’t have to worry about which friends used which services; I’d be able to maintain that list digitally, and this tool would adjust my transmissions appropriately, sending messages to each as they want to receive them, allowing me to receive messages from each as they care to send them.

That’s not a complicated idea.  Individuals and companies have been nibbling around the edges of it for a while.

I am going the rest of the way, creating a tool that functions as the last 'social message manager' that anyone will need.  It’s called Plexus, and it functions as middleware – sitting between the Internet and whatever interface you might want to cook up to view and compose all of your social messaging.
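To make that middleware idea concrete, here is a minimal sketch in Python – not Plexus itself, and none of these class or method names come from its actual code; they are illustrative assumptions.  The shape is simple: one adapter per service normalizes incoming items into a single message format, an aggregator merges them into one stream, and a router sends each outbound message over whichever channel a given friend prefers.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Protocol


    @dataclass
    class Message:
        """One normalized social message, whatever service it came from."""
        author: str
        body: str
        service: str                   # e.g. "twitter", "email", "rss"
        received: datetime = field(default_factory=datetime.utcnow)


    class ServiceAdapter(Protocol):
        """Each service (Twitter, email, Flickr, ...) gets one adapter."""
        name: str
        def fetch(self) -> list[Message]: ...                   # pull new items, normalized
        def send(self, recipient: str, body: str) -> None: ...  # push one outbound message


    class Aggregator:
        """Pulls from every adapter and merges the results into a single stream."""

        def __init__(self, adapters: list[ServiceAdapter]):
            self.adapters = adapters

        def inbox(self) -> list[Message]:
            items = [m for a in self.adapters for m in a.fetch()]
            return sorted(items, key=lambda m: m.received, reverse=True)


    class Router:
        """Sends one outbound message to each friend over their preferred channel."""

        def __init__(self, adapters: list[ServiceAdapter], preferences: dict[str, str]):
            self.by_name = {a.name: a for a in adapters}
            self.preferences = preferences          # friend -> preferred service name

        def broadcast(self, friends: list[str], body: str) -> None:
            for friend in friends:
                channel = self.preferences.get(friend, "email")  # fall back to email
                self.by_name[channel].send(friend, body)

The interesting work – the filtering, the slicing and dicing – happens on top of a stream like the one inbox() returns, and that is exactly where front-end tools come in.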

Now were I devious, I’d coyly suggest that a lot of opportunity lies in building front-end tools for Plexus, ways to bring some order to the increasing flow of social messaging.  But I’m not coy.  I’ll come right out and say it: Plexus is an open-source project, and I need some help here.  That’s a reflection of the fact that we all need some help here.  We’re being clubbed into submission by our connectivity.  I’m trying to develop a tool which will allow us to create better filters, flexible filters, social filters, all sorts of ways of slicing and dicing our digital social selves.  That’s got to happen as we invent ever more ways to connect, and as we do all of this inventing, the need for such a tool becomes more and more clear.

We see people throwing their hands up, declaring ‘email bankruptcy’, quitting Twitter, or committing ‘Facebookicide’, because they can’t handle the consequences of connectivity.

We secretly yearn for that moment after the door to the aircraft closes, and we’re forced to turn our devices off for an hour or two or twelve.  Finally, some time to think.  Some time to be.  Science backs this up; the measurable consequence of over-connectivity is that we don’t have the mental room to roam with our thoughts, to ruminate, to explore and play within our own minds.  We’re too busy attending to the next message.  We need to disconnect periodically, and focus on the real.  We desperately need tools which allow us to manage our social connectivity better than we can today.

Once we can do that, we can filter the noise and listen to the music of others.  We will be able to move so much more quickly – together – and it will be another electronic renaissance: just like 1994, with Web 1.0, and 2004, with Web 2.0.

That’s my hope, that’s my vision, and it’s what I’m directing my energies toward.  It’s not the only direction for the desktop, but it does represent the natural evolution of what the desktop has become.  The desktop has been shaped not just by technology, but by the social forces stirred up by our technology.

It is not an accident that our desktops act as social filters; they are the right tool at the right time for the most important job before us – how we communicate with one another.  We need to bring all of our creativity to bear on this task, or we’ll find ourselves speechless, shouted down, lost at another Tower of Babel.

II: The Axis of Me-ville

Three and a half weeks ago, I received a call from my rental agent.  My unit was going on the auction block – would I mind moving out?  Immediately?  I’ve lived in the same flat since I first moved to Sydney, seven years ago, so this news came as quite a shock.

I spent a week going through the five stages of grief: denial, anger, bargaining, depression, and acceptance.  The day I reached acceptance, I took matters in hand, the old-fashioned way: I went online, to domain.com.au, and looked for rental units in my neighborhood.

Within two minutes I learned that there were two units for rent within my own building!

When you stop to think about it, that’s a bit weird.  There were no signs posted in my building, no indication that either of the units was for rent.  I’d heard nothing from the few neighbors I know well enough to chat with.  They didn’t know either.  Something was happening right underneath our noses – something of immediate relevance to me – and none of us knew about it.  Why?  Because we don’t know our neighbors.

For city dwellers this is not an unusual state of affairs.  One of the pleasures of the city is its anonymity.  That’s also one of its great dangers.  The two go hand-in-hand.  Yet the world of 2010 does not offer up this kind of anonymity easily.  Consider: we can re-establish a connection with someone we went to high school with, thirty years ago – and really never thought about in all the years that followed – but still not know the names of the people in the unit next door, names we might utter with bitter anger after they’ve turned up the music again.  How can we claim that there’s any social revolution if we can’t be connected to the people we’re physically close to?  Emotional closeness is important, and financial closeness (your coworkers) is also salient, but both should be trumped by the people who breathe the same air as you.

It is almost impossible to bridge the barriers that separate us from one another, even when we’re living on top of each other.

This is where the mobile becomes important, because the mobile is the singular social device.  It is the place where our human relationships reside.  (Plexus is eventually bound for the mobile, but in a few years’ time, when the devices are nimble enough to support it.)  Yet the mobile is more than just the social crossroads.  It is the landing point for all of the real-time information you need to manage your life.

On the home page of my iPhone, two apps stand out as the aids to the real-time management of my life: RainRadar AU and TripView.  I am a pedestrian in Sydney, so it’s always good to know when it’s about to rain, how hard, and how long.  As a pedestrian, I make frequent use of public transport, so I need to know when the next train, bus or ferry is due, wherever I happen to be.  The mobile is my networked, location-aware sensor.  It gathers up all of the information I need to ease my path through life.  This demonstrates one of the unstated truisms of the 21st century: the better my access to data, the more effective I will be, moment to moment.  The mobile has become that instantaneous access point, simply because it’s always at hand, or in the pocket or pocketbook or backpack.  It’s always with us.

In February I gave a keynote at a small Melbourne science fiction convention.  After I finished speaking a young woman approached me and told me she couldn’t wait until she could have some implants, so her mobile would be with her all the time.  I asked her, “When is your mobile ever more than a few meters away from you?  How much difference would it make?  What do you gain by sticking it underneath your skin?”  I didn’t even bother to mention the danger from all that subcutaneous microwave radiation.  It’s silly, and although our children or grandchildren might have some interesting implants, we need to accept the fact that the mobile is already a part of us.

We’re as Borg-ed up as we need to be.  Probably we’re more Borg-ed up than we can handle.

It’s not just that our mobiles have become essential.  It’s getting so that we can’t put them down, even in situations when we need to focus on the task at hand – driving, or having dinner with our partners, or trying to push a stroller across an intersection.  We’re addicted, and the first step to treating that addiction is to admit we have a problem.  But here’s the dilemma: we’re working hard to invent new ways to make our mobiles even more useful, indispensable and alluring.

We are the crack dealers.  And I’m encouraging you to make better crack.  Truth be told, I don’t see this ‘addiction’ as a bad thing, though goodness knows the tabloid newspapers and cultural moralists will make whatever they can of it.  It’s an accommodation we will need to make, a give-and-take.  We gain an instantaneous connection to one another, a kind of cultural ‘telepathy’ that would have made Alexander Graham Bell weep for joy.

But there's more: we also gain a window into the hitherto hidden world of data that is all around us, a shadow and double of the real world.

For example, I can now build an app that allows me to wander the aisles of my local supermarket, bringing all of the intelligence of the network with me as I shop.  I hold the mobile out in front of me, its camera capturing everything it sees, which it passes along to the cloud, so that Google Goggles can do some image processing on it, and pick out the identifiable products on the shelves.

This information can then be fed back into a shopping list – created by me, or by my doctor, or by my bank account – because I might be trying to optimize for my own palate, my blood pressure, or my budget – and as I come across the items I should purchase, my mobile might give a small vibration.  When I look at the screen, I see the shelves, but the items I should purchase are glowing and blinking.
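Sketching that loop in code makes the moving parts clearer.  This is a hedged, hypothetical Python sketch: recognize_products() stands in for whatever cloud image-recognition service does the heavy lifting (I’m not assuming any particular API), and vibrate() and highlight() stand in for the phone’s haptics and augmented-reality overlay.

    from dataclasses import dataclass


    @dataclass
    class ShoppingItem:
        name: str
        source: str          # who put it on the list: "me", "my doctor", "my bank account"


    def recognize_products(camera_frame: bytes) -> list[str]:
        # Stand-in for the cloud call: ship the frame off, get back the names of
        # whatever products the recognizer could identify on the shelf.
        return []


    def vibrate() -> None:
        # Stand-in for the phone's haptic feedback.
        print("bzzt")


    def highlight(product: str) -> None:
        # Stand-in for drawing the glowing AR overlay around a recognized product.
        print(f"highlight: {product}")


    def on_camera_frame(frame: bytes, shopping_list: list[ShoppingItem]) -> None:
        # One pass of the loop: recognize what's on the shelf, intersect it with the
        # shopping list, and nudge the shopper if anything matches.
        visible = {p.lower() for p in recognize_products(frame)}
        wanted = [item for item in shopping_list if item.name.lower() in visible]
        if wanted:
            vibrate()
            for item in wanted:
                highlight(item.name)

The decision about what to highlight is just a set intersection between what the camera sees and what the list says; the intelligence lives in who – or what – wrote the list.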

The technology to realize this – augmented reality with a few extra bells and whistles – is already in place.  This is the sort of thing that could be done today, by someone enterprising enough to knit all these separate threads into a seamless whole.  There’s clearly a need for it, but that’s just the beginning.  This is automated, computational decision making.  It gets more interesting when you throw people into the mix.

Consider: in December I was on a road trip to Canberra.  When I arrived there, at 6 pm, I wondered where to have dinner.  Canberra is not known for its scintillating nightlife – I had no idea where to dine.  I threw the question out to my 7000 Twitter followers, and in the space of time that it took to shower, I had enough responses that I could pick and choose among them, and ended up having the best bowl of seafood laksa that I’d had since I moved to Australia!

That’s the kind of power that we have in our hands, but don’t yet know how to use.

We are all well connected, instantaneously and pervasively, but how do we connect without confusing ourselves and one another with constant requests?  Can we manage that kind of connectivity as a background task, with our mobiles acting as the arbiters?  The mobile is the crossroads between our social lives, our real-time lives, and our data-driven selves.  All of it comes together in our hands.  The device is nearly bursting with the potential unleashed as we bring these separate streams together.  It becomes hypnotizing and formidable, though it rings less and less.  Voice traffic is falling nearly everywhere in the developed world, but mobile usage continues to skyrocket.  Our mobiles are too important to use for talking.

Let’s tie all of this together: I get evicted, and immediately tell my mobile, which alerts my neighbors and friends, and everyone sets to work finding me a new place to live.  When I check out their recommendations, I get an in-depth view of my new potential neighborhoods, delivered through a marriage of augmented reality and the cloud computing power located throughout the network.  Finally, when I’m about to make a decision, I throw it open for the people who care enough about me to ring in with their own opinions, experiences, and observations.  I make an informed decision, quickly, and am happier as a result, for all the years I live in my new home.

That’s what’s coming.  That’s the potential that we hold in the palms of our hands.  That’s the world you can bring to life.

III:  Through the Looking Glass

Finally, we turn to the newest and most exciting of Apple’s inventions.  There seemed to be nothing new to say about the tablet – after all, Bill Gates declared ‘The Year of the Tablet’ way back in 2001.  But it never happened.  Tablets were too weird, too constrained by battery life and weight and, most significantly, the user experience.  It’s not as though you can take a laptop computer, rip away the keyboard and slap on a touchscreen to create a tablet computer, though this is what many people tried for many years.  It never really worked out for them.

Instead, Apple leveraged what they learned from the iPhone’s touch interface.  Yet that alone was not enough.  I was told by sources well-placed in Apple that the hardware for a tablet was ready a few years ago; designing a user experience appropriate to the form factor took a lot longer than anyone had anticipated.  But the proof of the pudding is in the eating: iPad is the most successful new product in Apple’s history, with Apple set to manufacture around thirty million of them over the next twelve months.  That success is due to the hard work and extensive testing performed upon the iPad’s particular version of iOS.

It feels wonderfully fluid, well adapted to the device, although quite different from the iOS running on iPhone.  iPad is not simply a gargantuan iPod Touch.  The devices are used very differently, because the form-factor of the device frames our expectations and experience of the device.

Let me illustrate with an example from my own experience:  I had a consulting job drop on me at the start of June, one which required that I go through and assess eighty-eight separate project proposals, all of which ran to 15 pages apiece.  I had about 48 hours to do the work.  I was a thousand kilometers from these proposals, so they had to be sent to me electronically, so that I could then print them before reading through them.  Doing all of that took 24 of the 48 hours I had for review, and left me with a ten-kilo box of papers that I’d have to carry, a thousand kilometers, to the assessment meeting.  Ugh.

Immediately before I left for the airport with this paper ball-and-chain, I realized I could simply drag the electronic versions of these files into my Dropbox account.  Once uploaded, I could access those files from my iPad – all thousand or so pages.  Working on iPad made the process much faster than having to fiddle through all of those papers; I finished my work on the flight to my meeting, and was the envy of all attending – they wrestled with multiple fat paper binders, while I simply swiped my way to the next proposal.

This was when I realized that iPad is becoming the indispensable appliance for the information worker.

You can now hold something in your hand that has every document you’ve written; via the cloud, it can hold every document anyone has ever written.  This has been true for desktops since the advent of the Internet, but it hasn’t been as immediate.  iPad is the page, reinvented, not just because it has roughly the same dimensions as a page, but because you interact with it as if it were a piece of paper.  That’s something no desktop has ever been able to provide.

We don’t really have a sense yet for all the things we can do with this ‘magical’ (to steal a word from Steve Jobs) device.

Paper transformed the world two thousand years ago. Moveable type transformed the world five hundred years ago.  The tablet, whatever it is becoming – whatever you make of it – will similarly reshape the world.  It’s not just printed materials; the tablet is the lightbox for every photograph ever taken anywhere by anyone.  The tablet is the screen for every video created, a theatre for every film produced, a tuner to every radio station that offers up a digital stream, and a player for every sound recording that can be downloaded.

All of this is here, all of this is simultaneously present in a device with so much capability that it very nearly pulses with power.

iPad is like a Formula One Ferrari that we haven’t even gotten out of first gear.  So stretch your mind further than the idea of the app.  Apps are good and important, but to unlock its potential, iPad needs lots of interesting data pouring into it and through it.  That data might be provided via an application, but it probably doesn’t live within the application – there’s not enough room in there.  Any way you look at it, iPad is a creature of the network; it is a surface, a looking glass, which presents you a view from within the network.

What happens when the network looks back at you?

At the moment iPad has no camera, though everyone expects a forward-facing camera to be in next year’s model.  That will come so that Apple can enable FaceTime.  (With luck, we’ll also see a Retina Display, so that documents can be seen in their natural resolution.)  Once the iPad can see you, it can respond to you.  It can acknowledge your presence in an authentic manner.  We’re starting to see just what this looks like with the recently announced Xbox Kinect.

This is the sort of technology which points all the way back to the infamous ‘Knowledge Navigator’ video that John Sculley used to create his own Reality Distortion Field around the disaster that was the Newton. Decades ahead of its time, the Knowledge Navigator pointed toward Google and Wikipedia and Milo, with just a touch of Facebook thrown in.  We’re only just getting there, to the place where this becomes possible.

These are no longer dreams, these are now quantifiable engineering problems.

This sort of thing won’t happen on Xbox, though Microsoft or a partner developer could easily write an app for it.  But that’s not where they’re looking; this is not about keeping you entertained.  The iPad can entertain you, but that’s not its main design focus.  It is designed to engage you, today with your fingers, and soon with your voice and your face and your gestures.  At that point it is no longer a mirror; it is an entity on its own.  It might not pass the Turing Test, but we’ll anthropomorphize it nonetheless, just as we did with Tamagotchi and Furby.  It will become our constant companion, helping us through every situation.  And it will move seamlessly between our devices, from iPad to iPhone to desktop.  But it will begin on iPad.

Because we are just starting out with tablets, anything is possible.  We haven’t established expectations which guide us into a particular way of thinking about the device.  We’ve had mobiles for nearly twenty years, and desktops for thirty.  We understand both well, and with that understanding comes a narrowing of possibilities.  The tablet is the undiscovered country, virgin, green, waiting to be explored.  This is the desktop revolution, all over again.  This is the mobile revolution, all over again.  We’re in the right place at the right time to give birth to the applications that will seem commonplace in ten or fifteen years.

I remember VisiCalc, the first spreadsheet.  I remember how revolutionary it seemed, how it changed everyone’s expectations for the personal computer.  I also remember that it was written for an Apple ][.

You have the chance to do it all again, to become the ‘mothers of innovation’, and reinvent computing.  So think big.  This is the time for it.  In another few years it will be difficult to aim for the stars.  The platform will be carrying too much baggage.  Right now we all get to be rocket scientists.  Right now we get to play, and dream, and make it all real.

Inflection Points

I: The Universal Solvent

I have to admit that I am in awe of iTunes University. It’s just amazing that so many well-respected universities – Stanford, MIT, Yale, and Uni Melbourne – are willing to put their crown jewels – their lectures – online for everyone to download. It’s outstanding when even one school provides a wealth of material, but as other schools add their own material, we get to see some of the virtues of crowdsourcing. First, you have a virtuous cycle: as more material is shared, more material will be made available to share. After the virtuous cycle gets going, it’s all about a flight to quality.

When you have half a dozen – or a hundred – lectures on calculus, which one do you choose? The one featuring the best lecturer with the best presentation skills, the best examples, and the best math jokes – of course. This is my only complaint with iTunes University – you can’t rate the various lectures on offer. You can know which ones have been downloaded most often, but that’s not precisely the same thing as which calculus seminar or which sociology lecture is the best. So as much as I love iTunes University, I see it as halfway there. Perhaps Apple didn’t want to turn iTunes U into a popularity contest, but, without that vital bit of feedback, it’s nearly impossible for us to winnow the wheat from the educational chaff.

This is something that has to happen inside the system; it could happen across a thousand educational blogs spread out across the Web, but then it’s too diffuse to be really helpful. The reviews have to be coordinated and collated – just as with RateMyProfessors.com.

Say, that’s an interesting point. Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University? If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings. And as iTunes University gets bigger and bigger, RateMyLectures.com becomes ever more important, the ultimate guide to the ultimate source of educational multimedia on the Internet. One needs the other to be wholly useful; without ratings iTunes U is just an undifferentiated pile of possibilities. But with ratings, iTunes U becomes a highly focused and effective tool for digital education.

Now let’s cast our minds ahead a few semesters: iTunes U is bigger and better than ever, and RateMyLectures.com has benefited from the hundreds of thousands of contributed reviews. Those reviews extend beyond the content in iTunes U, out into YouTube and Google Video and Vimeo and Blip.tv and wherever people are creating lectures and putting them online. Now anyone can come by the site and discover the absolute best lecture on almost any subject they care to research. The net is now cast globally; I can search for the best lecture on Earth, so long as it’s been captured and uploaded somewhere, and someone’s rated it on RateMyLectures.com.

All of a sudden we’ve imploded the boundaries of the classroom. The lecture can come from the US, or the UK, or Canada, or New Zealand, or any other country. Location doesn’t matter – only its rating as ‘best’ matters. This means that every student, every time they sit down at a computer, already has – or soon will have – the absolute best lectures available, globally. That’s just a mind-blowing fact. It grows very naturally out of our desire to share and our desire to share ratings about what we have shared. Nothing extraordinary needed to happen to produce this entirely extraordinary state of affairs.

The network is acting like a universal solvent, dissolving all of the boundaries that have kept things separate. It’s not just dissolving the boundaries of distance – though it is doing that – it’s also dissolving the boundaries of preference. Although there will always be differences in taste and delivery, some instructors are simply better lecturers – in better command of their material – than others. Those instructors will rise to the top. Just as RateMyProfessors.com has created a global market for the lecturers with the highest ratings, RateMyLectures.com will create a global market for the best performances, the best material, the best lessons.

That RateMyLectures.com is only a hypothetical shouldn’t put you off. Part of what’s happening at this inflection point is that we’re all collectively learning how to harness the network for intelligence augmentation – Engelbart’s final triumph. All we need do is identify an area which could benefit from knowledge sharing and, sooner rather than later, someone will come along with a solution. I’d actually be very surprised if a service a lot like RateMyLectures.com doesn’t already exist. It may be small and unimpressive now. But Wikipedia was once small and unimpressive. If it’s useful, it will likely grow large enough to be successful.

Of course, lectures alone do not an education make. Lectures are necessary but are only one part of the educational process. Mentoring and problem solving and answering questions: all of these take place in the very real, very physical classroom. The best lectures in the world are only part of the story. The network is also transforming the classroom, from inside out, melting it down, and forging it into something that looks quite a bit different from the classroom we’ve grown familiar with over the last 50 years.

II: Fluid Dynamics

If we take the examples of RateMyProfessors.com and RateMyLectures.com and push them out a little bit, we can see the shape of things to come. Spearheaded by Stanford University and the Massachusetts Institute of Technology, both of which have placed their entire set of lectures online through iTunes University, these educational institutions assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, but it discounts the possibility that some individuals or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open” school, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.

In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In an age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them to make sense of it, contextualizing and informing their understanding, even as the students continue to work their way through the ever-growing set of information. The instructor cannot know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. The instructors facilitate and mentor, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.

The administration has gone, the instructor’s role has evolved, so now what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If it can happen entirely online, that will be the classroom. If it requires substantial presence with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode, vanishing online, and explode: the world will become the classroom.

This, then, can already be predicted from current trends; as the network begins to destabilize the institutional hierarchies in education, everything else becomes inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst-case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by these students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.

III: Digital Citizenship

Obviously, much of what I’ve described here in the “melting down” of the educational process applies first and foremost to university students. That’s where most of the activity is taking place. But I would argue that it only begins with university students. From there – just like Facebook – it spreads across the gap between tertiary and secondary education, and into the high schools and colleges.

This is significant and interesting because it’s at this point that we, within Australia, run headlong into the Government’s plan to provide laptops for all year 9 through year 12 students. Some schools will start earlier; there’s a general consensus among educators that year 7 is the earliest time a student should be trusted to behave responsibly with their “own” computer. Either way, the students will be fully equipped and capable of using all of the tools at hand to manage their own education.

But will they? Some of this is a simple question of discipline: will the students be disciplined enough to take an ever-more-active role in the co-production of their education? As ever, the question is neither black nor white; some students will demonstrate the qualities of discipline needed to allow them to assume responsibility for their education, while others will not.

But, somewhere along here, there’s the presumption of some magical moment during the secondary school years, when the student suddenly learns how to behave online. And we already know this isn’t happening. We see too many incidents where students make mistakes, behaving badly without fully understanding that the whole world really is watching.

In the early part of this year I did a speaking tour with the Australian Council of Educational Researchers; during the tour I did a lot of listening. One thing I heard loud and clear from the educators is that giving a year 7 student a laptop is the functional equivalent of giving them a loaded gun. And we shouldn’t be surprised, when we do this, when there are a few accidental – or volitional – shootings.

I mentioned this in a talk to TAFE educators last week, and one of the attendees suggested that we needed to teach “Digital Citizenship”. I’d never heard the phrase before, but I’ve taken quite a liking to it. Of course, by the time a student gets to TAFE, the damage is done. We shouldn’t start talking about digital citizenship in TAFE. We should be talking about it from the first days of secondary education. And it’s not something that should be confined to the school: parents are on the hook for this, too. Even when the parents are not digitally literate, they can impart the moral and ethical lessons of good behavior to their children, lessons which will transfer to online behavior.

Make no mistake, without a firm grounding in digital citizenship, a secondary student can’t hope to make sense of the incredibly rich and impossibly distracting world afforded by the network. Unless we turn down the internet connection – which always seems like the first option taken by administrators – students will find themselves overwhelmed. That’s not surprising: we’ve taught them few skills to help them harness the incredible wealth available. In part that’s because we’re only just learning those skills ourselves. But in part it’s because we would have to relinquish control. We’re reluctant to do that. A course in digital citizenship would help both students and teachers feel more at ease with one another when confronted by the noise online.

Make no mistake, this inflection point in education is inevitably going to cross the gap between tertiary and secondary school and students. Students will be able to do for themselves in ways that were never possible before. None of this means that the teacher or even the administrator has necessarily become obsolete. But the secondary school of the mid-21st century may look a lot more like a website than a campus. The classroom will have a fluid look, driven by the teacher, the students and the subject material.

Have we prepared students for this world? Have we given them the ability to make wise decisions about their own education? Or are we like those university administrators who mutter about how RateMyProfessors.com has ruined all their carefully-laid plans? The world where students were simply the passive consumers of an educational product is coming to an end. There are other products out there, clamoring for attention – you can thank Apple for that. And YouTube.

Once we get through this inflection point in the digital revolution in education, we arrive in a landscape that’s literally mind-blowing. We will each have access to educational resources far beyond anything on offer at any other time in human history. The dream of life-long learning will be simply a few clicks away for most of the billion people on the Internet, and many of the four billion who use mobiles. It will not be an easy transition, nor will it be perfect on the other side. But it will be incredible, a validation of everything Douglas Engelbart demonstrated forty years ago, and an opportunity to create a truly global educational culture, focused on excellence, and dedicated to serving all students, everywhere.

Crowdsource Yourself

I: Ruby Anniversary

Today is a very important day in the annals of computer science. It’s the anniversary of the most famous technology demo ever given. Not, as you might expect, the first public demonstration of the Macintosh (which happened in January 1984), but something far older and far more important. Forty years ago today, December 9th, 1968, in San Francisco, a small gathering of computer specialists came together to get their first glimpse of the future of computing. Of course, they didn’t know that the entire future of computing would emanate from this one demo, but the next forty years would prove that point.

The maestro behind the demo – leading a team of developers – was Douglas Engelbart. Engelbart was a wunderkind from SRI, the Stanford Research Institute, a think-tank spun out from Stanford University to collaborate with various moneyed customers – such as the US military – on future technologies. Of all the futurist technologists, Engelbart was the future-i-est.

In the middle of the 1960s, Engelbart had come to an uncomfortable realization: human culture was growing progressively more complex, while human intelligence stayed within the same comfortable range we’d known for thousands of years. In short order, Engelbart assessed, our civilization would start to collapse from its own complexity. The solution, Engelbart believed, would come from tools that could augment human intelligence. Create tools to make men smarter, and you’d be able to avoid the inevitable chaotic crash of an overcomplicated civilization.

To this end – and with healthy funding from both NASA and DARPA – Engelbart began work on the Online System, or NLS. The first problem in intelligence augmentation: how do you make a human being smarter? The answer: pair humans up with other humans. In other words, networking human beings together could increase the intelligence of every human being in the network. The NLS wasn’t just the online system, it was the networked system. Every NLS user could share resources and documents with other users. This meant NLS users would need to manage these resources in the system, so they needed high-quality computer screens, and a windowing system to keep the information separated. They needed an interface device to manage the windows of information, so Engelbart invented something he called a ‘mouse’.

I’ll cut to the chase: that roomful of academics at the Fall Joint Computer Conference saw the first broadly networked system featuring raster displays – the forerunner of all displays in use today; windowing; manipulation of on-screen information using a mouse; document storage and manipulation using the first hypertext system ever demonstrated; and videoconferencing between Engelbart, demoing in San Francisco, and his colleagues 30 miles away in Menlo Park.

In other words, in just one demo, Engelbart managed to completely encapsulate absolutely everything we’ve been working toward with computers over the last 40 years. The NLS was easily 20 years ahead of its time, but its influence is so pervasive, so profound, so dominating, that it has shaped nearly every major problem in human-computer interface design since its introduction. We have all been living in Engelbart’s shadow, basically just filling out the details in his original grand mission.

Of all the technologies rolled into the NLS demo, hypertext has arguably had the most profound impact. Known as the “Journal” on NLS, it allowed all the NLS users to collaboratively edit or view any of the documents in the NLS system. It was the first groupware application, the first collaborative application, the first wiki application. And all of this more than 20 years before the Web came into being. To Engelbart, the idea of networked computers and hypertext went hand-in-hand; they were indivisible, absolutely essential components of an online system.

It’s interesting to note that although the Internet has been around since 1969 – nearly as long as the NLS – it didn’t take off until the advent of a hypertext system – the World Wide Web. A network is mostly useless without a hypermedia system sitting on top of it, and multiplying its effectiveness. By itself a network is nice, but insufficient.

More than can be said of any other single individual in the field of computer science, we find ourselves living in the world that Douglas Engelbart created. We use computers with raster displays and manipulate windows of hypertext information using mice. We use tools like video conferencing to share knowledge. We augment our own intelligence by turning to others.

That’s why the “Mother of All Demos,” as it’s known today, is probably the most important anniversary in all of computer science. It set the stage for the world we live in, more so than we recognized even a few years ago. You see, one part of Engelbart’s revolution took rather longer to play out. This last innovation of Engelbart’s is only just beginning.

II: Share and Share Alike

In January 2002, Oregon State University, the alma mater of Douglas Engelbart, decided to host a celebration of his life and work. I was fortunate enough to be invited to OSU to give a talk about hypertext and knowledge augmentation, an interest of mine and a persistent theme of my research. Not only did I get to meet the man himself (quite an honor), I got to meet some of the other researchers who were picking up where Engelbart had left off. After I walked off stage, following my presentation, one of the other researchers leaned over to me and asked, “Have you heard of Wikipedia?”

I had not. This is hardly surprising; in January 2002 Wikipedia was only about a year old, and had all of 14,000 articles – about the same number as a children’s encyclopedia. Encyclopedia Britannica, though it had put itself behind a “paywall,” had over a hundred thousand quality articles available online. Wikipedia wasn’t about to compete with Britannica. At least, that’s what I thought.

It turns out that I couldn’t have been more wrong. Over the next few months – as Wikipedia approached 30,000 articles in English – an inflection point was reached, and Wikipedia started to grow explosively. In retrospect, what happened was this: people would drop by Wikipedia, and if they liked what they saw, they’d tell others about Wikipedia, and perhaps make a contribution. But they first had to like what they saw, and that wouldn’t happen without a sufficient number of articles, a sort of “critical mass” of information. While Wikipedia stayed beneath that critical mass it remained a toy, a plaything; once it crossed that boundary it became a force of nature, gradually then rapidly sucking up the collected knowledge of the human species, putting it into a vast, transparent and freely accessible collection. Wikipedia thrived inside a virtuous cycle where more visitors meant more contributors, which meant more visitors, which meant more contributors, and so on, endlessly, until, as of this writing, there are 2.65 million English-language articles in Wikipedia.

Wikipedia’s biggest problem today isn’t attracting contributions, it’s winnowing the wheat from the chaff. Wikipedia has constant internal debates about whether a subject is important enough to deserve an entry in its own right; whether this person has achieved sufficient standards of notability to merit a biographical entry; whether this exploration of a fictional character in a fictional universe belongs in Wikipedia at all, or might be better situated within a dedicated fan wiki. Wikipedia’s success has been proven beyond all doubt; managing that success is the task of the day.

While we all rely upon Wikipedia more and more, we haven’t really given much thought as to what Wikipedia gives us. At its most basic level, Wikipedia gives us high-quality factual information. Within its major subject areas, Wikipedia’s veracity is unimpeachable, and has been put to the test by publications such as Nature. But what do these high-quality facts give us? The ability to make better decisions.

Given that we try to make decisions about our lives based on the best available information, the better that information is, the better our decisions will be. This seems obvious when spelled out like this, but it’s something we never credit Wikipedia with. We think about being able to answer trivia questions or research topics of current fascination, but we never think that every time we use Wikipedia to make a decision, we are improving our decision making ability. We are improving our own lives.

This is Engelbart’s final victory. When I met him in 2002, he seemed mostly depressed by the advent of the Web. At that time – pre-Wikipedia, pre-Web 2.0 – the Web was mostly thought of as a publishing medium, not as something that would allow the multi-way exchange of ideas. Engelbart had known for forty years that sharing information is the cornerstone of intelligence augmentation. And in 2002 there wasn’t a whole lot of sharing going on.

It’s hard to imagine the Web of 2002 from our current vantage point. Today, when we think about the Web, we think about sharing, first and foremost. The web is a sharing medium. There’s still quite a bit of publishing going on, but that seems almost an afterthought, the appetizer before the main course. I’d have to imagine that this is pleasing Engelbart immensely, as we move ever closer to the models he pioneered forty years ago. It’s taken some time for the world to catch up with his vision, but now we seem to have a tool fit for knowledge augmentation. And Wikipedia is really only one example of the many tools we have available for knowledge augmentation. Every sharing tool – Digg, Flickr, YouTube, del.icio.us, Twitter, and so on – provides an equal opportunity to share and to learn from what others have shared. We can pool our resources more effectively than at any other time in history.

The question isn’t, “Can we do it?” The question is, “What do we want to do?” How do we want to increase our intelligence and effectiveness through sharing?

III: Crowdsource Yourself

Now we come to all of you, here together for three days, to teach and to learn, to practice and to preach. Most of you are the leaders in your particular schools and institutions. Most of you have gone way out on the digital limb, far ahead of your peers. Which means you’re alone. And it’s not easy being alone. Pioneers can always be identified by the arrows in their backs.

So I have a simple proposal to put to you: these three days aren’t simply an opportunity to bring yourselves up to speed on the latest digital wizardry, they’re a chance to increase your intelligence and effectiveness, through sharing.

All of you, here today, know a huge amount about what works and what doesn’t, about curricula and teaching standards, about administration and bureaucracy. This is hard-won knowledge, gained on the battlefields of your respective institutions. Now just imagine how much it could benefit all of us if we shared it, one with another. This is the sort of thing that happens naturally and casually at a forum like this: a group of people will get to talking, and, sooner or later, all of the battle stories come out. Like old Diggers talking about the war.

I’m asking you to think about this casual process a bit more formally: How can you use the tools on offer to capture and share everything you’ve learned? If you don’t capture it, it can’t be shared. If you don’t share it, it won’t add to our intelligence. So, as you’re learning how to podcast or blog or set up a wiki, give a thought to how these tools can be used to multiply our effectiveness.

I ask you to do this because we’re getting close to a critical point in the digital revolution – something I’ll cover in greater detail when I talk to you again on Thursday afternoon. Where we are right now is at an inflection point. Things are very fluid, and could go almost any direction. That’s why it’s so important we learn from each other: in that pooled knowledge is the kind of intelligence which can help us to make better decisions about the digital revolution in education. The kinds of decisions which will lead to better outcomes for kids, fewer headaches for administrators, and a growing confidence within the teaching staff.

Don’t get me wrong: these tools aren’t a panacea. Far from it. They’re simply the best tools we’ve got, right now, to help us confront the range of thorny issues raised by the transition to digital education. You can spend three days here, and go back to your own schools none the wiser. Or, you can share what you’ve learned and leave here with the best that everyone has to offer.

There’s a word for this process, a word which powers Wikipedia and a hundred thousand other websites: “crowdsourcing”. The basic idea is encapsulated in the old proverb: “Many hands make light work.” The two hundred of you, here today, can all pitch in and make light work for yourselves. Or not.

Let me tell you another story, which may help seal your commitment to share what you know. In May of 1999, Silicon Valley software engineer John Swapceinski started a website called “Teacher Ratings.” Individuals could visit the site and fill in a brief form with details about their school, and their teacher. That done, they could rate the teacher’s capabilities as an instructor. The site started slowly, but, as is always the case with these sorts of “crowdsourced” ventures, as more ratings were added to the site, it became more useful to people, which meant more visitors, which meant more ratings, which meant it became even more useful, which meant more visitors, which meant more ratings, etc.

Somewhere in the middle of this virtuous cycle the site changed its name to “Rate My Professors.com” and changed hands twice. For the last two years, RateMyProfessors.com has been owned by MTV, which knows a thing or two about youth markets, and can see one in a site that has nine million reviews of one million teachers, professors and instructors in the US, Canada and the UK.

Although the individual action of sharing some information about an instructor seems innocuous enough, in aggregate the effect is entirely revolutionary. A student about to attend university in the United States can check out all of her potential instructors before she signs up for a single class. She can choose to take classes only with those instructors who have received the best ratings – or, rather more perversely, only with those instructors known to be easy graders. The student is now wholly in control of her educational opportunities, going in eyes wide open, fully cognizant of what to expect before the first day of class.

Although RateMyProfessors.com has enlightened students, it has made the work of educational administrators exponentially more difficult. Students now talk, up and down the years, via the recorded ratings on the site. It isn’t possible for an institution of higher education to disguise an individual who happens to be a world-class researcher but a rather ordinary lecturer. In earlier times, schools could foist these instructors on students, who’d be stuck for a semester. This no longer happens, because RateMyProfessors.com effectively warns students away from the poor-quality teachers.

This one site has undone all of the neat work of tenure boards and department chairs throughout the entire world of academia. A bad lecturer is no longer a department’s private little secret, but publicly available information. And a great lecturer is no longer a carefully hoarded treasure, but a hot commodity on a very public market. The instructors with the highest ratings on RateMyProfessors.com find themselves in demand, receiving outstanding offers (with tenure) from other universities. All of this plotting, which used to be hidden from view, is now fully revealed. The battle for control over who stands in front of the classroom has now been decisively lost by the administration in favor of the students.

Whether it’s Wikipedia, or RateMyProfessors.com, or the promise of your own work over these next three days, Douglas Engelbart’s original vision of intelligence augmentation holds true: it is possible for us to pool our intellectual resources, and increase our problem-solving capacity. We do it every time we use Wikipedia; students do it every time they use RateMyProfessors.com; and I’m asking you to do it, starting right now. Good luck!