Hyperconnected Education

I: Connect / Disconnect

Recently, I had the opportunity to deliver a lecture at the University of Sydney.  I always consider teaching a two-way street: there’s an opportunity to learn as much from my students as they learn from me.  Sometimes I simply watch what they do, learning from that what new behaviors are spreading across the culture.  Other times I take advantage of a captive audience to run an ethnographic survey.  With eighty eager students ready to do my bidding, I worked up a few questions about mobile usage patterns within their age group.

First, I ascertained their ages – ranging mostly between eighteen and twenty-two, with a cluster around nineteen and twenty.  Then I asked them, “How old were you when you first got a mobile of your own?”  One student got her first mobile at nine years of age, while the oldest waited until nineteen.  An analysis of the data puts the median age of first ownership at around eleven and a half – half the students had a mobile of their own by that age.
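For the curious, the ‘analysis’ here is nothing more exotic than taking the median of the responses.  A minimal sketch in Python – using illustrative figures, not the actual survey data:

```python
# Hypothetical ages of first mobile ownership, standing in for the real
# survey responses (illustrative numbers only).
ages = [9, 10, 10, 11, 11, 11, 12, 12, 13, 14, 15, 19]

def median(values):
    """Return the median: the age by which half the respondents owned a mobile."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:  # odd count: the single middle value
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: mean of middle pair

print(median(ages))  # 11.5
```

With an even number of responses, the median falls between the two middle values – which is how a survey can yield a ‘half’ year like eleven and a half.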

When I shared this result with some colleagues on Twitter, they responded, “That seems a bit old.”  And it does – precisely because these students are, on average, eight and a half years older now than when they got their first mobile.  This survey looks back to 2003 – the year that I arrived in Australia – rather than at the present moment.

Another survey, conducted last year, shows how much has changed, so quickly.  Thirty-seven percent of children between Kindergarten and Year 2 have their own mobile (of some sort), with one fifth having access to a smartphone.  By Year 8, that figure has risen to eighty-five percent, with fully one-third using smartphones.

Since the introduction of the mobile, thirty years ago, the average age of first ownership has steadily dropped.  For many years the device was simply too expensive to be given to any but children from the wealthiest families.  Today, an Android smartphone can be purchased outright for little more than a hundred dollars, plus thirty dollars a month in carriage.  With the exception of the poorest Australians, price is no longer a barrier to mobile ownership.  As the price barrier dropped, the age of first mobile ownership has also tumbled, from eleven years old in 2003 to something closer to eight today.

The resistance to mobile ownership in the sub-eight-year-old set will only be overcome as the devices themselves become more appropriate to children with less developed cognitive skills.  Below age eight, the mobile morphs from a general-purpose communications device into a type of networked tether, allowing the parent and the child to remain in a constant high state of situational awareness about each other’s comings and goings.  Only a few mobiles have been designed to serve the needs of the young child.  The market has not been mature enough to support a broad array of child-friendly devices, nor have the carriers developed plans that make mobile ownership in that age group an attractive option.  This will inevitably happen, and from the statistics, that day cannot be very far off: the resistance to the mobile in this age group will be designed away.

There is no real end in sight.  The younger the child, the more the mobile assumes the role of the benevolent watcher, a sensor continually reporting the condition of the child to the parent.  We already use radio-frequency baby monitors to listen to our children as they fuss in their cribs; a mobile provides the same capability by different means.  This sensor will also track the child’s heartbeat, temperature, and other vital statistics, and will grow smaller and less power-hungry, until – at some point in the next fifteen years – a child will receive their first mobile moments after they emerge from the womb.  That mobile will be integrated into the hospital tag slipped around their foot.

It is an absolute inevitability that sometime within the next decade, every single child entering primary school will come bearing their own mobile.  They will join the rest of us in being hyperconnected – directly and immediately connected to everyone of importance to them.  Why should Australian children be any different from the rest of us?  Mobile subscription rates in Australia exceed 120% – more than one per person, even counting all those currently too young or too old to use a mobile.  Within a generation, being human and being connected will be seen as synonymous.

The next years are an interregnum, the few heartbeats between the ‘before time’ – when none of us were connected – and a thoroughly hyperconnected afterward.  This is the moment when we must make the necessary pedagogical and institutional adjustments to a pervasively connected culture.  That survey from last year found that even at Kindergarten level, two-thirds of parents were willing to buy a mobile for their children – if schools integrated the device into their pedagogy.  But the survey also pointed to opposition within the schools themselves:

“When we asked administrators about the likelihood of them allowing their students to use their own mobile devices for instructional purposes at school this year, a resounding 65% of principals said ‘no way!’”

School administrators overwhelmingly hold the comforting belief that the transition into hyperconnectivity can be prevented, forestalled, or simply banned.  A decade ago most schools banned the mobile; within the last few years, mobiles have been permitted with specific restrictions around how and when they can be used.  A few years from now, there will be no effective way to silence the mobile, anywhere (except in specific instances I will speak to later), because so much of our children’s lives will have become contingent upon the continuous connection it affords.

Like King Canute, we cannot hold back the tide.  We must prepare for the rising waters.  We must learn to swim within the sea of connectivity.  Or we will drown.

II: Share / Overshare

When people connect, they begin to share.  This happens automatically, an expression of the instinctive human desire to communicate matters of importance.  Give someone an open channel and they’ll transmit everything they see that they think could be of any interest to anyone else.  At the beginning, this sharing can look quite unfocused – bad jokes and cute kittens – but as time passes, we teach one another those things we consider important enough to share, by sharing them.  Sharing, driven by need, amplified by technology, reaches every one of us, through our network of connections.  We both give and receive: from each according to their knowledge, to each according to their need.

Sharing has amplified the scope of our awareness.  We can find and connect to others who share our interests, increasing our awareness of those interests.  The parent-child bond is the most essential of all our interests, so parents are loading their children up with the technologies of connection, gaining a constant ‘situational awareness’ of a depth which makes them the envy of ASIO.  The mobile tether becomes eyes and ears and capability, both lifeline and surrogate.  The child uses the mobile to share experiences – both actively and passively – and the parent, wherever they may be, ‘hovers’, watching and guiding.

This ‘helicopter parenting’ was difficult to put into practice before hyperconnectivity, because vigilance required presence.  The mobile has cut the cord, allowing parental hypervigilance to become a pervasive feature of the educational environment.  As the techniques for this electronic hypervigilance become less expensive and easier to use, they will become the accepted practice for child raising.

Intel Fellow and anthropologist Dr. Genevieve Bell spent a day in a South Korean classroom a few years ago, interviewing children whose parents had given them mobiles with GPS tracking capabilities – so those parents always knew the precise location of their child.  When Bell asked the students if they found this constant monitoring threatening, one set of students pointed to another student, who didn’t have a tracking mobile, saying,  “Her parents don’t love her enough to care where she is.”  In the context of the parent-child bond, something that appears Orwellian transforms into the ultimate security blanket.

A friend in Sydney has a child in Kindergarten, a precocious boy who finds the classroom environment alternately boring and confronting.  She’s been called in to speak with the teacher a few times, because of his disruptive behavior – behavior he links to bullying by another classmate.  The teacher hasn’t seen the behavior, or perhaps thinks it doesn’t merit her attention, leaving the boy increasingly frustrated, dreading every day at school.

In conversation with my friend, I realized that her child felt alone and undefended in the classroom.  How might that change?  Imagine that before he left for school, his mother affixes a small button to his school uniform, perhaps on the collar.  This button would have a camera, microphone and mobile transmitter within it, continuously recording and transmitting a live stream directly from the child’s collar to the parent’s mobile – all day long.  The child wouldn’t have to set it up, or do anything at all.  It would simply work – and my friend would have eyes and ears wherever her child went.  If there was trouble – bullying, or anything else – my friend would see it as it happened, and would be able to send a recording along to her son’s teacher.

This is not science fiction.  It is not even far away.  Every smartphone has all of the technology needed to make this happen.  Although a bit bulkier than I’ve described, it could all be done today.  Not long ago, I purchased a $50 toy ‘spy watch’, which records 20 minutes of video.  My friend could equip her son with that toy, asking him to record anything he thought important.  Shrinking it down to the size of a button and adding mobile capability will come in time.  When such a device hits the market, parents will find it irresistible – because it finally gives them eyes in the back of their head.

We need to ask ourselves whether this technological tethering is good for either parent or child.  Psychologist Sherry Turkle, who has explored the topic of children and technology longer than anyone else, believes that this constant close connectivity keeps the child from exploring their own boundaries, artificially extending the helplessness of childhood by amplifying the connection between parent and child.  Connection has consequence: to be connected is to be affected by that connection.  A small child might gain a sense of freedom with an electronic tether, but an adolescent might develop a dependency on that connection that could interfere with their adult development.  Because hyperconnectivity is such a recent condition, we don’t have the answers to these questions.  But these questions need to be asked.

This connection has broad consequences for educators.  Two years ago I heard a teacher in Victoria relate the following story: In a secondary school classroom, one student had failed to turn in their assignment.  This wasn’t the first time it had happened, so the teacher had a bit of a go at the student.  As the teacher harangued the student, he reached into his knapsack, pulled out his mobile, and punched a few buttons.  When the connection was made, he said, “You listen to the bitch,” and held the phone away from his face, toward his teacher.

Connection and sharing rewire and short-circuit the relationships we have grown accustomed to within the classroom.  How can a teacher maintain discipline while constantly being trumped by a child tethered to a hypervigilant parent?  How can a child gain independence while so tethered?  How can a parent gain any peace of mind while constantly monitoring the activities of their child?  All of these new dynamics are persistent features of the 21st-century classroom.  All are already present, but none are as yet pervasive.  We have some time to think about our response to hyperconnectivity.

III: Learn / You, Me, and Everyone We Know

A few years ago, both Stanford University and the Massachusetts Institute of Technology (my alma mater) made the revolutionary decision to publicly post all of their lesson plans, homework assignments, and lecture recordings for every class offered at their schools.  Why, some wondered aloud, would anyone pay the $40,000 a year in tuition and fees, when you could get the content for nothing?  This question betrays a fundamental misunderstanding of education in the 21st century: knowledge is freely available, but the mentoring which makes that knowledge apprehensible and useful remains a precious resource.  It’s wonderful to have a lesson plan, but it’s essential to be able to work with someone who has a deep understanding of the material.

This is the magic in education, the je ne sais quoi that makes it a profoundly human experience, and stubbornly resistant to automation.  We have no shortage of material: nearly four million English-language articles in Wikipedia, 2500 videos on Khan Academy (started, it should be noted, by an MIT educator), tens of thousands of lessons in everything from cooking to knitting to gardening to home renovation on YouTube, and sites like SkillShare, which connect those who have specialist knowledge to those who want it.  Yet, even with this embarrassment of riches, we still yearn for the opportunity to conspire, to breathe the same air as our mentor, while they, by the fact of their presence, transmit mastery.  If this sounds a wee bit mystical, so be it: education is the most human of all our behaviors, and we do not wholly understand the why and how of it.

Who shall educate the educators?  All of the materials so far created have been affordances for students, to make their lives easier.  If it helps an educator, that’s a nice side benefit, but never the main game.  In this sense, nearly all online educational resources are profoundly populist, pointing directly to the student, ignoring both educators and educational institutions.  Hyperconnectivity has removed all of the friction which once made it difficult to connect directly to students, but has thus far ignored the teacher.  In the back of a classroom, students can tap on a mobile and correct the errors in a teacher’s lecture, but can the teacher get peer review of that same material?  Theoretically, it should be easy.  In practice, we’re still waiting.

I recently had the good fortune to be a judge at Sydney Startup Weekend, where technology entrepreneurs pitch their ideas, then spend 48 frenetic hours bringing them to life.  The winning project, ClassMate, directly addresses the Educator-to-Educator (E2E) connection.  Providing a platform for a teacher to upload and store their lesson plans, ClassMate allows teachers to share those lesson plans within a school, a school system, or as broadly as desired – even to charge for them.

This kind of sharing gives every teacher access to a wealth of lesson plans on a broad variety of topics.  As the National Curriculum rolls out over the next few years, the taxonomy of subject areas within it can act as an organizing framework for the sharing of those lesson plans.  Searching through thousands of lesson plans would not simply be a splayed view based on keywords, like a Google search, but rather something far more specific and focused, drawn from the arc of the National Curriculum.  With the National Curriculum as an organizing principle, the best lesson plans for any particular node within the Curriculum will quickly rise to the top.

This means that every teacher in Australia (and the world) will soon have access to the best class materials from the best teachers throughout Australia.  Teachers will be able to spend more time interacting with students as the hard slog of creating lecture materials becomes a shared burden.  Yet teachers are no different from their students; the best lesson plan is in itself insufficient.  Teachers need to work with other teachers, need to be mentored by other teachers, need to conspire with other teachers, in order to make their own pedagogical skills commensurate with the resources on offer to them.  Professional development must go hand-in-hand with an accelerated sharing of knowledge, lest this sharing only amplify the imbalances between classrooms and between schools.

Victoria’s Department of Education and Early Childhood Development has the ULTRANET, designed to facilitate the sharing of materials between teachers, students and parents.  ULTRANET is not particularly user-friendly, presenting a barrier to its widespread acceptance by a community of educators who may not be broadly comfortable with technology.  Educational sharing systems must be designed from the perspective of those who use them – teachers and students – and not from a set of KPIs on a tick sheet.  One reason why I have high hopes for ClassMate is that the designer is himself a primary school teacher in New South Wales, solving a problem he faces every day.

Sharing between educators creates a platform for a broader sharing between students and educators.  At present almost all of that sharing happens inside the classroom and is immediately lost.  We need to think about how to capture our sharing moments, making them available to students.  Consider the recording device I mentioned earlier – although it works nicely for a child in Kindergarten, it becomes even more useful for someone preparing for an HSC/VCE exam, giving them a record of the mentoring they received.  This too can be shared broadly, where relevant (and often where it isn’t relevant at all, but funny, or silly, or sad, or what have you), so that everything is captured, everywhere, and shared with everyone.

If this sounds a bit like living in a fishbowl, I can only recommend that you get used to it.  Educators will be hit particularly hard by hyperconnectivity, because they spend their days working with students who have never known anything else.  Students copy from one another, teachers borrow from teachers, administrations and departments imitate what they’ve seen working in other schools and other states.  This is how it has always been, but now that this is no longer difficult, it is accelerating wildly, transforming the idea of the classroom.

IV: All Together Now

Let’s now turn to the curious case of David Cecil, a 25-year-old unemployed truckie from Cowra, arrested by the Australian Federal Police on a series of charges which could see him spend a decade imprisoned.  With nothing but time on his hands, and the Internet at his fingertips, Cecil sought out the thing he found most interesting, found the others interested in it, and listened to what they said.  The Internet became a classroom, and the people he connected to became his mentors.  Dedicated to learning, online for as much as twenty hours a day, Cecil took himself from absolute neophyte to practiced expert in just a few months, an autodidactic feat we can all admire in theory, if somewhat less in practice.  On the 27th of July, Cecil was arrested during a dawn AFP raid on his home, charged with breaking into and obtaining control over the computer systems of Internet service provider Platform Networks.

Cecil might have gotten away with it, but his ego wot dun him in.  Cecil went back to the same bulletin boards and chat sites where he learned his ‘1337 skills’, and bragged about his exploits.  Given that these boards are monitored by the forces of law and order, it was only a matter of time before the inevitable arrest.  While it might seem the very apex of stupidity to publicly brag about breaking the law, the desire to share what we know – and be seen as an expert – frequently overrules our sense of self-preservation.  We are born to share what we know, and wired to learn from what others share.  That’s no less true for ourselves than for that ideal poster child for Constructivism, David Cecil.

Now that this figurative internal wiring has become external and literal, now that the connections no longer end with our family, friends and colleagues, but extend pervasively and continuously throughout the world, we have the capability, in principle, of learning anything known by anyone anywhere, of gaining the advantage of their experiences to guide us through our own.  We have for the first time the possibility of some sort of collective mind – not in a wacky science-fiction sense, but with something so mundane it barely rates a mention today – Wikipedia.

In its 3.7 million articles, Wikipedia offers up the factual core of human knowledge – not always perfectly, but what it loses in perfection it makes up for in ubiquity.  Every person with a smartphone now walks around with the collected essence of human knowledge in their hands, accessible within a few strokes of a fingertip.  This is unprecedented, and means that we now have the capability to make better decisions than ever before, because, at every step along the way, we can refer to this factual base, using it to guide us into doing the best we can at every moment.

That is the potential for this moment, but we do not yet operate in those terms.  We teach children to research, as if this were an activity distinct from the rest of our experience, when, in reality, research is the core activity of the 21st century.  We need to think about the era, just a few years hence, when everyone has a very smart and very well connected mobile in hand from birth.  We need to think about how that mobile becomes the lever which moves the child into knowledge.  We need to think about our practice and how it is both undermined and amplified by the device and the network it represents.

If we had to do this as individuals – or even within a single school administration – we would quickly be overwhelmed by the pace of events beyond the schoolhouse walls.  To be able to encounter this accelerating tsunami of connection and empowerment we must take the medicine ourselves, using the same tools as our students and their parents.  We have agency, but only when we face the future squarely, and admit that everything we once knew about the logic of the classroom – its flows of knowledge and power – has gone askew, and that our future lies within the network, not in opposition to it.

In ten years’ time, how many administrators will say “No way!” when asked if the mobile has a place in their curriculum?  (By then, it will be equivalent to asking if reading has a place in the curriculum.)  This is the stone that must be moved, the psychological block that dams connectivity and creates a dry, artificial island where there should instead be a continuous sea of capability.  The longer that dam remains in place, the more force builds up behind it.  Either we remove the stone ourselves, or the pressures of a hyperconnected world will simply rip through the classroom, wiping it away.

Your students are not alone on their journey into knowledge and mastery.  Beside them, educators blaze a new trail into a close connectivity, leveraging a depth of collective experience to accelerate the search for solutions.  We must search and research and share and learn and put that learning into practice.  We must do this continuously so we can stay in front of this transition, guiding it toward meaningful outcomes for both students and educators.  We must reinvent education while hyperconnectivity reinvents us.

CODA: Disconnect

Finally, let me also be a Devil’s Advocate.  Connectivity is amazing and wonderful and empowering, but so is its opposite.  In fifteen years we have moved from a completely disconnected culture into a completely connected culture.  We believe, a priori, that connection is good.  Yet connection comes with a cost.  To be connected is to be deeply involved with another, and outside one’s self.  This is fine – some of the time.  But we also need a space where we are wholly ourselves, contingent upon no one else.

Our children and our students do not know this.  The value of silence and quiet may seem obvious to us, but they have never lived in a disconnected culture.  They only know connection.  Being disconnected frightens them – both because of its unfamiliarity, and because it seems to hold within it the possibility of facing dangers without the assistance of others.  Furthermore, this generation has no positive role model of disconnection to look to.  They see their parents responding to text messages at the dinner table, answering emails from in front of the television, running for the mobile every time it rings.  Parents have no boundaries around their connectivity; by their actions, this is what they have taught their children.

Educators must instill some basic rules – a ‘hygiene of connectivity’ – in the next generation.  We need to highlight disconnection as something to be longed for, a positive feature of life.  We need to teach them ways to manage their connectivity, so that they become the masters of their connections, not servants to them.  And we need to set the example in our own actions.  If we do that, we can give the next generation an important insight into how to be whole in a hyperconnected world.

The Soul of Web 2.0

Introduction: In The Beginning

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus PageMaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful.  But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’.  Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of All Demos’, which framed computing for the rest of the 20th century.  Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web.  Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real.  I was one of them.  From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’, if you will – to the soon-to-be-released Xanadu.  This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web.  It wasn’t as functional as Xanadu – copyright management was a solved problem within Xanadu, whereas on the Web it continues to bedevil us, and Xanadu’s links were two-way affairs: you could follow a link from its destination back to its source.  But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died.  The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released.  ‘The perfect is the enemy of the good’, and nowhere is that clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s.  When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits to 56 kilobits per second.  That was the line for all of the traffic heading from one coast to the other.  I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, a more than million-fold increase.  And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand.  And I wasn’t impressed.  In July 1993 very little content existed for the Web – just a handful of sites, mostly academic.  Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense.  I walked away from the computer that July afternoon wanting more.  Hypertext systems I’d seen before.  What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy.  Instead of a handful of sites, there were now hundreds.  There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site on the list.  By Friday evening I was finished.  I had surfed the entire Web.  It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993.  Then things began to explode.

From October on I became a Web evangelist.  My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic.  That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities.  As someone familiar walked in the door at the Salon, I walked up to them and took them over to my computer.  “What’s something you’re interested in?” I’d ask.  They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica” and I’d go to the newly-created category index of the Web, known as Yahoo!, and still running out of a small lab on the Stanford University campus, type in their interest, and up would come at least a few hits.  I’d click on one, watch the page load, and let them read.  “Wow!” they’d say.  “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos.  All I did was hook people by their own interests.  This, in January 1994 in San Francisco, is what would happen throughout the world in January 1995 and January 1996, and it is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented.  The individual, with their needs, their passions, their opinions, their desires and their goals, is always paramount.  We tend to forget this, or overlook it, or just plain ignore it.  We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate.  It’s not that we should ignore these considerations, but they are always secondary.  The Web is a ground for being.  Individuals do not present themselves as receptacles to be filled.  They are souls looking to be fulfilled.  This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web.  I’ll now break some of these lessons down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication.  I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late-night missives from insomniac friends, that sort of thing.  As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared.  The sharing instinct is innate and immediate.  We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it.  We’ve always known this; it’s part of being human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another.  It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby.  Everyone carries that hundred and fifty around inside of them.  Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with.  It’s automatic, requires no thought.  We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing.  Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten.  We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us.  It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does it work with creations that are Web-based, or similarly constrained?  We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter.  You have to do more than request sharing.  You have to think through the entire goal of sharing, from the user’s perspective.  Are they sharing this because it’s interesting?  Are they sharing this because they want company?  Are they sharing this because it’s a competition or a contest or collaborative?  Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project.  What is it about the design of your work that excites them to share it with others?  Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential?  In other words, is there space only for one, or is there room to spread the word?  Why would anyone want to share your work?  You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question.  How will your work be shared?

Your works do not exist in isolation.  They are part of a continuum of other works.  Where does your work fit into that continuum?  How do the instructor and student approach that work?  Is it a top-down mandate?  Or is it something that filters up from below as word-of-mouth spreads?  How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected.  Is it simply via email – do all the students have email addresses?  Do they know the email addresses of their friends?  Or do you want your work shared via SMS?  A QR code, perhaps?  Or Facebook or Twitter or, well, who knows?  And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today.  It becomes painfully obvious when it’s been overlooked.  For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend.  There was simply no way to do that.  (I don’t know if this has changed recently.)  That made the iPad version of The Australian significantly less functional than its website version – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes.  The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts.  Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd.  Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this.  Everyone’s there, but no one is wholly aware of anyone else’s presence.  You might see an edit to a page, or a new post on the discussion for a particular topic, but that’s as close as people come to one another.  Most of the connecting for the Wikipedians – the folks who, behind the scenes, make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose.  These are the social networks: Facebook, MySpace, LinkedIn, and so on.  In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication.  But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless.  Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting.  There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths.  Where you can poll your friends on Facebook, on Twitter you can poll a planet.  How do I solve this problem?  Where should I eat dinner tonight?  What’s going on over there?  These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles.  It is not Facebook-versus-Twitter; it is not tight connections versus loose connections.  It’s a bit of both.  Where does your work benefit from a tight collective of connected individuals?  Is it some sort of group problem-solving?  A creative activity that really comes into its own when a whole band of people play together?  Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms?  A task that constantly brings your friends to mind is the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog, or playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’ – that’s the kind of task which benefits from loose connectivity.  Not every project will need both kinds of connecting, but almost every one will benefit from one or the other.  We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed.  (We are also more likely to reinforce each others’ prejudices and preconceptions, but that’s another matter of longstanding which technology cannot help but amplify.)  Life is meaningful because we, together, give it meaning.  Life is bearable because we, together, bear the load for one another.  Human life is human connection.

The Web today is all about connecting.  That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it.  So how do your projects allow your users to connect?  Does your work leave them alone, helpless, friendless, and lonely?  Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic?  Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica.  That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun.  It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute.  There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy.  (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.)  By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of all of what its contributors have to offer.  For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook.  This is less because of Wikipedia-as-a-resource than it is because we all share a sense of pride-in-ownership of Wikipedia.  Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles.  Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us.  This is a powerful logic, an attraction which transcends the rational.  People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine that a time will come when Wikipedia will be complete.  If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia.  Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than in 2005.  With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights.  Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth.  It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought.  It will mean that we have captured the better part of human knowledge in a form accessible to all.  That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’.  It is a work-in-progress.  Google understands this and releases interminable beta versions of every product.  More than this, it means that nothing needs to offer all the answers.  I would suggest that nothing should offer all the answers.  Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you.  It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility.  There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done.  This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system.  The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up.  TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments.  RateMyProfessors.com is the holy terror of the academy in the United States.  Each of these websites has had to design systems which allow for users to self-regulate peer contributions.  In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags it for later moderation.  Wikipedia promulgated a directive that strongly encouraged contributors to provide a footnote linking to supporting material.  TripAdvisor gives anonymous reviewers a lower ranking.  eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade.  Each of these is a social solution to a social problem.
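The ‘report this post’ button is the simplest of these self-regulation mechanisms, and its mechanics fit in a few lines.  Here is a minimal sketch – all names and the report threshold are invented for illustration, not drawn from any particular site:

```python
# Minimal 'report this post' moderation queue: users flag an item, and once
# enough *distinct* users have flagged it, it is hidden pending human review.
class ModerationQueue:
    def __init__(self, threshold=3):
        self.threshold = threshold   # distinct reports needed to hide a post
        self.reports = {}            # post_id -> set of reporter ids

    def report(self, post_id, reporter_id):
        """Record a report; return True if the post should now be hidden."""
        reporters = self.reports.setdefault(post_id, set())
        reporters.add(reporter_id)   # a set, so one user can't report twice
        return len(reporters) >= self.threshold
```

Note the design choice: counting distinct reporters rather than raw clicks is itself a social safeguard, preventing one disgruntled user from single-handedly burying a post.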

Web2.0 is not a technology.  It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil.  It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention.  Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a Web2.0 concept that should guide you in your design work is iteration.  Nothing is ever complete, nor ever perfect.  The perfect is the enemy of the good, so if you wait for perfection, you will never release.  Instead, watch your users, see if they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work.  In their more uncharitable moments, do they abuse the freedoms you have given them?  If so, how can you redesign your work, and ‘nudge’ them into better behavior?  It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem.  And although Moses came down from Mount Sinai with all ten commandments, you cannot and should not expect to get it right on a first pass.  Instead, release, observe, adapt, and re-release.  All releases are soft releases, everything is provisional, and nothing is quite perfect.  That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter.  Although they seem to be similar, they couldn’t be more different.  Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself.  If you want to have access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page.  Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut down your application if they don’t like it, or perceive it as somehow competitive with Facebook.  Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach.  From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users.  Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data.  Twitter provided very clear (and remarkably straightforward) instruction on how to access their data, and threw the gates open wide.

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks.  People have developed computer programs that send Tweets when the program is about to crash, created vast art projects which allow the public to participate from anywhere around the world, and even built a little belt worn by a pregnant woman which sends out a Tweet every time the baby kicks!  It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself.  Twitter has become a building block: when you write a program which needs to send a message, you use Twitter.  Facebook isn’t a building block.  It’s a monolith.

How do you build for openness?  Consider: another position the user might occupy is someone trying to use your work as a building block within their own project.  Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again?  Or is it opaque, seamless, and closed?  What about the data you collect, data the user has generated?  Where does that live?  Can it be exported and put to work in another application, or on another website?  Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful).  The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them.  If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
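To make the RSS suggestion concrete: exposing the items your users generate as a plain RSS feed is often the cheapest way to become a building block, because nearly every aggregator and mashup tool already speaks it.  The sketch below assumes an invented item shape (dicts with ‘title’ and ‘link’ keys) standing in for whatever data your project actually collects:

```python
from xml.sax.saxutils import escape

def to_rss(feed_title, feed_link, items):
    """Render a list of items as a minimal RSS 2.0 feed string.

    `items` is a list of dicts with 'title' and 'link' keys -- an invented
    shape for illustration; substitute your own data model.
    """
    entries = "".join(
        f"<item><title>{escape(i['title'])}</title>"
        f"<link>{escape(i['link'])}</link></item>"
        for i in items
    )
    return (
        '<?xml version="1.0"?>'
        '<rss version="2.0"><channel>'
        f"<title>{escape(feed_title)}</title>"
        f"<link>{escape(feed_link)}</link>"
        f"{entries}</channel></rss>"
    )
```

Anyone who can fetch a URL can now re-use your data – which is exactly the ‘brick, not brick wall’ posture described above.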

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’  You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user.  These are not precisely the same Web2.0 domains others might identify.  That’s because Web2.0 has become a very ill-defined term.  It can mean whatever we want it to mean.  But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another.  In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create.  We need to make room for them.  If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

Paperworks / Padworks

I: Paper, works

At the end of May I received an email from a senior official at the Victorian Department of Education and Early Childhood Development.  DEECD was in the midst of issuing an RFP, looking for new content to populate FUSE (Find, Use, Share, Education), an important component of ULTRANET, the mega-über-supremo educational intranet meant to solve everyone’s educational problems for all time.  Or, well, perhaps I overstate the matter.  But it could be a big deal.

The respondents to the RFP were organizations who already had working relationships with DEECD, and therefore were both familiar with DEECD processes and had been vetted in their earlier relationships.  This meant that the entire RFP-to-submission process could be telescoped down to just a bit less than three weeks.  The official asked me if I’d be interested in being one of the external reviewers for these proposals as they passed through an official evaluation process.  I said I’d be happy to do so, and asked how many proposals I’d have to review.  “I doubt it will be more than thirty or forty,” he replied.  Which seemed quite reasonable.

As is inevitably the case, most of the proposals landed in the DEECD mailbox just a few hours before the deadline for submissions.  But the RFP didn’t result in thirty or forty proposals.  The total came to almost ninety.  All of which I had to review and evaluate in the thirty-six hours between the time they landed in my inbox and the start of the formal evaluation meeting.  Oh, and first I needed to print them out, because there was no way I’d be able to do that much reading in front of my computer.

Let’s face it – although we do sit and read our laptop screens all day long, we rarely read anything longer than a few paragraphs.  If it passes 300 words, it tips the balance into ‘tl;dr’ (too long; didn’t read) territory, and unless it’s vital for our employment or well-being, we tend to skip it and move along to the next little tidbit.  Having to sit and read through well over nine hundred pages of proposals on my laptop was a bridge too far. I set off to the print shop around the corner from my flat, to have the whole mess printed out.  That took nearly 24 hours by itself – and cost an ungodly sum.  I was left with a huge, heavy box of paper which I could barely lug back to my flat.  For the next 36 hours, this box would be my ball and chain.  I’d have to take it with me to the meeting in Melbourne, which meant packing it for the flight, checking it as baggage, lugging it to my hotel room, and so forth, all while trying to digest its contents.

How the heck was that going to work?

This is when I looked at my iPad.  Then I looked back at the box.  Then back at the iPad.  Then back at the box.  I’d gotten my iPad barely a week before – when they first arrived in Australia – and I was planning on taking it on this trip, but without an accompanying laptop.  This, for me, would be a bit of a test.  For the last decade I’d never traveled anywhere without my laptop.  Could I manage a business trip with just my iPad?  I looked back at the iPad.  Then at the box.  You could practically hear the penny drop.

I immediately began copying all these nine hundred-plus pages of proposals and accompanying documentation from my laptop to the storage utility Dropbox.  Dropbox gives you 2 GB of free Internet storage, with an option to rent more space, if you need it.  Dropbox also has an iPad app (free) – so as soon as the files were uploaded to Dropbox, I could access them from my iPad.

I should take a moment and talk about the model of the iPad I own.  I ordered the 16 GB version – the smallest storage size offered by Apple – but I got the 3G upgrade, paired with Telstra’s most excellent pre-paid NextG service.  My rationale was that I imagined this iPad would be a ‘cloud-centric’ device.  The ‘cloud’ is a term that’s come into use quite recently.  It means software is hosted somewhere out there on the Internet – the ‘cloud’ – rather than residing locally on your computer.  Gmail is a good example of software that’s ‘in the cloud’.  Facebook is another.  Twitter, another.  Much of what we do with our computers – iPad included – involves software accessed over the Internet.  Many of the apps for sale in Apple’s iTunes App Store are useless or pointless without an Internet connection – these are the sorts of applications which break down the neat boundary between the computer and the cloud.  Cloud computing has been growing in importance over the last decade; by the end of this one it will simply be the way things work.  Your iPad will be your window onto the cloud, onto everything you have within that cloud: your email, your documents, your calendar, your contacts, etc.

I like to live in the future, so I made sure that my iPad didn’t have too much storage – which forces me to use the cloud as much as possible.  In this case, that was precisely the right decision, because I ditched the ten-kilo box of paperwork and boarded my flight to Melbourne with my iPad at my side.  I pored over the proposals, one after another, bringing them up in Dropbox, evaluating them, making some notes in my (paper) notebook, then moving along to the next one.  My iPad gave me a fluidity and speed that I could never have had with that box of paper.

When I arrived at my hotel, I had another set of two large boxes waiting for me.  Here again were the proposals, carefully ordered and placed into several large ring binders.  I’d be expected to tote these to the evaluation meeting.  Fortunately, that was only a few floors above my hotel room.  That said, it was a bit of a struggle to get those boxes and my luggage into the elevator and up to the meeting room.  I put those boxes down – and never looked at them again.  As the rest of the evaluation panel dug through their boxes to pull out the relevant proposals, I did a few motions with my fingertips, and found myself on the same page.

Yes, they got a bit jealous.

We finished the evaluation on time and quite successfully, and at the end of the day I left my boxes with the DEECD coordinator, thanking her for her hard work printing all these materials, but begging off.  She understood completely.  I flew home, lighter than I might otherwise have, had I stuck to paper.

For at least the past thirty years – which is about the duration of the personal computer revolution – people have been talking about the advent of the paperless office.  Truth be told, we use more paper in our offices than ever before, our printers constantly at work with letters, notices, emails, and so forth.  We haven’t been able to make the leap to a paperless office – despite our comprehensive ability to manipulate documents digitally – because we lacked something that could actually replace paper.  Computers as we’ve known them simply can’t replace a piece of paper.  For a whole host of reasons, it just never worked.  To move to a paperless office – and a paperless classroom – we had to invent something that could supplant paper.  We have it now.  After a lot of false starts, tablet computing has finally arrived – and it’s here to stay.

I can sit here, iPad in hand, and have access to every single document that I have ever written.  You will soon have access to every single document you might ever need, right here, right now.  We’re not 100% there yet – but that’s not the fault of the device.  We’re going to need to make some adjustments to our IT strategies, so that we can have a pervasively available document environment.  At that point, your iPad becomes the page which contains all other pages within it.  You’ll never be without the document you need at the time you need it.

Nor will we confine ourselves to text.  The world is richer than that.  iPad is the lightbox that contains all photographs within it; it is the television which receives every bit of video produced by anyone – professional or amateur – ever.  It is already the radio (Pocket Tunes app) which receives almost every major radio station broadcasting anywhere in the world.  And it is every one of a hundred-million-plus websites and maybe a trillion web pages.  All of this is here, right here in the palm of your hand.

What matters now is how we put all of this to work.

II: Pad, works

Let’s project ourselves into the future just a little bit – say around ten years.  It’s 2020, and we’ve had iPads for a whole decade.  The iPads of 2020 will be vastly more powerful than the ones in use today, because of something known as Moore’s Law.  This law states that computers double in power every twenty-four months.  Ten years is five doublings, or 32 times.  That rule extends to the display as well as the computer.  The ‘Retina Display’ recently released on Apple’s iPhone 4 shows us where that technology is going – displays so fine that you can’t make out the individual pixels with your eye.  The screen of your iPad version 11 will be visually indistinguishable from a sheet of paper.  The device itself will be thinner and lighter than the current model.  Battery technology improves at about 10% a year, so half the weight of the battery – which is the heaviest component of the iPad – will disappear.  You’ll still get at least ten hours of use – that’s something considered essential to your experience as a user.  And you’ll still be connected to the mobile network.
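The five-doublings arithmetic is simple enough to check for any horizon.  A minimal sketch, assuming a clean twenty-four-month doubling period (real hardware, of course, is lumpier than this):

```python
# Moore's Law projection: one doubling in capability every 24 months.
def moores_law_factor(years, months_per_doubling=24):
    """Return the projected improvement factor after `years` years."""
    doublings = (years * 12) / months_per_doubling
    return 2 ** doublings

# Ten years at a doubling every two years is five doublings:
print(moores_law_factor(10))  # 32.0
```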

The mobile network of 2020 will look quite different from the mobile network of 2010.  Right now we’re just on the cusp of moving into 4th generation mobile broadband technology, known colloquially as LTE, or Long-Term Evolution.  Where you might get speeds of 7 megabits per second with NextG mobile broadband – under the best conditions – LTE promises speeds of 100 megabits.  That’s as good as a wired connection – as fast as anything promised by the National Broadband Network!  In a decade’s time we’ll be moving through 5th generation and possibly into 6th generation mobile technologies, with speeds approaching a gigabit, a billion bits per second.  That may sound like a lot, but it is more than a hundred times the speed of today’s 7-megabit NextG connections.  Moore’s Law has a broad reach, and will transform every component of the iPad.

iPad will have thirty-two times the storage, not that we’ll need it, given that we’ll be connected to the cloud at gigabit speeds, but if it’s there, someone will find a use for the two terabytes or more included in our iPad.  (Perhaps a full copy of Wikipedia?  Or all of the books published before 1915?)  All of this will still cost just $700.  If you want to spend less – and have a correspondingly less-powerful device – you’ll have that option.  I suspect you’ll be able to pick up an entry-level device – the equivalent of iPad 7, perhaps – for $49 at JB HiFi.

What sorts of things will the iPad 10 be capable of?  How do we put all of that power to work?  First off, iPad will be able to see and hear in meaningful ways.  Voice recognition and computer vision are two technologies which are on the threshold of becoming ‘twenty year overnight successes’.  We can already speak to our computers, and, most of the time, they can understand us.  With devices like the Xbox Kinect, cameras allow the computer to see the world around it, and recognize bits of it.  Your iPad will hear you, understand your voice, and follow your commands.  It will also be able to recognize your face, your motions, and your emotions.

It’s not clear that computers as we know them today – that is, desktops and laptops – will be common in a decade’s time.  They may still be employed in very specialized tasks.  For almost everything else, we will be using our iPads.  They’ll rarely leave our sides.  They will become so pervasive that in many environments – around the home, in the office, or at school – we will simply have a supply of them sufficient to the task.  When everything is so well connected, you don’t need to have personal information stored in a specific iPad.  You will be able to pick up any iPad and – almost instantaneously – the custom features which mark that device as uniquely yours will be downloaded into it.

All of this is possible.  Whether any of it eventuates depends on a whole host of factors we can’t yet see clearly.  People may find voice recognition more of an annoyance than an affordance.  The idea of your iPad watching you might seem creepy to some people.  But consider this: I have a good friend who has two elderly parents: his dad is in his early 80s, his mom is in her mid-70s.  He lives in Boston while they live in Northern California.  But he needs to keep in touch, he needs to have a look in.  Next year, when iPad acquires a forward-facing camera – so it can be used for video conferencing – he’ll buy them an iPad, and install it on the wall of their kitchen, stuck on there with Velcro, so that he can ring in anytime, and check on them, and they can ring him, anytime.  It’s a bit ‘Jetsons’, when you think about it.  And that’s just what will happen next year.  By 2020 the iPad will be able to track your progress around the house, monitor what prescriptions you’ve taken (or missed), whether you’ve left the house, and for how long.  It’ll be a basic accessory, necessary for everyone caring for someone in their final years – or in their first ones.

Now that we’ve established the basic capabilities and expectations for this device, let’s imagine them in the hands of students everywhere throughout Australia.  No student, however poor, will be without their own iPad – the Government of the day will see to that.  These students of 2020 are at least as well connected as you are, as their parents are, as anyone is.  To them, iPads are not new things; they’ve always been around.  They grew up in a world where touch is the default interface.  A computer mouse, for them, seems as archaic as a manual typewriter does to us.  They’re also quite accustomed to being immersed within a field of very-high-speed mobile broadband.  They just expect it to be ‘on’, everywhere they go, and expect that they will have access to it as needed.

How do we make education in 2020 meet their expectations?  This is not the universe of ‘chalk and talk’.  This is a world where the classroom walls have been effectively leveled by the pervasive presence of the network, and a device which can display anything on that network.  This is a world where education can be provided anywhere, on demand, as called for.  This is a world where the constructivist premise of learning-by-doing can be implemented beyond year two.  Where a student working on an engine can stare at a three-dimensional breakout model of the components while engaging in a conversation with an instructor half a continent away.  Where a student learning French can actually engage with a French student learning English, and do so without much more than a press of a few buttons.  Where a student learning about the Eureka Stockade can survey the ground, iPad in hand, and find within the device hidden depths to the history.  iPad is the handheld schoolhouse, and it is, in many ways, the thing that replaces the chalkboard, the classroom, and the library.

But iPad does not replace the educator.  We need to be very clear on that, because even as educational resources multiply beyond our wildest hopes – more on that presently – students still need someone to guide them into understanding.  The more we virtualize the educational process, the more important and singular our embodied interactions become.  Some of this will come from far away – the iPad offers opportunities for distance education undreamt of just a few years ago – but much more of it will be close up.  Even if the classroom does not survive (and I doubt it will fade away completely in the next ten years, but it will begin to erode), we will still need a place for an educator/mentor to come into contact with students.  That’s been true since the days of Socrates (probably long before that), and it’s unlikely to change anytime soon.  We learn best when we learn from others.  We humans are experts in mimesis, in learning by imitation.  That kind of learning requires us to breathe the same air together.

No matter how much power we gain from the iPad, no matter how much freedom it offers, no device offers us freedom from our essential nature as social beings.  We are born to work together, we are designed to learn from one another.  iPad is an unbelievably potent addition to the educator’s toolbox, but we must remember not to let it cloud our common sense.  It should be an amplifier, not a replacement, something that lets students go further, faster than before.  But they should not go alone.

The constant danger of technology is that it can interrupt the human moment.  We can be too busy checking our messages to see the real people right before our eyes.  This is the dilemma that will face us in the age of the iPad.  Governments will see them as cost-saving devices, something that could substitute for the human touch.  If we lose touch, if we lose the human moment, we also lose the biggest part of our ability to learn.

III:  The Work of Nations

We can reasonably predict that this is the decade of the tablet, and the decade of mobile broadband.  The two of them fuse in the iPad, to produce a platform which will transform education, allowing it to happen anywhere a teacher and a student share an agreement to work together.  But what will they be working on?  Next year we’ll see the rollout of the National Curriculum, which specifies the material to be covered in core subject areas in classrooms throughout the nation.

Many educators view the National Curriculum as a mandate for a bland uniformity, a lowest-common-denominator approach to instruction, which will simply leave the teacher working point-by-point through the curriculum’s arc.  This is certainly not the intent of the project’s creators.  Dr. Evan Arthur, who heads up the Digital Education Revolution taskforce in the Department of Education, Employment and Workplace Relations, publicly refers to the National Curriculum as a ‘greenfields’, as though all expectations were essentially phantoms of the mind, a box we draw around ourselves, rather than one that objectively exists.

The National Curriculum outlines the subject areas to be covered, but says very little if anything about pedagogy.  Instructors and school systems are free to exercise their own best judgment in selecting an approach appropriate to their students, their educators, and their facilities.  That’s good news, and means that any blandness that creeps into pedagogy because of the National Curriculum is more a reflection of the educator than the educational mandate.

Precisely because it places educators and students throughout the nation onto the same page, the National Curriculum also offers up an enormous opportunity.  We know that all year nine students in Australia will be covering a particular suite of topics.  This means that every educator and every student throughout the nation can be drawing from and contributing to a ‘common wealth’ of shared materials, whether they be podcasts of lectures, educational chatrooms, lesson plans, and on and on and on.  As the years go by, this wealth of material will grow as more teachers and more students add their own contributions to it.  The National Curriculum isn’t a mandate, per se; it’s better to think of it as an empty Wikipedia.  All the article headings are there, all the taxonomy, all the cross references, but none of the content.  The next decade will see us all build up that base of content, so that by 2020, a decade’s worth of work will have resulted in something truly outstanding to offer both educators and students in their pursuit of curriculum goals.
Well, maybe.

I say all of this as if it were a sure thing.  But it isn’t.  Everyone secretly suspects the National Curriculum will ruin education.  I ask that we see things differently.  The National Curriculum could be the savior of education in the 21st century, but in order to travel the short distance in our minds between where we are (and where we will go if we don’t change our minds) and where we need to be, we need to think of every educator in Australia as a contributor of value.  More than that, we need to think of every student in Australia as a contributor of value.  That’s the vital gap that must be crossed.  Educators spend endless hours working on lesson plans and instructional designs – they should be encouraged to share this work.  Many of them are too modest or too scared to trumpet their own hard yards – but this is work that educators and students across the nation can benefit from.  Students, as they pass through the curriculum, create their own learning materials, which must be preserved, where appropriate, for future years.

We should do this.  We need to do this.  Right now we’re dropping the best of what we have on the floor as teachers retire or move on in their careers.  This is gold that we’re letting slip through our fingers. We live in an age where we only lose something when we neglect to capture it. We can let ourselves off easy here, because we haven’t had a framework to capture and share this pedagogy.  But now we have the means to capture, a platform for sharing – the Ultranet, and a tool which brings access to everyone – the iPad.  We’ve never had these stars aligned in such a way before.  Only just now – in 2010 – is it possible to dream such big dreams.  It won’t even cost much money.  Yes, the state and federal governments will be investing in iPads and superfast broadband connections for the schools, but everything else comes from a change in our behavior, from a new sense of the full value of our activities.  We need to look at ourselves not merely as the dispensers of education to receptive students, but as engaged participant-creators working to build a lasting body of knowledge.

In so doing we tie everything together, from library science to digital citizenship, within an approach that builds shared value.  It allows a student in Bairnsdale to collaborate with another in Lorne, both working through a lesson plan developed by an educator in Katherine.  Or a teacher in Lakes Entrance to offer her expertise to a classroom in Maffra.  These kinds of things have been possible before, but the National Curriculum gives us the reason to do it.  iPad gives us the infrastructure to dream wild, and imagine how to practice some ‘creative destruction’ in the classroom – tearing down its walls in order to make the classroom a persistent, ubiquitous feature of the environment, to bring education everywhere it’s needed, to everyone who needs it, whenever they need it.

This means that all of the preceding is really part of a larger transformation, from education as this singular event that happens between ages six and twenty-two, to something that is persistent and ubiquitous; where ‘lifelong learning’ isn’t a catchphrase, but rather, a set of skills students begin to acquire as soon as they land in pre-kindy.  The wealth of materials which we will create as we learn how to share the burden of the National Curriculum across the nation have value far beyond the schoolhouse.  In a nation of immigrants, it makes sense to have these materials available, because someone is always arriving in the middle of their lives and struggling to catch up to and integrate themselves within the fabric of the nation.  Education is one way that this happens.  People also need to have increasing flexibility in their career choices, to suit a much more fluid labor market.  This means that we continuously need to learn something new, or something, perhaps, that we didn’t pay much attention to when we should have.  If we can share our learning, we can close this gap.  We can bring the best of what we teach to everyone who has the need to know.

And there we are.  But before I conclude, I should bring up the most obvious point – one so obvious that we might forget it.  The iPad is an excellent toy.  Please play with it.  I don’t mean use it.  I mean explore it.  Punch all the buttons.  Do things you shouldn’t do.  Press the big red button that says, “Don’t press me!”  Just make sure you have a backup first.

We know that children learn by exploration – that’s the foundation of Constructivism – but we forget that we ourselves also learn by exploration.  The joy we feel when we play with our new toy is the feeling a child has when he confronts a box of LEGOs, or a new video game – it’s the joy of exploration, the joy of learning.  That joy is foundational to us.  If we didn’t love learning, we wouldn’t be running things around here.  We’d still be in the trees.

My favorite toys on my iPad are Pocket Universe – which creates a 360-degree real-time observatory; Pulse News – which brings some beauty to my RSS feeds; Observatory – which turns my iPad into a bit of an orrery; Air Video – which allows me to watch videos streamed from my laptop to my iPad; and GoodReader – the one app you simply must spend $1.19 on, because it is the most useful app you’ll ever own.  These are my favorites, but I own many others, and enjoy all of them.  There are literally tens of thousands to choose from, some of them educational, some just for fun.  That’s the point: all work and no play makes iPad a dull toy.

So please, go and play.  As you do, you’ll come to recognize the hidden depths within your new toy, and you’ll probably feel that penny drop, as you come to realize that this changes everything.  Or can, if we can change ourselves.

The Unfinished Project: Exploration, Learning and Networks

I: The Educational Field

We live today in the age of networks.  Having grown from nothing just fifteen years ago, the network has become one of the principal influences in our lives.  We trust the network; we depend on the network; we use the network to make ourselves more effective.  This state of affairs did not develop gradually; rather, we have passed through a series of unpredicted and non-linear shifts in the fabric of culture.

The first of these shifts was coincident with the birth of the Web itself, back in the mid-1990s.  From its earliest days the Web was alluring because it represented all things to all people: it could serve as both resource and repository for anything that might interest us, a platform for whatever we might choose to say.  The truth of those earliest days is that we didn’t really know what we wanted to say; the stereotype of the page where one went on long and lovingly about one’s cat carries an echo of that search for meaning.  The lights were on, but nobody was home.

Drawing the curtain on this more-or-less vapid era of the Web, the second shift began with the collapse of the dot-com bubble in the early 2000s.  The undergrowth cleared away, people could once again focus on the why of the Web.  This was when the Web came into its own as an interactive medium.  The Web could have been an interactive medium from day one – the technology hadn’t changed one bit – but it took time for people to map out the evolving relationship between user and experience.  The Web, we realized, is not a page to read, but rather, a space for exploration, connection and sharing.

This is when things start to get interesting, when ideas like Wikipedia begin to emerge.  Wikipedia is not a technology; at least, it’s not a specific technology.  Wikis have been around since 1995, nearly as old as the Web itself.  Databases are older than the Web, too.  So what is new about Wikipedia?  Simply this: the idea of sharing.  Wikipedia invites us all to share from our expertise, for the benefit of one another.  It is an agreement to share what we know to collectively improve our capability.  If you strip away all of the technology, and all of the hype – both positive and negative – from Wikipedia, what you’re left with is this agreement to share.  In the decade since Wikipedia’s launch we’ve learned to share across a broad range of domains.  This sharing supported by technology is a new thing, and it dramatically increases the allure of the network.  What was merely very interesting back in 1995 became almost overpowering in the years since the turn of the millennium.  It has become harder and harder to imagine a life without the network, because the network provides so much utility.

The final shift occurred in 2007, as Facebook introduced F8, its plug-in architecture, which opened its design – and its data – to outside developers.  Facebook exploded from a few million users to over four hundred million: the third largest nation in the world.  Social networks are significant because they harness and amplify our innate human desire and capability to connect with one another.  We constantly look to our social networks – that is, our real-world networks – to remind us who we are, where we are, and what we’re doing.  These social networks provide our ontological grounding.  When translated into cyberspace, these social networks can become almost impossibly potent – which is why, when they’re used to bully or harass someone, they can lead to such disastrous results.  It becomes almost too easy, and we become almost too powerful.

A lot of what we’ll see in this decade is an assessment of what we choose to do with our new-found abilities.  We can use these social networks to transmit pornographic pictures of one another back and forth at such frequency and density that we simply numb ourselves into a kind of fleshy hypnosis.  That is one possible direction for the future.  Or, we could decide that we want something different for ourselves, something altogether more substantial and meaningful.  But in order to get that sort of clarity, we need to be very clear on what we want – both direction and outcome.  At this point we are simply playing around – with a loaded weapon – hoping that it doesn’t accidentally go off.

Of course it does; someone sets up a Facebook page to memorialize a murdered eight-year-old, but leaves the door open to all comers (believing, unrealistically, that others will share their desire to mourn together), only to see the overflowing sewage of the Internet spill bile and hatred and psychopathology onto a Web page.  This happens again and again; it happened several times in one week in February.  We are not learning the lesson we are meant to learn.  We are missing something.  Partly this is because it is all so new, but partly it is because we do not know what our own intentions are.  Without that, without a stated goal, we cannot winnow the wheat from the chaff.  We will forget to close the windows and lock the doors.  We will amuse ourselves to death.

I mention this because, as educators, it is up to all of us to act as forces for the positive moral good of the culture as a whole.  Cultural values are transmitted by educators; and while parents may be a bigger influence, teachers have their role to play.  Parents are simply overwhelmed by all of this novelty – the Web wasn’t around when they were children, and social networks weren’t around even five years ago.  So, right at this moment in time, educators get to be the adult cultural vanguard, the vital mentoring center.

If we had to do this ourselves, alone, as individuals – or even as individual institutions – the project would almost certainly fail.  After all, how could we hope to balance all of the seductions ‘out there’ against the sense which needs to be taught ‘in here’?  We would simply be overwhelmed – our current condition.  Fortunately, we are as well connected, at least in potential, as any of our students.  We have access to better resources.  And we have more experience, which allows us to put those resources to work.  In short, we are far better placed to make use of social media than our charges, even if they seem native to the medium while we profess to be immigrants.

One thing that has changed, because of the second shift, the trend toward sharing, is that educational resources are available now as never before.  Wikipedia led the way, but it is just a small island in a much larger sea of content, provided by individuals and organizations throughout the world.  iTunes University, YouTube University, the numberless podcasts and blogs that have sprung up from experts on every subject from macroeconomics to the history of Mesoamerica – all of it searchable by Google, all of it instantaneously accessible – every one of these points to the fact that we have clearly entered a new era, where we are surrounded by and saturated with an ‘educational field’ of sorts.  Whatever you need to know, you’re soaking in it.

This educational field is brand-new.  No one has made systematic use of it, no teacher, no institution, no administration.  But that doesn’t lessen its impact.  We all consult Wikipedia when we have some trivial question to answer; that behavior is the archetype for where education is headed in the 21st century – real-time answers on-demand, drawn from the educational field.

Paired with the educational field is the ability for educators to establish strong social connections – not just with other educators, but laterally, through the student to the parents, through the parents to the community, and so on, so that the educator becomes ineluctably embedded in a web of relationships which define, shape and determine the pedagogical relationship.  Educators have barely begun to make use of the social networking tools on offer; just to have a teacher ‘friend’ a student in Facebook is, to some eyes, a cause for concern – what could possibly be served by that relationship, one which subverts the neat hierarchy of the 19th century classroom?

The relationship is the essence of the classroom, that which remains when all the other trivia of pedagogy are stripped away.  The relationship between the teacher and the student is at the core of the magical moment when knowledge is transmitted between the generations.  We now have the greatest tool ever created by the hand of man to reinforce and strengthen that relationship.  And we need to use it, or else we will all sink beneath a rising tide of noise and filth and distraction.

But how?

II: The Unfinished Project

The roots of today’s talk lie in a public conversation I had with Dr. Evan Arthur, who manages the Digital Education Revolution Group within the Department of Education, Employment and Workplace Relations.  As part of this conversation, I asked him about educational styles, and, in particular, Constructivism.  As conceived by Jean Piaget and his successors across the 20th century, Constructivism states that the child learns through play – or rather, through repeated interactions with the world.  Schemas are created by the child and put to the test, where they either succeed or fail.  Failed schemas are revised and re-tested, while successful schemas are incorporated into ever-more-comprehensive ones.  Through many years of research we know that we learn the physics of the real world through a constant process of experimentation.  Every time a toddler dumps a cup of juice all over himself, he’s actually conducting an investigation into the nature of the real.

The basic tenets of Constructivism are not in dispute, although many educators have consistently resisted its underlying idea – that it is the child who determines the direction of learning.  This conflicts directly with the top-down, teacher-to-student model of education which we are all intimately familiar with, which has determined the nature of pedagogy and even the architecture of our classrooms.  This is the grand battle between play and work; between ludic exploration and the hard grind of assimilating the skills that situate us within an ever-more-complex culture.

At the moment, this trench warfare has frozen us in a stalemate located, for the most part, between year two and year three.  In the first two years education has a strong ludic component, and students are encouraged to explore.  But in year three the process becomes routinized, formalized and very strict.  Certainly, eight-year-olds are better able to understand restrictions than six-year-olds.  They’re better at following the rules, at coloring within the lines.  But it seems as though we’ve taken advantage of the fact that an older child is a more compliant one.  It is true that as we advance in years, our ludic nature becomes tempered by an adult’s sensibility.  But humans retain the urge to play throughout their lives – to a greater degree than any other species we know of.  It could very well be that our ability to learn is intimately tied to our desire to play.

If we are prepared to swallow this bitter pill, and acknowledge that play is an essential part of the learning process, we have no choice but to follow this idea wherever it leads us.  Which leads me back to my conversation with Dr. Arthur.  I asked him about the necessity of play, and he framed his response by talking about “The Unfinished Constructivist Project”.  It is a revolution trapped in mid-stride, a revelation that, somehow, hasn’t penetrated all the way through our culture.  We still insist that instruction is the preferred mechanism for education, when we have ample evidence to suggest this simply isn’t true.  Let me be clear: instruction is not the same thing as guidance.  I am not suggesting that children simply do as they please.  The more freedom they have, the more need they have for a strong, stabilizing force to guide them as they explore.  This may be the significant (if mostly hidden) objection to the Constructivist project: it is simply too expensive.  The human resources required to give each child their own mentor as they work their way through the corpus of human knowledge would simply overwhelm any current educational model, with the exception of homeschooling.  I don’t know what the student-teacher ratio would need to be in a fully realized Constructivist educational system, but I doubt that twenty-to-one would be sufficient.  That’s the level needed to maintain a semblance of order, more a peacekeeping force than an army of mentors.

There have been occasional attempts to create a fully Constructivist educational system, but these, like the manifold utopian communities which have been founded, flourish briefly, then fade or fracture, and do not survive the test of time.  The level of dedication and involvement required from both educator/mentors and parents is simply too big an ask.  This is the sort of thing that a hunter-gatherer culture has no trouble with: the entire world is the classroom, the child explores it, and an adult is always there to offer an explanation or story to round out the child’s knowledge.  We live in an industrial culture (at least, our classrooms do), where there is strict differentiation between ‘education’ and the other activities in life, where adults are ‘educators’ or they are not, where everything is highly formal, almost ritualized.  (Consider the highly regulated timings of the school day – equal parts order from chaos, and ritual.)  There could never be enough support within such a framework to sustain a Constructivist model.  This is why we have the present stalemate; we know the right thing to do, but, heretofore, we have lacked the resources to actualize this knowledge.

That has now changed.

The educational field must be recognized as the key element which will power the unfinished Constructivist revolution.  The educational field does not recognize the boundaries of the classroom, the institution, or even the nation.  It is simply pervasive, ubiquitous and available as needed.  Within that field, both students and educator/mentors can find all of the resources needed to make the Constructivist project a continuing success.  There need be no rupture between years two and three, no transformation of educational style from inward- to outward-directed.  Instead, there can and should be a continual deepening of the child’s exploration of the corpus of knowledge, under the guidance of a network of mentors who share the burden.  We already have most of the resources in place to assure that the child can have a continuous and continually strengthening relationship with knowledge: Wikipedia, while not perfect, points toward the kinds of knowledge sharing systems which will become both commonplace and easily created throughout the 21st century.

Sharing needs to become a foundational component in a modern educational system.  Every time a teacher finds a resource to aid a student in their exploration, that should be noted and shared broadly.  As students find things on their own – and they will be far better at it than most educators – these, too, should be shared.  We should be creating a great, linked trail behind us as we learn, so that others, when exploring, will have paths to guide them – should they choose to follow.  We have systems that can do this, but we have not applied these systems to education – in large part because this is not how we conceive of education.  Or rather, this is not how we conceive of education in the classroom.  I do a fair bit of corporate consulting, and this sort of ‘knowledge capture’ and ‘knowledge management’ is becoming essential to the operation of a 21st century business.  Many businesses are creating their own, ad-hoc systems to share knowledge resources among their staff, as they understand how important this is for professional development.

This is a new battle line opened up in the war between the unfinished Constructivist project and the older, more formal methods of education.  The corporate world doesn’t have time for methodologies which have become obsolete.  Employees must be constantly up-to-date.  Professionals – particularly doctors and lawyers – must remain continuously well-informed about developments in their respective fields.  Those in management need real-time knowledge streams in order to recognize and solve problems as they emerge.  This is all much more ludic than formal, much more self-directed than guided, much more juvenile than adult – even though these are among the most adult of all activities.  This disjunction, this desynchronization between the needs of the world-at-large and the delivery capabilities of an ever-more-obsolete educational system is the final indictment of things-as-they-are.  Things will change; either education will become entirely corporatized, or educators will wholly embrace the unfinished Constructivist project.  Either way the outcome will be the same.

Fortunately, the educational field has something else to offer educators beyond the near-infinite supply of educational resources.  It is a network of individuals.  It is a social network, connected together via bonds of familiarity and affinity.  The student is embedded in a network with his mentors; the mentors are connected to other students, and to other mentors; everyone is connected to the parents, and the community.  In this sense, the formal space of the ‘classroom’ collapses, undone by the pressure of the social network, which has effectively caused the classroom walls to implode.  The outside world wants to connect to what happens within the crucible of the classroom, or, more specifically, with the magical moment of knowledge transference within the student’s mind.  This is what we should be building our social networks to support.  At present, social networks like Facebook and Twitter are dull, unsophisticated tools, capable of connecting us together, but completely inadequate when it comes to shaping that connection around a task – such as mentoring, or exploring knowledge.  A second generation of social networks is already reaching release.  These tools display a more sophisticated edge, and will help to support the kinds of connections we need within the educational field.

None of this, as wonderful as it might sound (and I admit that it may also seem pretty frightening), is happening in a vacuum.  There are larger changes afoot within Australia, and no vision for the future of education in Australia could ignore them.  We must find a way to harmonize those changes with the larger, more fundamental changes overtaking the entire educational system.

III: The National Curriculum

The underlying fear of a Constructivist educational project is that it would simply give children an excuse to avoid the tough work of education.  There is a persistent belief that children will simply load up on educational ‘candy’, without eating their all-so-essential ‘vegetables’ – that is, the basic skills which form the foundation for future learning.  Were children left entirely to their own devices, there might be some danger of this – though, now that we live in the educational field, even that possibility seems increasingly remote.  Children do not live in isolation: they are surrounded by adults who want them to grow into successful adults.  In prehistoric times, adults simply had to be adults around children for the transference of life-skills to take place.  Children copied, imitated, and aped adults – and still do.  This learning-by-mimesis is still a principal factor in the education of the child, though it is not one often highlighted by the educational system.  Industrial culture has separated the adult from the child, putting one into the office, the other into the school.  That separation, and the specialization which is the hallmark of the Industrial Age, broke the natural and persistent mentorship of parenting into discrete units: this much in the home, this much in the school.  If we do not trust children to consume a nourishing diet of knowledge, it is because we do not trust ourselves to prepare it for them.  The separation by function led to a situation where no one is responsible for the whole thread of a life.  Parents look to teachers.  Teachers look to parents.  Everyone, everywhere, looks to authority for responsible solutions.

There is no authority anywhere.  Either we do this ourselves, or it will not happen.  We have to look to ourselves, build the networks between ourselves, reach out and connect from ourselves, if we expect to be able to resist a culture which wants to turn the entire human world into candy.  This is not going to be easy; if it were, it would have happened by itself.  Nor is it instantaneous.  Nothing like this happens overnight.  Furthermore, it requires great persistence.  In the ideal situation, it begins at birth and continues on seamlessly until death.  In that sense, this connected educational field mirrors and is a reflection of our human social networks, the ones we form from our first moments of awareness.  But unlike that more ad-hoc network, this one has a specific intent: to bring the child into knowledge.

Knowledge, of course, is very big, very vague, mostly undefined.  Meanwhile, there are specific skills and bodies of knowledge which we have nominated as important: the ability to read and write; to add and subtract, multiply and divide; a basic understanding of the physical and living worlds; the story of the nation and its peoples.  These have very recently been crystallized in a ‘National Curriculum’, which seeks to standardize the pedagogical outcomes across Australia for all students in years 1 through 10.  Parents and educators have already begun to argue about the inclusion or exclusion of elements within that curriculum.  I was taught phonics over forty years ago, but apparently it’s still a matter of some debate.  The teaching of history is always going to be contentious, because the story we tell ourselves about who we are is necessarily political.  So the adults will argue it out – year after year, decade after decade – while the educators and students face this monolithic block of text which seems to be the complete antithesis of the Constructivist project.  And, looked at one way, the National Curriculum is exactly the type of top-down, teacher-to-student, sit-down-and-shut-up sort of educational mandate which is no longer effective in the business world.

All of which means it’s probably best that we avoid viewing the National Curriculum as a validation, encouraging us to continue on with things as they are.  Instead, it should be used as a mandate for change.  There are several significant dimensions to this mandate.

First, putting everyone onto the same page, pedagogically, opens up an opportunity for sharing which transcends anything previously possible.  Teachers and students from all over Australia can contribute to or borrow from a wealth of resources shared by those who have passed through the National Curriculum before them.  Every teacher and every student should think of themselves as part of a broader collective of learners and mentors, all working through the same basic materials.  In this sense, the National Curriculum isn’t a document so much as it is the architecture of a network.  It is the way all things educational are connected together.  It is the wiring underneath all of the pedagogy, providing both a scaffolding and a switchboard for the learning moment.

Is it possible to conceive of a library organized along the lines of the National Curriculum?  Certainly a librarian would have no problem configuring a physical library to meet the needs of the curriculum.  It’s even easier to organize similar sorts of resources in cyberspace.  Not only is it easy, there’s now a mandate to do so.  We know what sorts of resources we’ll need, going forward.  Nothing should be stopping us from creating collective resources – similar to an Australian Wikipedia, and perhaps drawing from it – which will serve the pedagogical requirements of the National Curriculum.  We should be doing this now.

Second, we need to think of the National Curriculum as an opportunity to identify all of the experts in all of the areas covered by the curriculum, and, once they’ve been identified, we must create a strong social network, with them inside, giving them pride of place as ‘nodes of expertise’.  Knowledge is not enough; it must be paired with mentors who have been able to put that knowledge into practice with excellence.  The National Curriculum is the perfect excuse to bring these experts together, to make them all connected and accessible to everyone throughout the nation who could benefit from their wisdom.

Here, once again, it is best to think of the National Curriculum not as a document but as a network – a way to connect things, and people, together.  The great strength of the National Curriculum is, as Dr. Evan Arthur put it, that it is a ‘greenfields’.  Literally anything is possible.  We can go in any direction we choose.  Inertia would have us do things as we’ve always done them, even as the centrifugal forces of culture beyond the classroom pull in a different direction.  Inertia cannot be a guiding force.  It must be resisted at every turn, not in pursuit of some educational utopia or false revolution, but because we have come to realize that the network is the educational system.

Moving from where we are to where we need to be seems like a momentous transition.  But the Web saw repeated momentous transitions in its first fifteen years, and we managed all of those successfully.  We can absorb huge amounts of change and novelty so long as the frame which supports us is strong and consistent.  That’s the essence of the parent-child relationship: so long as the child feels it is being cared for, it can endure almost anything.  This means that we shouldn’t run around freaking out.  The sky is not falling.  The world is not ending.  If anything, we are growing closer together, more connected, becoming more important to one another.  It may feel a bit too close from time to time, as we learn how to keep a healthy distance in these new relationships, but that closeness supports us all.  It can keep children from falling through the net of opportunity.  It can see us advance into a culture where every child has the full benefit of an excellent education, without respect to income or circumstance.

That is the promise.  We have the network.  We live in the educational field.  We now have the National Curriculum to wire it all together.  But can we marry the demands of the National Curriculum with the ludic call of Constructivism?  Can we create a world where we literally play our way into learning?  This is more than video games with math drills embedded in them.  It’s about capturing the interests of a child and using them as a springboard for the investigation of their world, their nation, their home.  That can only happen if mentors are deeply involved and embedded in the child’s life from its earliest years.

I don’t have any easy answers here.  There is no magic wand to wave over this whole uncoordinated mess to make it all cohere.  No one knows what’s expected of them anymore – educators least of all.  Are we parents?  Are we ‘friends’?  Where do we stand?  I know this: we stand most securely when we stand connected.

Nexus

I: Sharing

This is the era of sharing. When the histories of our time are written a hundred years from now, sharing is the salient feature which historians will focus upon. The entirety of culture, from 1999 forward, looks like a gigantic orgy of sharing.

This morning I want to take a look at this phenomenon in some detail, and tie it into some Australian educational ‘megatrends’ – forces which are altering the landscape throughout the nation. Sharing can be used as an engine to power these forces, but that will only happen if we understand how sharing works.

At some level, sharing is totally familiar to us – we’ve been sharing since we’ve been very small. But sharing, at least in the English language, has two slightly different meanings: we can share things, or we can share thoughts. We adults spend a lot of time teaching children the importance of sharing their things; we never need to teach them to share their thoughts. The sharing of things is a cultural behavior, valued by our civilization, whereas the sharing of thoughts is an innate behavior – probably located somewhere deep in our genes.

Fifteen years ago, Nicholas Negroponte characterized this as the divide between bits and atoms. We have to teach children to share their atoms – their toys and games – but they freely share their bits. In fact, they’re so promiscuous with their bits that this has produced its own range of problems.

It was only a decade ago that Shawn Fanning released a program which he’d written for his mates at Boston’s Northeastern University. Napster allowed anyone with a computer and a broadband internet connection to share their MP3 music files freely. Within a few months, millions of broadband-connected college students were freely trading their music collections with one another – without any thought of copyright or ownership. Let me reiterate: thoughts of copyright or piracy simply didn’t enter into their thinking. To them, this was all about sharing.

This act of sharing was a natural consequence of the ‘hyperconnectivity’ these kids had achieved via their broadband connections. When you connect people together, they will begin to share the things they care about. If you build a system that allows them to share the music they care about, they’ll share that. If you build a system that allows them to share the videos they care about, they’ll share that. If you build a system that allows them to share the links they care about, they’ll share them.

Clever web developers and entrepreneurs have built all of these systems, and many, many more. For the first time we can use technology to accelerate and amplify the innate human desire to share bits, and so, in a case of history repeating itself, we have amplified our social and sharing systems the way the steam engine amplified our physical power two hundred years ago.

In the earliest years of this sharing revolution, people shared the objects of culture: music, videos, jokes, links, photos, writing, and so on. Just this alone has had an enormous impact on business and culture: the recording industries, which were flying high a decade ago, have been humbled. Television networks have gotten in front of the Internet distribution of their own shows, to take the sting out of piracy. Newspapers, caught in the crossfire between a controlled system of distribution and a world where everyone distributes everything, have begun to disappear. And this is just the beginning.

In 2001, another experiment in sharing started in earnest: Wikipedia encouraged a small community of contributors to add their own entries to an ever-expanding encyclopedia. In this case contributors were asked to share their knowledge – however specific or particular – with a greater whole. Although it grew slowly in its earliest days, after about two years Wikipedia hit an inflection point and began to grow explosively.

Knowledge seems to have a gravitational quality; when enough of it is gathered together in one place, it attracts more knowledge. That’s certainly the story of Wikipedia, which has grown to encompass more than three million articles in English, on nearly every topic under the sun. Wikipedia is only the most successful of many efforts to produce a ‘collective intelligence’ out of the ‘wisdom of crowds’. There are many others – including one I’ll come to shortly.

One of the singular features of Wikipedia – one that we never think about even though it’s the reason we use Wikipedia – is simply this: Wikipedia makes us smarter. We can approach Wikipedia full of ignorance and leave it knowing a lot of facts. Facts need to be put into practice before they can be transformed into knowledge, but at least with Wikipedia we now have the opportunity to load up on the facts. And this is true globally: because of Wikipedia every single one of us now has the opportunity to work with the best possible facts. We can use these facts to make better decisions, decisions which will improve our lives. Wikipedia may seem innocuous, but it’s really quite profound.

How profound? If we peel away all of the technology behind Wikipedia, all of the servers and databases and broadband connections of the world’s sixth most popular website, what are we left with? Only this: an agreement to share what we know. It’s that agreement, and not the servers or databases or bandwidth which makes Wikipedia special, and it’s that agreement historians will be writing about in a hundred years. That agreement will endure – even if, for some bizarre reason, Wikipedia should cease to exist – because that agreement is one of the engines driving our culture forward.

Another example of sharing, just as relevant to educators, comes from a site which launched back in 1999 as TeacherRatings.com. Like Wikipedia, it grew slowly, and went through ownership changes, emerging finally as RateMyProfessors.com, which is owned by MTV, and which now boasts ten million ratings of one million professors, lecturers and instructors. This huge wealth of ratings came about because RateMyProfessors.com attached itself to the innate desire to share. Students want to share their experiences with their instructors, and RateMyProfessors.com gives them a forum to do just that.

Just as is the case with Wikipedia, anyone can become smarter by using RateMyProfessors.com. You can learn which instructors are good teachers, which grade easily, which will bore you to tears, and so forth. You can then put that information to work to make your life better – avoiding the professors (or schools) which have the worst teachers, taking courses from the instructors who get the highest scores.

That shared knowledge, put to work, changes the power balance within the university. For the last six hundred years, universities have been able to saddle students with lousy instructors – who might happen to be fantastic researchers – and there wasn’t much that students could do about it except grumble. Now, with RateMyProfessors.com, students can pass their hard-won knowledge down to subsequent generations of students. The university proposes, the student disposes. Worse still, the instructors receiving the highest ratings on RateMyProfessors.com have been the subjects of bidding wars, as various universities try to woo them, and add them to their faculties. All of this has given students a power they’ve never had, a power they never could have until they began to share their experiences, and translate that shared knowledge into action.

Sharing is wonderful, but sharing has consequences. We can now amplify and accelerate our sharing so that it can cross the world in a matter of moments, copied and replicated all the way. The power of the network has driven us into a new era. Sharing culture, knowledge, and power has destabilized all of our institutions. Businesses totter and collapse; universities change their practices; governments create task forces to get in front of what everyone calls ‘something-2.0’. It could be web2.0, education2.0, or government2.0. It doesn’t matter. What does matter is that something big is happening, and it’s all driven by our ability to share.

OK, so we can share. But why? How does it matter to us?

II: Greenfields

Before we can look at why sharing matters so much in this particular moment, we need to spend some time examining the three big events which will revolutionize education in Australia over the next decade. Each of them is revolutionary in itself; their confluence will result in a compressed wave of change – a concrescence – that will radically transform all educational practice.

The first of these events will affect all Australians equally. At this moment in time, Australia lives with medium-to-low-end broadband speeds, and most families have broadband connections which, because of metering, fundamentally limit their use. This is how it’s been since the widespread adoption of the Internet in the mid-1990s, and it’s nearly impossible to imagine that things could be different. The hidden lesson of the last fifteen years is that the Internet is something that needs to be rationed carefully, because there’s not enough to go around.

The Government wants us to adopt a different point of view. With the National Broadband Network (NBN), they intend to build a fibre-optic infrastructure which will deliver at least 100 megabit-per-second connections to every home, every school, and every business in Australia. Although no one has come out and said it explicitly, it’s clear that the Government wants this connection to be unmetered – the Internet will finally be freely available in Australia, as it is in most other countries.

How this will change our usage of the Internet is anyone’s guess. And this is the important point – we don’t know what will happen. We have critics of the NBN claiming that there’s no good reason for it, that Australians are already adequately served by the broadband we’ve already got, but I regularly hear stories of schools which block YouTube – not because of its potentially distracting qualities, but because they can’t handle the demand for bandwidth.

That, writ large, describes Australia in 2009. Broadband is the oxygen of the 21st century. Australia has been subjected to a slow strangulation. Once we can breathe freely, new horizons will open to us. We know this is true from history: no one really knew what we’d do with broadband once we got it. No one predicted Napster or YouTube or Skype, no one could have predicted any of them – or any of a thousand other innovations – before we had widespread access to broadband. Critics who argue there’s no need for high-speed broadband have simply failed to learn the lessons of history.

Now, before you think that I’m carrying the Government’s water, let me find fault with a few things. I believe that the Government isn’t thinking big enough – by the time the NBN is fully deployed, around 2017, a hundred megabit-per-second connection will simply be mid-range among our OECD peers. The Government should have accepted the technical challenge and gone for a gigabit network. Eventually, they will. Further, I believe the NBN will come with ‘strings attached’, specifically the filtering and regulatory regime currently being proposed by Senator Conroy’s ministry. The Government wants to provide the nation a ‘clean feed’, sanitized according to its interpretation of the law; when everyone in Australia gets their Internet service from the Commonwealth, we may have no choice in the matter.

The next event – and perhaps the most salient, in the context of this conference – is the Government’s commitment to provide a computer to every student in years 9 through 12. During the 2007 election, the Prime Minister talked about using computers for ‘math drills’ and ‘foreign language training’. The line about providing computers in the classroom was a popular one, although it is now clear that the Government’s ministers didn’t think through the profound effect of pervasive computing in the classroom.

First, it radically alters the power balance in the classroom. Most students have more facility with their computers than their teachers do. Some teachers are prepared to work from humility and accept instruction from their students. For other teachers, such an idea is anathema. The power balance could be righted somewhat with extensive professional development for the teachers – and time for that professional development – but schools have neither the budget nor the time to allow for this. Instead, the computers are being dumped into the classroom without any thought as to how they will affect pedagogy.

Second, these computers are being handed to students who may not be wholly aware of the potency of these devices. We’ve seen how a single text message, forwarded endlessly, can spark a riot on a Sydney beach, or how a party invitation, posted to Facebook, can lead to a crowd of five hundred and a battle with the police. Do teenagers really understand how to use the network to their advantage, how to reinforce their own privacy and protect themselves? Do they know how easy it is to ruin their own lives – or someone else’s – if they abuse the power of the network, that amplifier and accelerator of sharing?

Teachers aren’t the only ones who need some professional development. We need to provide a strong curriculum in ‘digital citizenship’; just as teenagers get instruction before they get a driver’s license, so they need instruction before they get to ‘spin the wheels’ of these ubiquitous educational computers.

This isn’t a problem that can be solved by filtering the networks at the schools. Students are surrounded by too many devices – mobiles as well as computers – which connect to the network and which require a degree of caution and education. This isn’t a job that the schools should be handling alone; this is an opportunity for all of the adult voices of culture – parents, caretakers, mentors, educators and administrators – to speak as one about the potentials and pitfalls of network culture.

Finally, what is the goal here? Right now the students and teachers are getting their computers. Next year the deployment will be nearly complete. What, in the end, is the point? Is it simply to give Kevin Rudd a tick on his ‘promises fulfilled’ list when he goes up for re-election? Or is this an opening to something greater? Is this simply more of the same or something new? I haven’t seen any educator anywhere present anything that looks at all like an integrated vision of what these laptops mean to students, teachers or the classroom. They’re bling: pretty, but an entirely useless accessory. I’m not saying that this is a bad initiative – indeed, I believe the Government should be lauded for its efforts. But everything, thus far, feels only like a beginning, the first meter around a very long course.

Now we come to the most profound of the three events on the educational horizon: the National Curriculum. Although the idea of a national curriculum has been mooted by several successive governments, it looks as though we’ll finally achieve a deliverable curriculum sometime in the early years of the Rudd Government. There’s a long way to go, of course – and a lot of tussling between the states and the various educational stakeholders – but the process is well underway. It’s expected that curricula in ‘English, Mathematics, the Sciences and History’ will be ready for implementation at the start of 2011, not very far away. As these are the core elements in any school curriculum, they will affect every school, every teacher, and every student in Australia.

A few weeks ago I got the opportunity to share the stage with Dr. Evan Arthur, the Group Manager of the Digital Education Group at the Commonwealth Department of Education, Employment and Workplace Relations. During a ‘fireside chat’, when I asked him a series of questions, the topic turned to the National Curriculum. At this point Dr. Arthur became rather thoughtful, and described the National Curriculum as a “greenfields”. He went on to describe the curriculum documents, when completed, as a set of ‘strings’ which could be handled almost as if they were a Christmas tree, ready to have content hung all over them. The National Curriculum means that every educator in Australia is, for the first time, working to the same set of ‘strings’.

That’s when I became aware that Dr. Arthur saw the National Curriculum as an enormous opportunity to redraw the possibilities for education. We are all being given an opportunity to start again – to throw out the old rule book and start over with another one. But in order to do this we’ll have to take everything we’ve covered already – about sharing, the National Broadband Network, the Digital Education Revolution and the National Curriculum, then blend them together. Together they produce a very potent mix, a nexus of possibilities which could fundamentally transform education in Australia.

III: At The Nexus

Our future is a future of sharing; we’ll be improving constantly, finding better and better ways to share with one another. To this I want to add something more subtle; not a change in technology – we have a lot of technology – but rather, a change of direction and intent. We could choose to see the National Curriculum as simply another mandate from the Federal government, something that will make the educational process even more formal, rigorous, and lifeless. That option is open to us – and, to many of us, that’s the only option visible. I want to suggest that there is another, wildly different path open before us, right next to this well-trodden and much more prosaic laneway. Rather than viewing the National Curriculum as a done deal, wouldn’t it be wiser to consider it as an open invitation to participation and sharing?

After all, the National Curriculum mandates what must be taught, but says little to nothing about how it gets taught. Teachers remain free to pursue their own pedagogical ends. That said, teachers across Australia will, for the first time, be pursuing the same ends. This opens up a space and a rationale for sharing that never existed before. Everyone is pulling in the same direction; wouldn’t it make sense for teachers, students, administrators and parents to share the experience?

Let’s be realistic: whether or not we seek to formalize this sharing of experience, it will happen anyway, on BoredOfStudies.org, RateMyTeachers.com, a hundred other websites, a thousand blogs, a hundred thousand Facebook profiles, and a million tweets. But if it all happens out there, informally, we miss an enormous opportunity to let sharing power our transition into the National Curriculum. We’d be letting our greatest and most powerful asset slip through our fingers.

So let me turn this around and project us into a future where we have decided to formalize our shared experience of the National Curriculum. What might that look like? A teacher might normally prepare their curriculum and pedagogical materials at the beginning of the school term; during that preparation process they would check into a shared space, organized around the National Curriculum (this should be done formally, through an organization such as Education.AU, but could – and would – happen informally, via Google) to find out what other educators have created and shared as curriculum materials. Educators would find extensive notes, lesson plans, probably numerous recorded podcasts, links to materials on Wikipedia and other online resources, and so forth – everything that an educator might need to create an effective learning experience. Furthermore, educators would be encouraged to share and connect around any particular ‘string’ in the National Curriculum. The curriculum thus becomes a focal point for organization and coordination rather than a brute mandate of performance.

Students, already well-connected, will continue to use informal channels to communicate about their lessons; the National Curriculum gives the educational sector (and perhaps some enterprising entrepreneur) an opportunity to create a space where those curriculum ‘strings’ translate into points of contact. Students working through a particular point in the curriculum would know where they are, and would know where to gather together for help and advice. The same wealth of materials available to educators would be available to students. None of this constitutes ‘peeking at the answers’; rather, it is part of an integrated effort to give students every advantage while working their way through the National Curriculum. A student in Townsville might gain some advantage from a podcast of a teacher in Albany, might want to collaborate on research with students from Ballarat, might ask some questions of an educator in Lismore. The student sits in the middle of a nexus of resources designed to offer them every opportunity to succeed; if the methodology of their own classroom is a poor fit to their learning style, chances are high that they’ll find someone else, somewhere else, who makes a better match.

All of this sounds a lot like an educational utopia, but all of it is within our immediate grasp. We live at the confluence of a broadly sharing culture, a nation which is getting ubiquitous high-speed broadband, students and educators with pervasive access to computers, and a National Curriculum to act as an organizing principle. It is precisely because the stars are aligned so auspiciously that we can dream big dreams. This is the moment when anything is possible.

This transition could simply reinforce the last hundred years of industrial era education, where one size fits all, where the student enters ‘airplane mode’ when they walk into the classroom – all devices disconnected, eyes up and straight ahead for the boredom of a fifty-minute excursion through some meaningless and disconnected body of knowledge. Where the computer simply becomes an electronic textbook for the distribution of media, rather than a portal for the exploration of the knowledge shared by others. Where the educator finds themselves increasingly bound to a curriculum which limits their freedom to find expression and meaning in their work. And all of this will happen, unless we recognize the other path that has opened before us. Unless we change direction, and set our feet on that path. Because if we keep on as we have been, we’ll simply end up with what we have today. And that would be a big mistake.

It needn’t be this way. We can take advantage of our situation, of the concrescence of opportunities opening to us. It will take some work, some time and some money. But more than anything else it requires a change of heart. We must stop thinking of the classroom as a solitary island of peace and quiet in the midst of a stormy sea, and rather think of it as a node within a network, connected and receptive. We must stop thinking of educators as valiant but solitary warriors, and transform them into a connected and receptive army. And we must recognize that this generation of students are so well connected on every front that they outpace us in every advance. They will be teaching us how to make this transition seem effortless.

Can we do this? Can we screw our courage up and take a leap into a great unknown, into an educational future which draws from our past, but is not bound to it? With parents and politicians crying out for metrics and endless assessments, we are losing the space to experiment, to play, to explore. Next year, the National Curriculum will land like a ton of bricks, even as it presents the opportunity for a Great Escape. The next twelve months will be crucial. If we can only change the way we think about what is possible, we will change what is possible. It’s a big ask. It’s the challenge of our times. Will we rise to meet it? Can we make an agreement to share what we know and what we do? That’s all it takes. So simple and so profound.

Slides for this talk are available here.


Digital Citizenship

Introduction: Out of Control

A spectre is haunting the classroom, the spectre of change. Nearly a century of institutional forms, initiated at the height of the Industrial Era, will change irrevocably over the next decade. The change is already well underway, but this change is not being led by teachers, administrators, parents or politicians. Coming from the ground up, the true agents of change are the students within the educational system. Within just the last five years, both power and control have swung so quickly and so completely in their favor that it’s all any of us can do to keep up. We live in an interregnum, between the shift in power and its full actualization: These wacky kids don’t yet realize how powerful they are.

This power shift does not have a single cause, nor could it be thwarted through any single change, to set the clock back to a more stable time. Instead, we are all participating in a broadly-based cultural transformation. The forces unleashed cannot simply be dammed up; thus far they have swept aside every attempt to contain them. While some may be content to sit on the sidelines and wait until this cultural reorganization plays itself out, as educators you have no such luxury. Everything hits you first, and with full force. You are embedded within this change, as much so as this generation of students.

This paper outlines the basic features of this new world we are hurtling towards, pointing out the obvious rocks and shoals we must avoid, collisions which could dash us to bits. It is a world where even the illusion of control has been torn away from us. A world wherein the first thing we need to recognize is that what is called for in the classroom is a strategic détente, a détente based on mutual interest and respect. Without those two core qualities we have nothing, and chaos will drown all our hopes for worthwhile outcomes. These outcomes are not hard to achieve; one might say that any classroom which lacks mutual respect and interest is inevitably doomed to failure, no matter the tenor of the times. But just now, in this time, that failure happens altogether more quickly.

Hence I come to the title of this talk, “Digital Citizenship”. We have given our children the Bomb, and they can – if they so choose – use it to wipe out life as we know it. Right now we sit uneasily in an era of mutually-assured destruction, all the more dangerous because these kids don’t know how fully empowered they are. They could pull the pin by accident. For this reason we must understand them, study them intently, like anthropologists doing field research with an undiscovered tribe. They are not the same as us. Unwittingly, we have changed the rules of the world for them. When the Superpowers stared each other down during the Cold War, each was comforted by the fact that the other had essentially the same hopes and concerns underneath the patina of Capitalism or Communism. This time around, in this Cold War, we stare into eyes so alien they could be another form of life entirely. And this, I must repeat, is entirely our own doing. We have created the cultural preconditions for this Balance of Terror. It is up to us to create an environment that fosters respect, trust, and a new balance of powers. To do that we must first examine the nature of the tremendous changes which have fundamentally altered the way children think.

I: Primary Influences

I am a constructivist. Constructivism states (in terms that now seem fairly obvious) that children learn the rules of the world from their repeated interactions within it. Children build schema, which are then put to the test through experiment; if these experiments succeed, those schema are incorporated into ever-larger schema, but if they fail, it’s back to the drawing board to create new schema. This all seems straightforward enough – even though Einstein pronounced it, “An idea so simple only a genius could have thought of it.” That genius, Jean Piaget, remains an overarching influence across the entire field of childhood development.

At the end of the last decade I became intensely aware that the rapid technological transformations of the past generation must necessarily impact upon the world views of children. At just the time my ideas were gestating, I was privileged to attend a presentation given by Sherry Turkle, a professor at the Massachusetts Institute of Technology, and perhaps the most subtle thinker in the area of children and technology. Turkle talked about her current research, which involved a recently-released and fantastically popular children’s toy, the Furby.

For those of you who may have missed the craze, the Furby is an animatronic creature which has expressive eyes, touch sensors, and a modest capability with language. When first powered up, the Furby speaks ‘Furbish’, an artificial language which the child can decode by looking up words in a dictionary booklet included in the package. As the child interacts with the toy, the Furby’s language slowly adopts more and more English phrases. All of this is interesting enough, but more interesting, by far, is that the Furby has needs. Furby must be fed and played with. Furby must rest and sleep after a session of play. All of this gives the Furby some attributes normally associated with living things, and this gave Turkle an idea.

Constructivists had already determined that between ages four and six children learn to differentiate between animate objects, such as a pet dog, and inanimate objects, such as a doll. Since Furby showed qualities which placed it into both ontological categories, Turkle wondered whether children would class it as animate or inanimate. What she discovered during her interviews with these children astounded her. When the question was put to them of whether the Furby was animate or inanimate, the children said, “Neither.” The children intuited that the Furby resided in a new ontological class of objects, between the animate and inanimate. It’s exactly this ontological in-between-ness of Furby which causes some adults to find them “creepy”. We don’t have a convenient slot to place them into our own world views, and therefore reject them as alien. But Furby was completely natural to these children. Even the invention of a new ontological class of being-ness didn’t strain their understanding. It was, to them, simply the way the world works.

Writ large, the Furby tells the story of our entire civilization. We make much of the difference between “digital immigrants”, such as ourselves, and “digital natives”, such as these children. These kids are entirely comfortable within the digital world, having never known anything else. We casually assume that this difference is merely a quantitative facility. In fact, the difference is almost entirely qualitative. The schema upon which their world-views are based, the literal ‘rules of their world’, are completely different. Furby has an interiority hitherto only ascribed to living things, and while it may not make the full measure of a living thing, it is nevertheless somewhere on a spectrum that simply did not exist a generation ago. It is a magical object, sprinkled with the pixie dust of interactivity, come partially to life, and closer to a real-world Pinocchio than we adults would care to acknowledge.

If Furby were the only example of this transformation of the material world, we would be able to easily cope with the changes in the way children think. It was, instead, part of a leading edge of a breadth of transformation. For example, when I was growing up, LEGO bricks were simple, inanimate objects which could be assembled in an infinite arrangement of forms. Today, LEGO Mindstorms allow children to create programmable forms, using wheels and gears and belts and motors and sensors. LEGO is no longer passive, but active and capable of interacting with the child. It, too, has acquired an interiority which teaches children that at some essential level the entire material world is poised at the threshold of a transformation into the active. A child playing with LEGO Mindstorms will never see the material world as wholly inanimate; they will see it as a playground requiring little more than a few simple mechanical additions, plus a sprinkling of code, to bring it to life. Furby adds interiority to the inanimate world, but LEGO Mindstorms empowers the child with the ability to add this interiority themselves.

The most significant of these transformational innovations is one of the most recent. In 2004, Google purchased Keyhole, Inc., a company that specialized in geospatial data visualization tools. A year later Google released the first version of Google Earth, a tool which provides a desktop environment wherein the entire Earth’s surface can be browsed, at varying levels of resolution, from high Earth orbit down to the level of storefronts, anywhere throughout the world. This tool, both free and flexible, has fomented a revolution in the teaching of geography, history and political science. No longer constrained to the archaic Mercator Projection atlas on the wall, or the static globe-as-a-ball perched on one corner of the teacher’s desk, Google Earth presents Earth-as-a-snapshot.

We must step back and ask ourselves the qualitative lesson, the constructivist message of Google Earth. Certainly it removes the problem of scale; the child can see the world from any point of view, even multiple points of view simultaneously. But it also teaches them that ‘to observe is to understand’. A child can view the ever-expanding drying of southern Australia along with data showing the rise in temperature over the past decade, all laid out across the continent. The Earth becomes a chalkboard, a spreadsheet, a presentation medium, where the thorny problems of global civilization and its discontents can be explored in exquisite detail. In this sense, no problem, no matter how vast, no matter how global, will be seen as being beyond the reach of these children. They’ll learn this not from what the teacher says, nor from the homework assignments they complete, but through interaction with the technology itself.

The generation of children raised on Google Earth will graduate from secondary schools in 2017, just at the time the Government plans to complete its rollout of the National Broadband Network. I reckon these two tools will go hand-in-hand: broadband connects the home to the world, while Google Earth brings the world into the home. Australians, particularly beset by the problems of global warming, climate, and environmental management, need the best tools and the best minds to solve the problems which already beset us. Fortunately it looks as though we are training a generation for leadership, using the tools already at hand.

The existence of Google Earth as an interactive object changes the child’s relationship to the planet. A simulation of Earth is a profoundly new thing, and naturally is generating new ontological categories. Yet again, and completely by accident, we have profoundly altered the world view of this generation of children and young adults. We are doing this to ourselves: our industries turn out products and toys and games which apply the latest technological developments in a dazzling variety of ways. We give these objects to our children, more or less blindly unaware of how this will affect their development. Then we wonder how these aliens arrived in our midst, these ‘digital natives’ with their curious ways. Ladies and gentlemen, we need to admit that we have done this to ourselves. We and our technological-materialist culture have fostered an environment of such tremendous novelty and variety that we have changed the equations of childhood.

Yet these technologies are only the tip of the iceberg. These are the technologies of childhood, of a world of objects, where the relationship is between child and object. This is not the world of adults, where the relations between objects are thoroughly confused by the relationships between adults. In fact, it can be said that for as much as adults are obsessed with material possessions, we are only obsessed with them because of our relationships to other adults. The corner we turn between childhood and young adulthood is indicative of a change in the way we think, in the objects of attention, and in the technologies which facilitate and amplify that attention. These technologies have also suddenly and profoundly changed, and, again, we are almost completely unaware of what that has done to those wacky kids.

II: Share This Thought!

Australia now has more mobile phone subscribers than people. We have reached 104% subscription levels, simply because some of us own and use more than one handset. This phenomenon has been repeated globally; there are something like four billion mobile phone subscribers throughout the world, representing approximately three point six billion customers. That’s well over half the population of planet Earth. Given that there are only about a billion people in the ‘advanced’ economies in the developed world – almost all of whom now use mobiles – two and a half billion of the relatively ‘poor’ also have mobiles. How could this be? Shouldn’t these people be spending money on food, housing, and education for their children?

As it turns out (and there are numerous examples to support this) a mobile handset is probably the most important tool someone can employ to improve their economic well-being. A farmer can call ahead to markets to find out which is paying the best price for his crop; the same goes for fishermen. Tradesmen can close deals without the hassle and lost time involved in travel; craftswomen can coordinate their creative resources with a few text messages. Each of these examples can be found in any Bangladeshi city or African village. In the developed world, the mobile was nice but non-essential: no one is late anymore, just delayed, because we can always phone ahead. In the parts of the world which never had wired communications, the leap into the network has been explosively potent.

The mobile is a social accelerant; it does for our innate social capabilities what the steam shovel did for our mechanical capabilities two hundred years ago. The mobile extends our social reach, and deepens our social connectivity. Nowhere is this more noticeable than in the lives of those wacky kids. At the beginning of this decade, researcher Mizuko Ito took a look at the mobile phone in the lives of Japanese teenagers. Ito published her research in Personal, Portable, Pedestrian: Mobile Phones in Japanese Life, presenting a surprising result: these teenagers were sending and receiving a hundred text messages a day among a close-knit group of friends (generally four or five others), starting when they first arose in the morning, and going on until they fell asleep at night. This constant, gentle connectivity – which Ito named ‘co-presence’ – often consisted of little of substance, just reminders of connection.

At the time many of Ito’s readers dismissed this phenomenon as something to be found among those ‘wacky Japanese’, with their technophilic bent. A decade later this co-presence is the standard behavior for all teenagers everywhere in the developed world. An Australian teenager thinks nothing of sending and receiving a hundred text messages a day, within their own close group of friends. A parent who might dare to look at the message log on a teenager’s phone would see very little of significance and wonder why these messages needed to be sent at all. But the content doesn’t matter: connection is the significant factor.

We now know that the teenage years are when the brain ‘boots’ into its full social awareness, when children leave childhood behind to become fully participating members within the richness of human society. This process has always been painful and awkward, but just now, with the addition of the social accelerant and amplifier of the mobile, it has become almost impossibly significant. The co-present social network can help cushion the blow of rejection, or it can impel the teenager to greater acts of folly. Both sides of the technology-as-amplifier are ever-present. We have seen bullying by mobile and over YouTube or Facebook; we know how quickly the technology can overrun any of the natural instincts which might prevent us from causing damage far beyond our intention – keep this in mind, because we’ll come back to it when we discuss digital citizenship in detail.

There is another side to sociability, both far removed from this bullying behavior and intimately related to it – the desire to share. The sharing of information is an innate human behavior: since we learned to speak we’ve been talking to each other, warning each other of dangers, informing each other of opportunities, positing possibilities, and just generally reassuring each other with the sound of our voices. We’ve now extended that four-billion-fold, so that half of humanity is directly connected, one to another.

We say little of substance to those we know well, though we may say it continuously. What do we say to those we know not at all? In this case we share not words but the artifacts of culture. We share a song, or a video clip, or a link, or a photograph. Each of these is just as important as spoken words, but each of these places us at a comfortable distance within the intimate act of sharing. 21st-century culture looks like a gigantic act of sharing. We share music, movies and television programmes, driving the creative industries to distraction – particularly with the younger generation, who see no need to pay for any cultural product. We share information and knowledge, creating a wealth of blogs, and resources such as Wikipedia, the universal repository of factual information about the world as it is. We share the minutiae of our lives in micro-blogging services such as Twitter, and find that, being so well connected, we can also harvest the knowledge of our networks to become ever-better informed, and ever more effective individuals. We can translate that effectiveness into action, and become potent forces for change.

Everything we do, both within and outside the classroom, must be seen through this prism of sharing. Teenagers log onto video chat services such as Skype, and do their homework together, at a distance, sharing and comparing their results. Parents offer up their kindergartener’s presentations to other parents through Twitter – and those parents respond to the offer. All of this both amplifies and undermines the classroom. The classroom has not dealt with the phenomenal transformation in the connectivity of the broader culture, and is in danger of becoming obsolesced by it.

Yet if the classroom were wholeheartedly to embrace connectivity, what would become of it? Would it simply dissolve into a chaotic sea, or is it strong enough to chart its own course in this new world? This same question confronts every institution, of every size. It affects the classroom first simply because the networked and co-present polity of hyperconnected teenagers has reached it first. It is the first institution that must transform because the young adults who are its reason for being are the agents of that transformation. There’s no way around it, no way to set the clock back to a simpler time, unless, Amish-like, we were simply to dispose of all the gadgets which we have adopted as essential elements in our lifestyle.

This, then, is why these children hold the future of the classroom-as-institution in their hands, this is why the power-shift has been so sudden and so complete. This is why digital citizenship isn’t simply an academic interest, but a clear and present problem which must be addressed, broadly and immediately, throughout our entire educational system. We already live in a time of disconnect, where the classroom has stopped reflecting the world outside its walls. The classroom is born of an industrial mode of thinking, where hierarchy and reproducibility were the order of the day. The world outside those walls is networked and highly heterogeneous. And where the classroom touches the world outside, sparks fly; the classroom can’t handle the currents generated by the culture of connectivity and sharing. This can not go on.

When discussing digital citizenship, we must first look to ourselves. This is more than a question of learning the language and tools of the digital era; we must take the life-skills we have already gained outside the classroom and bring them within. But beyond this, we must relentlessly apply network logic to the work of our own lives. If that work is as educators, so be it. We must accept the reality of the 21st century: that, more than anything else, this is the networked era, and that this network has gifted us with new capabilities even as it presents us with new dangers. Both gifts and dangers are issues of potency; the network has made us incredibly powerful. The network is smarter, faster and more agile than the hierarchy; when the two collide – as they’re bound to, with increasing frequency – the network always wins. A text message can unleash revolution, or land a teenager in jail on charges of peddling child pornography, or spark a riot on a Sydney beach; Wikipedia can drive Britannica, a quarter-millennium-old reference work, out of business; an outsider candidate can get himself elected president of the United States because his team mastered the logic of the network. In truth, we already live in the age of digital citizenship, but so many of us don’t know the rules, and hence, are poor citizens.

Now that we’ve explored the dimensions of the transition in the understanding of the younger generation, and the desynchronization of our own practice within the world as it exists, we can finally tackle the issue of digital citizenship. Children and young adults who have grown up in this brave new world, who have already created new ontological categories to frame it in their understanding, won’t have time or attention for preaching and screeching from the pulpit in the classroom, or the ‘bully pulpits’ of the media. In some ways, their understanding already surpasses ours, but their apprehension of consequential behavior does not. It is entirely up to us to bridge this gap in their understanding, but I do not mean to imply that educators can handle this task alone. All of the adult forces of the culture must be involved: parents, caretakers, educators, administrators, mentors, authority and institutional figures of all kinds. We must all be pulling in the same direction, lest the threads we are trying to weave together unravel.

III: 20/60 Foresight

While on a lecture tour last year, a Queensland teacher said something quite profound to me. “Giving a year 7 student a laptop is the equivalent of giving them a loaded gun.” Just as we wouldn’t think of giving this child a gun without extensive safety instruction, we can’t even consider giving this child a computer – and access to the network – without extensive training in digital citizenship. But the laptop is only one device; any networked device has the potential for the same pitfalls.

Long before Sherry Turkle explored Furby’s effect on the world-view of children, she examined how children interact with computers. In her first survey, The Second Self: Computers and the Human Spirit, she applied Lacanian psychoanalysis and constructivism to build a model of how children interacted with computers. In the earliest days of the personal computer revolution, these machines were not connected to any networks, but were instead laboratories where the child could explore themselves, creating a ‘mirror’ of their own understanding.

Now that almost every computer is fully connected to the billion-plus regular users of the Internet, the mirror no longer reflects the self, but the collective yet highly heterogeneous tastes and behaviors of mankind. The opportunity for quiet self-exploration drowns amidst the clamor from a very vital human world. In the space between the singular and the collective, we must provide an opportunity for children to grow into a sense of themselves, their capabilities, and their responsibilities. This liminal moment is the space for an education in digital citizenship. It may be the only space available for such an education, before the lure of the network sets behavioral patterns in place.

Children must be raised to have a healthy respect for the network from their earliest awareness of it. The network access of young children is generally closely supervised, but, as they turn the corner into tweenage and secondary education, we need to provide another level of support, which fully briefs these rapidly maturing children on the dangers, pitfalls, opportunities and strengths of network culture. They already know how to do things, but they do not have the wisdom to decide when it is appropriate to do them, and when it is appropriate to refrain. That wisdom is the core of what must be passed along. But wisdom is hard to transmit in words; it must flow from actions and lessons learned. Is it possible to develop a lesson plan which imparts the lessons of digital citizenship? Can we teach these children to tame their new powers?

Before a child is given their own mobile – something that happens around age 12 here in Australia, though that is slowly dropping – they must learn the right way to use it. Not the perfunctory ‘this is not a toy’ talk they might receive from a parent, but a more subtle and profound exploration of what it means to be directly connected to half of humanity, and how, should that connectivity go awry, it could seriously affect someone’s life – possibly even their own. Yes, the younger generation has different values where the privacy of personal information is concerned, but even they have limits they want to respect, and circles of intimacy they want to defend. Showing them how to reinforce their privacy with technology is a good place to start in any discussion of digital citizenship.

Similarly, before a child is given a computer – either at home or in school – it must be accompanied by instruction in the power of the network. A child may have a natural facility with the network without having any sense of the power of the network as an amplifier of capability. It’s that disconnect which digital citizenship must bridge.

It’s not my role to be prescriptive. I’m not going to tell you to do this or that particular thing, or outline a five-step plan to ensure that the next generation avoid ruining their lives as they come online. This is a collective problem which calls for a collective solution. Fortunately, we live in an era of collective technology. It is possible for all of us to come together and collaborate on solutions to this problem. Digital citizenship is an issue which has global reach; the UK and the US are both confronting similar issues, and both, like Australia, fail to deal with them comprehensively. Perhaps the Australian College of Educators can act as a spearhead on this issue, working in concert with other national bodies to develop a program and curriculum in digital citizenship. It would be a project worthy of your next fifty years.

In closing, let’s cast our eyes forward fifty years, to 2060, when your organization will be celebrating its hundredth anniversary. We can only imagine the technological advances of the next fifty years in the fuzziest of terms. You need only cast yourselves back fifty years to understand why. Back then, a computer as powerful as my laptop wouldn’t have fit into a single building – or even a single city block. It very likely would have filled a small city, requiring its own power plant. If we have come so far in fifty years, judging where we’ll be in fifty years’ time is beyond the capabilities of even the most able futurist. We can only say that computers will become pervasive, woven nearly invisibly through the fabric of human culture.

Let us instead focus on how we will use technology in fifty years’ time. We can already see the shape of the future in one outstanding example – a website known as RateMyProfessors.com. Here, in a database of nine million reviews of one million teachers, lecturers and professors, students can learn which instructors bore, which grade easily, which excite the mind, and so forth. This simple site – which grew out of the power of sharing – has radically changed the balance of power on university campuses throughout the US and the UK. Students can learn from others’ mistakes, and repeat their triumphs. Universities, which might try to corral students into lectures with instructors who might not be exemplars of their profession, find themselves unable to fill those courses. Worse yet, bidding wars have broken out between universities seeking to fill their ranks with the instructors who receive the highest rankings.

Alongside the rise of RateMyProfessors.com, there has been an exponential increase in the amount of lecture material you can find online, whether on YouTube, or iTunes University, or any number of dedicated websites. Those lectures also have ratings, so it is already possible for a student to get to the best and most popular lectures on any subject, be it calculus or Mandarin or the medieval history of Europe.

Both of these trends are accelerating because both are backed by the power of sharing, the engine driving all of this. As we move further into the future, we’ll see the students gradually take control of the scheduling functions of the university (and probably in a large number of secondary school classes). These students will pair lecturers with courses using software to coordinate both. More and more, the educational institution will be reduced to a layer of software sitting between the student, the mentor-instructor and the courseware. As the university dissolves in the universal solvent of the network, the capacity to use the network for education increases geometrically; education will be available everywhere the network reaches. It already reaches half of humanity; in a few years it will cover three-quarters of the population of the planet. Certainly by 2060 network access will be thought of as a human right, much like food and clean water.

In 2060, the Australian College of Educators may be more of an ‘Invisible College’ than anything based in rude physicality. Educators will continue to collaborate, but without much of the physical infrastructure we currently associate with educational institutions. Classrooms will self-organize and disperse organically, driven by need, proximity, or interest, and the best instructors will find themselves constantly in demand. Life-long learning will no longer be a catch-phrase, but a reality for the billions of individuals all focusing on improving their effectiveness within an ever-more-competitive global market for talent. (The same techniques employed by RateMyProfessors.com will impact all the other professions, eventually.)

There you have it. The human future is both more chaotic and more potent than we can easily imagine, even if we have examples in our present which point the way to where we are going. And if this future sounds far away, keep this in mind: today’s year 10 student will be retiring in 2060. This is their world.

Inflection Points

I: The Universal Solvent

I have to admit that I am in awe of iTunes University. It’s just amazing that so many well-respected universities – Stanford, MIT, Yale, and Uni Melbourne – are willing to put their crown jewels – their lectures – online for everyone to download. It’s outstanding when even one school provides a wealth of material, but as other schools provide their own material, we get to see some of the virtues of crowdsourcing. First, you have a virtuous cycle: as more material is shared, more material will be made available to share. After the virtuous cycle gets going, it’s all about a flight to quality.

When you have half a dozen – or a hundred – lectures on calculus, which one do you choose? The one featuring the best lecturer with the best presentation skills, the best examples, and the best math jokes – of course. This is my only complaint with iTunes University – you can’t rate the various lectures on offer. You can know which ones have been downloaded most often, but that’s not precisely the same thing as which calculus seminar or which sociology lecture is the best. So as much as I love iTunes University, I see it as halfway there. Perhaps Apple didn’t want to turn iTunes U into a popularity contest, but, without that vital bit of feedback, it’s nearly impossible for us to winnow the wheat from the educational chaff.

This is something that has to happen inside the system; it could happen across a thousand educational blogs spread out across the Web, but then it’s too diffuse to be really helpful. The reviews have to be coordinated and collated – just as with RateMyProfessors.com.

Say, that’s an interesting point. Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University? If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings. And as iTunes University gets bigger and bigger, RateMyLectures.com becomes ever more important, the ultimate guide to the ultimate source of educational multimedia on the Internet. One needs the other to be wholly useful; without ratings iTunes U is just an undifferentiated pile of possibilities. But with ratings, iTunes U becomes a highly focused and effective tool for digital education.
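To make the distinction concrete – why ratings, properly aggregated, beat raw download counts – here is a minimal sketch of how a hypothetical RateMyLectures.com might rank lectures. All of the names and numbers below are illustrative assumptions, not a description of any real site: it uses a Bayesian average, so a lecture with three glowing ratings doesn’t leapfrog one with two hundred consistently strong ones.

```python
# Hypothetical sketch: ranking lectures by a Bayesian average rating.
# A raw mean would let three 5-star votes outrank two hundred
# near-5-star votes; blending in a global prior corrects for that.

def bayesian_average(ratings, prior_mean=3.0, prior_weight=10):
    """Blend a lecture's ratings with a global prior.

    With few ratings the score stays near prior_mean; as ratings
    accumulate, the score converges on the lecture's true average.
    """
    if not ratings:
        return prior_mean
    total = sum(ratings) + prior_mean * prior_weight
    count = len(ratings) + prior_weight
    return total / count

# Illustrative data: lecture identifiers and their star ratings.
lectures = {
    "calculus-101-a": [5, 5, 5],                    # few, but glowing
    "calculus-101-b": [5, 4, 5, 5, 4] * 40,         # many solid ratings
}

# Rank lectures best-first by their adjusted score.
ranked = sorted(lectures,
                key=lambda name: bayesian_average(lectures[name]),
                reverse=True)
```

Under this scheme the heavily-reviewed lecture ranks first, even though its raw mean is slightly lower – which is exactly the winnowing that download counts alone can’t provide.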

Now let’s cast our minds ahead a few semesters: iTunes U is bigger and better than ever, and RateMyLectures.com has benefited from the hundreds of thousands of contributed reviews. Those reviews extend beyond the content in iTunes U, out into YouTube and Google Video and Vimeo and Blip.tv and wherever people are creating lectures and putting them online. Now anyone can come by the site and discover the absolute best lecture on almost any subject they care to research. The net is now cast globally; I can search for the best lecture on Earth, so long as it’s been captured and uploaded somewhere, and someone’s rated it on RateMyLectures.com.

All of a sudden we’ve imploded the boundaries of the classroom. The lecture can come from the US, or the UK, or Canada, or New Zealand, or any other country. Location doesn’t matter – only its rating as ‘best’ matters. This means that every student, every time they sit down at a computer, already has – or soon will have – the absolute best lectures in the world available to them. That’s just a mind-blowing fact. It grows very naturally out of our desire to share and our desire to share ratings about what we have shared. Nothing extraordinary needed to happen to produce this entirely extraordinary state of affairs.

The network is acting like a universal solvent, dissolving all of the boundaries that have kept things separate. It’s not just dissolving the boundaries of distance – though it is doing that – it’s also dissolving the boundaries of preference. Although there will always be differences in taste and delivery, some instructors are simply better lecturers – in better command of their material – than others. Those instructors will rise to the top. Just as RateMyProfessors.com has created a global market for the lecturers with the highest ratings, RateMyLectures.com will create a global market for the best performances, the best material, the best lessons.

That RateMyLectures.com is only a hypothetical shouldn’t put you off. Part of what’s happening at this inflection point is that we’re all collectively learning how to harness the network for intelligence augmentation – Engelbart’s final triumph. All we need do is identify an area which could benefit from knowledge sharing and, sooner rather than later, someone will come along with a solution. I’d actually be very surprised if a service a lot like RateMyLectures.com doesn’t already exist. It may be small and unimpressive now. But Wikipedia was once small and unimpressive. If it’s useful, it will likely grow large enough to be successful.

Of course, lectures alone do not an education make. Lectures are necessary but are only one part of the educational process. Mentoring and problem solving and answering questions: all of these take place in the very real, very physical classroom. The best lectures in the world are only part of the story. The network is also transforming the classroom from the inside out, melting it down, and forging it into something that looks quite a bit different from the classroom we’ve grown familiar with over the last 50 years.

II: Fluid Dynamics

If we take the examples of RateMyProfessors.com and RateMyLectures.com and push them out a little bit, we can see the shape of things to come. Spearheaded by Stanford University and the Massachusetts Institute of Technology, both of which have placed their entire set of lectures online through iTunes University, these educational institutions assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, but it discounts the possibility that some individuals or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open” school, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.

In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In an age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them to make sense of it, contextualizing and informing their understanding, even as the students continue to work their way through the ever-growing set of information. The instructor cannot know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. The instructors facilitate and mentor, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.

The administration has gone, the instructor’s role has evolved, so what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If it can happen entirely online, that will be the classroom. If it requires substantial presence with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode, vanishing online, and explode: the world will become the classroom.

This, then, can already be predicted from current trends; as the network begins to destabilize the institutional hierarchies in education, everything else becomes inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst-case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by these students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.

III: Digital Citizenship

Obviously, much of what I’ve described here in the “melting down” of the educational process applies first and foremost to university students. That’s where most of the activity is taking place. But I would argue that it only begins with university students. From there – just like Facebook – it spreads across the gap between tertiary and secondary education, and into the high schools and colleges.

This is significant and interesting because it’s at this point that we, within Australia, run headlong into the Government’s plan to provide laptops for all year 9 through year 12 students. Some schools will start earlier; there’s a general consensus among educators that year 7 is the earliest time a student should be trusted to behave responsibly with their “own” computer. Either way, the students will be fully equipped and able to use all of the tools at hand to manage their own education.

But will they? Some of this is a simple question of discipline: will the students be disciplined enough to take an ever-more-active role in the co-production of their education? As ever, the question is neither black nor white; some students will demonstrate the qualities of discipline needed to allow them to assume responsibility for their education, while others will not.

But, somewhere along here, there’s the presumption of some magical moment during the secondary school years, when the student suddenly learns how to behave online. And we already know this isn’t happening. We see too many incidents where students make mistakes, behaving badly without fully understanding that the whole world really is watching.

In the early part of this year I did a speaking tour with the Australian Council of Educational Researchers; during the tour I did a lot of listening. One thing I heard loud and clear from the educators is that giving a year 7 student a laptop is the functional equivalent of giving them a loaded gun. And we shouldn’t be surprised, when we do this, when there are a few accidental – or volitional – shootings.

I mentioned this in a talk to TAFE educators last week, and one of the attendees suggested that we needed to teach “Digital Citizenship”. I’d never heard the phrase before, but I’ve taken quite a liking to it. Of course, by the time a student gets to TAFE, the damage is done. We shouldn’t start talking about digital citizenship in TAFE. We should be talking about it from the first days of secondary education. And it’s not something that should be confined to the school: parents are on the hook for this, too. Even when the parents are not digitally literate, they can impart the moral and ethical lessons of good behavior to their children, lessons which will transfer to online behavior.

Make no mistake, without a firm grounding in digital citizenship, a secondary student can’t hope to make sense of the incredibly rich and impossibly distracting world afforded by the network. Unless we turn down the internet connection – which always seems like the first option taken by administrators – students will find themselves overwhelmed. That’s not surprising: we’ve taught them few skills to help them harness the incredible wealth available. In part that’s because we’re only just learning those skills ourselves. But in part it’s because we would have to relinquish control. We’re reluctant to do that. A course in digital citizenship would help both students and teachers feel more at ease with one another when confronted by the noise online.

Make no mistake, this inflection point in education is inevitably going to cross the gap between tertiary and secondary school and students. Students will be able to do for themselves in ways that were never possible before. None of this means that the teacher or even the administrator has necessarily become obsolete. But the secondary school of the mid-21st century may look a lot more like a website than a campus. The classroom will have a fluid look, driven by the teacher, the students and the subject material.

Have we prepared students for this world? Have we given them the ability to make wise decisions about their own education? Or are we like those university administrators who mutter about how RateMyProfessors.com has ruined all their carefully-laid plans? The world where students were simply the passive consumers of an educational product is coming to an end. There are other products out there, clamoring for attention – you can thank Apple for that. And YouTube.

Once we get through this inflection point in the digital revolution in education, we arrive in a landscape that’s literally mind-blowing. We will each have access to educational resources far beyond anything on offer at any other time in human history. The dream of life-long learning will be simply a few clicks away for most of the billion people on the Internet, and many of the four billion who use mobiles. It will not be an easy transition, nor will it be perfect on the other side. But it will be incredible, a validation of everything Douglas Engelbart demonstrated forty years ago, and an opportunity to create a truly global educational culture, focused on excellence, and dedicated to serving all students, everywhere.

Crowdsource Yourself

I: Ruby Anniversary

Today is a very important day in the annals of computer science. It’s the anniversary of the most famous technology demo ever given. Not, as you might expect, the first public demonstration of the Macintosh (which happened in January 1984), but something far older and far more important. Forty years ago today, December 9th, 1968, in San Francisco, a small gathering of computer specialists came together to get their first glimpse of the future of computing. Of course, they didn’t know that the entire future of computing would emanate from this one demo, but the next forty years would prove that point.

The maestro behind the demo – leading a team of developers – was Douglas Engelbart. Engelbart was a wunderkind from SRI, the Stanford Research Institute, a think-tank spun out from Stanford University to collaborate with various moneyed customers – such as the US military – on future technologies. Of all the futurist technologists, Engelbart was the future-i-est.

In the middle of the 1960s, Engelbart had come to an uncomfortable realization: human culture was growing progressively more complex, while human intelligence stayed within the same comfortable range we’d known for thousands of years. In short order, Engelbart assessed, our civilization would start to collapse from its own complexity. The solution, Engelbart believed, would come from tools that could augment human intelligence. Create tools to make men smarter, and you’d be able to avoid the inevitable chaotic crash of an overcomplicated civilization.

To this end – and with healthy funding from both NASA and DARPA – Engelbart began work on the Online System, or NLS. The first problem in intelligence augmentation: how do you make a human being smarter? The answer: pair humans up with other humans. In other words, networking human beings together could increase the intelligence of every human being in the network. The NLS wasn’t just the online system, it was the networked system. Every NLS user could share resources and documents with other users. This meant NLS users would need to manage these resources in the system, so they needed high-quality computer screens, and a windowing system to keep the information separated. They needed an interface device to manage the windows of information, so Engelbart invented something he called a ‘mouse’.

I’ll jump to the chase: that roomful of academics at the Fall Joint Computer Conference saw the first broadly networked system featuring raster displays – the forerunner of all displays in use today; windowing; manipulation of on-screen information using a mouse; document storage and manipulation using the first hypertext system ever demonstrated, and videoconferencing between Engelbart, demoing in San Francisco, and his colleagues 30 miles away in Menlo Park.

In other words, in just one demo, Engelbart managed to completely encapsulate absolutely everything we’ve been working toward with computers over the last 40 years. The NLS was easily 20 years ahead of its time, but its influence is so pervasive, so profound, so dominating, that it has shaped nearly every major problem in human-computer interface design since its introduction. We have all been living in Engelbart’s shadow, basically just filling out the details in his original grand mission.

Of all the technologies rolled into the NLS demo, hypertext has arguably had the most profound impact. Known as the “Journal” on NLS, it allowed all the NLS users to collaboratively edit or view any of the documents in the NLS system. It was the first groupware application, the first collaborative application, the first wiki application. And all of this more than 20 years before the Web came into being. To Engelbart, the idea of networked computers and hypertext went hand-in-hand; they were indivisible, absolutely essential components of an online system.

It’s interesting to note that although the Internet has been around since 1969 – nearly as long as the NLS – it didn’t take off until the advent of a hypertext system – the World Wide Web. A network is mostly useless without a hypermedia system sitting on top of it, and multiplying its effectiveness. By itself a network is nice, but insufficient.

So, more than for any other single individual in the field of computer science, we find ourselves living in the world that Douglas Engelbart created. We use computers with raster displays and manipulate windows of hypertext information using mice. We use tools like video conferencing to share knowledge. We augment our own intelligence by turning to others.

That’s why the “Mother of All Demos,” as it’s known today, is probably the most important anniversary in all of computer science. It set the stage for the world we live in, more so than we recognized even a few years ago. You see, one part of Engelbart’s revolution took rather longer to play out. This last innovation of Engelbart’s is only just beginning.

II: Share and Share Alike

In January 2002, Oregon State University, the alma mater of Douglas Engelbart, decided to host a celebration of his life and work. I was fortunate enough to be invited to OSU to give a talk about hypertext and knowledge augmentation, an interest of mine and a persistent theme of my research. Not only did I get to meet the man himself (quite an honor), I got to meet some of the other researchers who were picking up where Engelbart had left off. After I walked off stage, following my presentation, one of the other researchers leaned over to me and asked, “Have you heard of Wikipedia?”

I had not. This is hardly surprising; in January 2002 Wikipedia was only about a year old, and had all of 14,000 articles – about the same number as a children’s encyclopedia. Encyclopedia Britannica, though it had put itself behind a “paywall,” had over a hundred thousand quality articles available online. Wikipedia wasn’t about to compete with Britannica. At least, that’s what I thought.

It turns out that I couldn’t have been more wrong. Over the next few months – as Wikipedia approached 30,000 articles in English – an inflection point was reached, and Wikipedia started to grow explosively. In retrospect, what happened was this: people would drop by Wikipedia, and if they liked what they saw, they’d tell others about Wikipedia, and perhaps make a contribution. But they first had to like what they saw, and that wouldn’t happen without a sufficient number of articles, a sort of “critical mass” of information. While Wikipedia stayed beneath that critical mass it remained a toy, a plaything; once it crossed that boundary it became a force of nature, gradually then rapidly sucking up the collected knowledge of the human species, putting it into a vast, transparent and freely accessible collection. Wikipedia thrived inside a virtuous cycle where more visitors meant more contributors, which meant more visitors, which meant more contributors, and so on, endlessly, until, as of this writing, there are 2.65 million English-language articles in Wikipedia.
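The critical-mass dynamic described above can be sketched as a toy feedback model. To be clear, every number here – the threshold, the growth rate, the time span – is invented purely to show the shape of the curve; this is not a model of Wikipedia itself.

```python
# Toy model of a sharing site's virtuous cycle: a site only retains
# visitors (and so gains contributors) once it holds enough content
# to be useful. All parameters are invented for illustration.

def simulate(articles, months, critical=20_000, peak_rate=0.15):
    """Grow an article count month by month.

    The monthly growth rate scales with how 'useful' the site is,
    saturating once the article count passes the critical mass.
    """
    for _ in range(months):
        usefulness = min(1.0, articles / critical)   # readers stay only if enough content
        articles = int(articles * (1 + peak_rate * usefulness))
    return articles

below = simulate(1_000, months=24)    # beneath critical mass: a toy, barely moving
above = simulate(30_000, months=24)   # past the inflection point: compounding growth

print(below, above)
```

Starting below the threshold, two years of simulated growth changes almost nothing; starting above it, the same rule compounds into an explosion – the same “gradually, then rapidly” pattern Wikipedia traced.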

Wikipedia’s biggest problem today isn’t attracting contributions, it’s winnowing the wheat from the chaff. Wikipedia has constant internal debates about whether a subject is important enough to deserve an entry in its own right; whether this person has achieved sufficient standards of notability to merit a biographical entry; whether this exploration of a fictional character in a fictional universe belongs in Wikipedia at all, or might be better situated within a dedicated fan wiki. Wikipedia’s success has been proven beyond all doubt; managing that success is the task of the day.

While we all rely upon Wikipedia more and more, we haven’t really given much thought as to what Wikipedia gives us. At its most basic level, Wikipedia gives us high-quality factual information. Within its major subject areas, Wikipedia’s veracity is unimpeachable, and has been put to the test by publications such as Nature. But what do these high-quality facts give us? The ability to make better decisions.

Given that we try to make decisions about our lives based on the best available information, the better that information is, the better our decisions will be. This seems obvious when spelled out like this, but it’s something we never credit Wikipedia with. We think about being able to answer trivia questions or research topics of current fascination, but we never think that every time we use Wikipedia to make a decision, we are improving our decision making ability. We are improving our own lives.

This is Engelbart’s final victory. When I met him in 2002, he seemed mostly depressed by the advent of the Web. At that time – pre-Wikipedia, pre-Web 2.0 – the Web was mostly thought of as a publishing medium, not as something that would allow the multi-way exchange of ideas. Engelbart had known for forty years that sharing information is the cornerstone of intelligence augmentation. And in 2002 there wasn’t a whole lot of sharing going on.

It’s hard to imagine the Web of 2002 from our current vantage point. Today, when we think about the Web, we think about sharing, first and foremost. The web is a sharing medium. There’s still quite a bit of publishing going on, but that seems almost an afterthought, the appetizer before the main course. I’d have to imagine that this is pleasing Engelbart immensely, as we move ever closer to the models he pioneered forty years ago. It’s taken some time for the world to catch up with his vision, but now we seem to have a tool fit for knowledge augmentation. And Wikipedia is really only one example of the many tools we have available for knowledge augmentation. Every sharing tool – Digg, Flickr, YouTube, del.icio.us, Twitter, and so on – provides an equal opportunity to share and to learn from what others have shared. We can pool our resources more effectively than at any other time in history.

The question isn’t, “Can we do it?” The question is, “What do we want to do?” How do we want to increase our intelligence and effectiveness through sharing?

III: Crowdsource Yourself

Now we come to all of you, here together for three days, to teach and to learn, to practice and to preach. Most of you are the leaders in your particular schools and institutions. Most of you have gone way out on the digital limb, far ahead of your peers. Which means you’re alone. And it’s not easy being alone. Pioneers can always be identified by the arrows in their backs.

So I have a simple proposal to put to you: these three days aren’t simply an opportunity to bring yourselves up to speed on the latest digital wizardry, they’re a chance to increase your intelligence and effectiveness, through sharing.

All of you, here today, know a huge amount about what works and what doesn’t, about curricula and teaching standards, about administration and bureaucracy. This is hard-won knowledge, gained on the battlefields of your respective institutions. Now just imagine how much it could benefit all of us if we shared it, one with another. This is the sort of thing that happens naturally and casually at a forum like this: a group of people will get to talking, and, sooner or later, all of the battle stories come out. Like old Diggers talking about the war.

I’m asking you to think about this casual process a bit more formally: How can you use the tools on offer to capture and share everything you’ve learned? If you don’t capture it, it can’t be shared. If you don’t share it, it won’t add to our intelligence. So, as you’re learning how to podcast or blog or set up a wiki, give a thought to how these tools can be used to multiply our effectiveness.

I ask you to do this because we’re getting close to a critical point in the digital revolution – something I’ll cover in greater detail when I talk to you again on Thursday afternoon. Where we are right now is at an inflection point. Things are very fluid, and could go almost any direction. That’s why it’s so important we learn from each other: in that pooled knowledge is the kind of intelligence which can help us to make better decisions about the digital revolution in education. The kinds of decisions which will lead to better outcomes for kids, fewer headaches for administrators, and a growing confidence within the teaching staff.

Don’t get me wrong: these tools aren’t a panacea. Far from it. They’re simply the best tools we’ve got, right now, to help us confront the range of thorny issues raised by the transition to digital education. You can spend three days here, and go back to your own schools none the wiser. Or, you can share what you’ve learned and leave here with the best that everyone has to offer.

There’s a word for this process, a word which powers Wikipedia and a hundred thousand other websites: “crowdsourcing”. The basic idea is encapsulated in an old proverb: “Many hands make light work.” The two hundred of you, here today, can all pitch in and make light work for yourselves. Or not.

Let me tell you another story, which may help seal your commitment to share what you know. In May of 1999, Silicon Valley software engineer John Swapceinski started a website called “Teacher Ratings.” Individuals could visit the site and fill in a brief form with details about their school, and their teacher. That done, they could rate the teacher’s capabilities as an instructor. The site started slowly, but, as is always the case with these sorts of “crowdsourced” ventures, as more ratings were added to the site, it became more useful to people, which meant more visitors, which meant more ratings, which meant it became even more useful, which meant more visitors, which meant more ratings, etc.

Somewhere in the middle of this virtuous cycle the site changed its name to “RateMyProfessors.com” and changed hands twice. For the last two years, RateMyProfessors.com has been owned by MTV, which knows a thing or two about youth markets, and can see one in a site that has nine million reviews of one million teachers, professors and instructors in the US, Canada and the UK.

Although the individual action of sharing some information about an instructor seems innocuous enough, in aggregate the effect is entirely revolutionary. A student about to attend university in the United States can check out all of her potential instructors before she signs up for a single class. She can choose to take classes only with those instructors who have received the best ratings – or, rather more perversely, only with those instructors known to be easy graders. The student is now wholly in control of her educational opportunities, going in eyes wide open, fully cognizant of what to expect before the first day of class.

Although RateMyProfessors.com has enlightened students, it has made the work of educational administrators exponentially more difficult. Students now talk, up and down the years, via the recorded ratings on the site. It isn’t possible for an institution of higher education to disguise an individual who happens to be a world-class researcher but a rather ordinary lecturer. In earlier times, schools could foist these instructors on students, who’d be stuck for a semester. This no longer happens, because RateMyProfessors.com effectively warns students away from the poor-quality teachers.

This one site has undone all of the neat work of tenure boards and department chairs throughout the entire world of academia. A bad lecturer is no longer a department’s private little secret, but publicly available information. And a great lecturer is no longer a carefully hoarded treasure, but a hot commodity on a very public market. The instructors with the highest ratings on RateMyProfessors.com find themselves in demand, receiving outstanding offers (with tenure) from other universities. All of this plotting, which used to be hidden from view, is now fully revealed. The battle for control over who stands in front of the classroom has now been decisively lost by the administration in favor of the students.

Whether it’s Wikipedia, or RateMyProfessors.com, or the promise of your own work over these next three days, Douglas Engelbart’s original vision of intelligence augmentation holds true: it is possible for us to pool our intellectual resources, and increase our problem-solving capacity. We do it every time we use Wikipedia; students do it every time they use RateMyProfessors.com; and I’m asking you to do it, starting right now. Good luck!