Hyperconnected Education

I: Connect / Disconnect

Recently, I had the opportunity to deliver a lecture at the University of Sydney.  I always consider teaching a two-way street: there’s an opportunity to learn as much from my students as they learn from me.  Sometimes I simply watch what they do, learning from that which new behaviors are spreading across the culture.  Other times I take advantage of a captive audience to run an ethnographic survey.  With eighty eager students ready to do my bidding, I worked up a few questions about mobile usage patterns within their age group.

First, I ascertained their ages – ranging mostly between eighteen and twenty-two, with a cluster around nineteen and twenty.  Then I asked them, “How old were you when you first got a mobile of your own?”  One student got her first mobile at nine years of age, while another waited until nineteen.  The median tells the story: half of the students owned a mobile of their own by around eleven and a half years of age.
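
For those who like to see the arithmetic laid bare, the median is nothing more than the midpoint of the sorted responses – a minimal sketch, with illustrative ages standing in for the students’ actual answers:

```python
# A minimal sketch of the survey arithmetic.  The ages below are
# illustrative stand-ins, not the actual responses from the class.
ages_at_first_mobile = [9, 10, 10, 11, 11, 11, 12, 12, 13, 14, 15, 19]

def median(values):
    """Return the midpoint of the sorted values."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:                 # odd count: the middle element
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2   # even count: average the middle pair

print(median(ages_at_first_mobile))      # 11.5 for this illustrative class
```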

When I shared this result with some colleagues on Twitter, they responded, “That seems a bit old.”  And it does – precisely because these students, now around twenty, got their first mobiles roughly eight and a half years ago, on average.  This survey looks back to 2003 – the year that I arrived in Australia – rather than at the present moment.

Another survey, conducted last year, shows how much has changed, so quickly.  Thirty-seven percent of children between Kindergarten and Year 2 have their own mobile (of some sort), with one fifth having access to a smartphone.  By Year 8, that figure has risen to eighty-five percent, with fully one-third using smartphones.

Since the introduction of the mobile, thirty years ago, the average age of first ownership has steadily dropped.  For many years the device was simply too expensive to be given to any but the children of the wealthiest families.  Today, an Android smartphone can be purchased outright for little more than a hundred dollars, plus thirty dollars a month for carriage.  With the exception of the poorest Australians, price is no longer a barrier to mobile ownership.  As the price barrier fell, the age of first mobile ownership tumbled with it, from eleven years old in 2003 to something closer to eight today.

The resistance to mobile ownership in the sub-eight-year-old set will only be overcome as the devices themselves become more appropriate to children with less developed cognitive skills.  Below age eight, the mobile morphs from a general-purpose communications device to a type of networked tether, allowing the parent and the child to remain in a constant high state of situational awareness about each other’s comings and goings.  Only a few mobiles have been designed to serve the needs of the young child.  The market has not been mature enough to support a broad array of child-friendly devices, nor have the carriers developed plans which make mobile ownership in that age group an attractive option.  This will inevitably happen, and from the statistics, that day can not be very far off: the resistance to the mobile in this age group will be designed away.

There is no real end in sight.  The younger the child, the more the mobile assumes the role of the benevolent watcher, a sensor continually reporting the condition of the child to the parent.  We already use radio-frequency baby monitors to listen to our children as they fuss in their cribs; a mobile provides the same capability by different means.  This sensor – which will also track the child’s heartbeat, temperature, and other vital signs – will grow smaller and less power-hungry, until, at some point in the next fifteen years, a child will receive their first mobile moments after they pop out of the womb.  That mobile will be integrated into the hospital tag slipped around their foot.

It is an absolute inevitability that sometime within the next decade, every single child entering primary school will come bearing their own mobile.  They will join the rest of us in being hyperconnected – directly and immediately connected to everyone of importance to them.  Why should Australian children be any different from the rest of us?  Mobile subscription rates in Australia exceed 120% – more than one per person, even counting all those currently too young or too old to use a mobile.  Within a generation, being human and being connected will be seen as synonymous.

The next years are an interregnum, the few heartbeats between the ‘before time’ – when none of us were connected – and a thoroughly hyperconnected afterward.  This is the moment when we must make the necessary pedagogical and institutional adjustments to a pervasively connected culture.  That survey from last year found that even at Kindergarten level, two-thirds of parents were willing to buy a mobile for their children – if schools integrated the device into their pedagogy.  But the survey also pointed to opposition within the schools themselves:

“When we asked administrators about the likelihood of them allowing their students to use their own mobile devices for instructional purposes at school this year, a resounding 65% of principals said ‘no way!’”

School administrators overwhelmingly hold the comforting belief that the transition into hyperconnectivity can be prevented, forestalled, or simply banned.  A decade ago most schools banned the mobile; within the last few years, mobiles have been permitted with specific restrictions around how and when they can be used.  A few years from now, there will be no effective way to silence the mobile, anywhere (except in specific instances I will speak to later), because so much of our children’s lives will have become contingent upon the continuous connection it affords.

Like King Canute, we can not hold back the tide.  We must prepare for the rising waters.  We must learn to swim within the sea of connectivity.  Or we will drown.

II: Share / Overshare

When people connect, they begin to share.  This happens automatically, an expression of the instinctive human desire to communicate matters of importance.  Give someone an open channel and they’ll transmit everything they see that they think could be of any interest to anyone else.  At the beginning, this sharing can look quite unfocused – bad jokes and cute kittens – but as time passes, we teach one another those things we consider important enough to share, by sharing them.  Sharing, driven by need, amplified by technology, reaches every one of us, through our network of connections.  We both give and receive: from each according to their knowledge, to each according to their need.

Sharing has amplified the scope of our awareness.  We can find and connect to others who share our interests, increasing our awareness of those interests.  The parent-child bond is the most essential of all our interests, so parents are loading their children up with the technologies of connection, gaining a constant ‘situational awareness’ of a depth which makes them the envy of ASIO.  The mobile tether becomes eyes and ears and capability, both lifeline and surrogate.  The child uses the mobile to share experiences – both actively and passively – and the parent, wherever they may be, ‘hovers’, watching and guiding.

This ‘helicopter parenting’ was difficult to put into practice before hyperconnectivity, because vigilance required presence.  The mobile has cut the cord, allowing parental hypervigilance to become a pervasive feature of the educational environment.  As the techniques for this electronic hypervigilance become less expensive and easier to use, they will become the accepted practice for child raising.

Intel Fellow and anthropologist Dr. Genevieve Bell spent a day in a South Korean classroom a few years ago, interviewing children whose parents had given them mobiles with GPS tracking capabilities – so those parents always knew the precise location of their child.  When Bell asked the students if they found this constant monitoring threatening, one set of students pointed to another student, who didn’t have a tracking mobile, saying,  “Her parents don’t love her enough to care where she is.”  In the context of the parent-child bond, something that appears Orwellian transforms into the ultimate security blanket.

A friend in Sydney has a child in Kindergarten, a precocious boy who finds the classroom environment alternately boring and confronting.  She’s been called in to speak with the teacher a few times, because of his disruptive behavior – behavior he links to bullying by another classmate.  The teacher hasn’t seen the behavior, or perhaps thinks it doesn’t merit her attention, leaving the boy increasingly frustrated, dreading every day at school.

In conversation with my friend, I realized that her child felt alone and undefended in the classroom.  How might that change?  Imagine that before he leaves for school, his mother affixes a small button to his school uniform, perhaps on the collar.  This button would have a camera, microphone and mobile transmitter within it, continuously recording and transmitting a live stream directly from the child’s collar to the parent’s mobile – all day long.  The child wouldn’t have to set it up, or do anything at all.  It would simply work – and my friend would have eyes and ears wherever her child went.  If there was trouble – bullying, or anything else – my friend would see it as it happened, and would be able to send a recording along to her son’s teacher.

This is not science fiction.  It is not even far away.  Every smartphone has all of the technology needed to make this happen.  Although a bit bulkier than I’ve described, it could all be done today.  Not long ago, I purchased a $50 toy ‘spy watch’ which records 20 minutes of video.  My friend could equip her son with that toy, asking him to record anything he thought important.  Shrinking it down to the size of a button and adding mobile capability will come in time.  When such a device hits the market, parents will find it irresistible – because it finally gives them eyes in the back of their head.

We need to ask ourselves whether this technological tethering is good for either parent or child.  Psychologist Sherry Turkle, who has explored the topic of children and technology longer than anyone else, believes that this constant close connectivity keeps the child from exploring their own boundaries, artificially extending the helplessness of childhood by amplifying the connection between parent and child.  Connection has consequence: to be connected is to be affected by that connection.  A small child might gain a sense of freedom from an electronic tether, but an adolescent might develop a dependency on that connection that could interfere with their adult development.  Because hyperconnectivity is such a recent condition, we don’t have the answers to these questions.  But these questions need to be asked.

This connection has broad consequences for educators.  Two years ago I heard a teacher in Victoria relate the following story: In a secondary school classroom, one student had failed to turn in their assignment.  This wasn’t the first time it had happened, so the teacher had a bit of a go at the student.  As the teacher harangued the student, he reached into his knapsack, pulled out his mobile, and punched a few buttons.  When the connection was made, he said, “You listen to the bitch,” and held the phone away from his face, toward his teacher.

Connection and sharing rewire and short-circuit the relationships we have grown accustomed to within the classroom.  How can a teacher maintain discipline while constantly being trumped by a child tethered to a hypervigilant parent?  How can a child gain independence while so tethered?  How can a parent gain any peace of mind while constantly monitoring the activities of their child?  All of these new dynamics are persistent features of the 21st-century classroom.  All are already present, but none are as yet pervasive.  We have some time to think about our response to hyperconnectivity.

III: Learn / You, Me, and Everyone We Know

A few years ago, both Stanford University and the Massachusetts Institute of Technology (my alma mater) made the revolutionary decision to publicly post the lesson plans, homework assignments, and recordings of lectures for all of the classes offered at their schools.  Why, some wondered aloud, would anyone pay the $40,000 a year in tuition and fees, when you could get the content for nothing?  This question betrays a fundamental misunderstanding of education in the 21st century: knowledge is freely available, but the mentoring which makes that knowledge apprehensible and useful remains a precious resource.  It’s wonderful to have a lesson plan, but it’s essential to be able to work with someone who has a deep understanding of the material.

This is the magic in education, the je ne sais quoi that makes it a profoundly human experience, and stubbornly resistant to automation.  We have no shortage of material: nearly four million English-language articles in Wikipedia, 2500 videos on Khan Academy (started, it should be noted, by an MIT graduate), tens of thousands of lessons in everything from cooking to knitting to gardening to home renovation on YouTube, and sites like SkillShare, which connect those who have specialist knowledge to those who want it.  Yet, even with this embarrassment of riches, we still yearn for the opportunity to conspire, to breathe the same air as our mentor, while they, by the fact of their presence, transmit mastery.  If this sounds a wee bit mystical, so be it: education is the most human of all our behaviors, and we do not wholly understand the why and how of it.

Who shall educate the educators?  All of the materials so far created have been affordances for students, to make their lives easier.  If it helps an educator, that’s a nice side benefit, but never the main game.  In this sense, nearly all online educational resources are profoundly populist, pointing directly to the student, ignoring both educators and educational institutions.  Hyperconnectivity has removed all of the friction which once made it difficult to connect directly to students, but has thus far ignored the teacher.  In the back of a classroom, students can tap on a mobile and correct the errors in a teacher’s lecture, but can the teacher get peer review of that same material?  Theoretically, it should be easy.  In practice, we’re still waiting.

I recently had the good fortune to be a judge at Sydney Startup Weekend, where technology entrepreneurs pitch their ideas, then spend 48 frenetic hours bringing them to life.  The winning project, ClassMate, directly addresses the Educator-to-Educator (E2E) connection.  Providing a platform for a teacher to upload and store their lesson plans, ClassMate allows teachers to share those lesson plans within a school, a school system, or as broadly as desired – and even to charge for them.

This kind of sharing gives every teacher access to a wealth of lesson plans on a broad variety of topics.  As the National Curriculum rolls out over the next few years, the taxonomy of subject areas within it can act as an organizing taxonomy for the sharing of those lesson plans.  Searching through thousands of lesson plans need not be a flat keyword hunt, like a Google search, but something far more specific and focused, drawn from the arc of the National Curriculum.  With the National Curriculum as an organizing principle, the best lesson plans for any particular node within the Curriculum will quickly rise to the top.
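
To make that concrete, here is a minimal sketch of the idea – the curriculum node codes, plan titles and ratings are my own inventions for illustration, not ClassMate’s actual data model:

```python
# A minimal sketch of lesson plans organized by curriculum taxonomy.
# Node codes, titles and ratings are invented for illustration.
from collections import defaultdict

plans_by_node = defaultdict(list)       # curriculum node -> [(rating, title)]

def share_plan(node, title, rating):
    """File a lesson plan against a node of the curriculum taxonomy."""
    plans_by_node[node].append((rating, title))

def best_plans(node, limit=3):
    """The best-rated plans for a curriculum node rise to the top."""
    return [title for rating, title in
            sorted(plans_by_node[node], reverse=True)[:limit]]

share_plan("Maths/Year7/Fractions", "Fraction walls", rating=4.8)
share_plan("Maths/Year7/Fractions", "Pizza fractions", rating=4.2)
print(best_plans("Maths/Year7/Fractions"))   # ['Fraction walls', 'Pizza fractions']
```

Nothing here is technically hard; the real work lies in agreeing on the taxonomy, which is precisely what the National Curriculum supplies.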

This means that every teacher in Australia (and the world) will soon have access to the best class materials from the best teachers throughout Australia.  Teachers will be able to spend more time interacting with students as the hard slog of creating lecture materials becomes a shared burden.  Yet teachers are no different from their students; the best lesson plan is in itself insufficient.  Teachers need to work with other teachers, need to be mentored by other teachers, need to conspire with other teachers, in order to make their own pedagogical skills commensurate with the resources on offer to them.  Professional development must go hand-in-hand with an accelerated sharing of knowledge, lest this sharing only amplify the imbalances between classrooms and between schools.

Victoria’s Department of Education and Early Childhood Development has the ULTRANET, designed to facilitate the sharing of materials between teachers, students and parents.  ULTRANET is not particularly user-friendly, presenting a barrier to its widespread acceptance by a community of educators who may not be broadly comfortable with technology.  Educational sharing systems must be designed from the perspective of those who use them – teachers and students – and not from a set of KPIs on a tick sheet.  One reason why I have high hopes for ClassMate is that the designer is himself a primary school teacher in New South Wales, solving a problem he faces every day.

Sharing between educators creates a platform for a broader sharing between students and educators.  At present almost all of that sharing happens inside the classroom and is immediately lost.  We need to think about how to capture our sharing moments, making them available to students.  Consider the recording device I mentioned earlier – although it works nicely for a child in Kindergarten, it becomes even more useful for someone preparing for an HSC/VCE exam, giving them a record of the mentoring they received.  This too can be shared broadly, where relevant (and often where it isn’t relevant at all, but funny, or silly, or sad, or what have you), so that everything is captured, everywhere, and shared with everyone.

If this sounds a bit like living in a fishbowl, I can only recommend that you get used to it.  Educators will be hit particularly hard by hyperconnectivity, because they spend their days working with students who have never known anything else.  Students copy from one another, teachers borrow from teachers, administrations and departments imitate what they’ve seen working in other schools and other states.  This is how it has always been, but now that this is no longer difficult, it is accelerating wildly, transforming the idea of the classroom.

IV: All Together Now

Let’s now turn to the curious case of David Cecil, a 25-year-old unemployed truckie from Cowra, arrested by the Australian Federal Police on a series of charges which could see him spend a decade imprisoned.  With nothing but time on his hands, and the Internet at his fingertips, Cecil sought out the thing he found most interesting, found the others interested in it, and listened to what they said.  The Internet became a classroom, and the people he connected to became his mentors.  Dedicated to learning, online for as much as twenty hours a day, Cecil took himself from absolute neophyte to practiced expert in just a few months, an autodidactic feat we can all admire in theory, if somewhat less in practice.  On the 27th of July, Cecil was arrested during a dawn AFP raid on his home, charged with breaking into and obtaining control over the computer systems of Internet service provider Platform Networks.

Cecil might have gotten away with it, but his ego wot dun him.  Cecil went back to the same bulletin boards and chat sites where he learned his ‘1337 skills’, and bragged about his exploits.  Given that these boards are monitored by the forces of law and order, it was only a matter of time before the inevitable arrest.  While it might seem the very apex of stupidity to publicly brag about breaking the law, the desire to share what we know – and be seen as an expert – frequently overrules our sense of self-preservation.  We are born to share what we know, and wired to learn from what others share.  That’s no less true for ourselves than for that ideal poster child for Constructivism, David Cecil.

Now that this figurative internal wiring has become external and literal, now that the connections no longer end with our family, friends and colleagues, but extend pervasively and continuously throughout the world, we have the capability, in principle, of learning anything known by anyone anywhere, of gaining the advantage of their experiences to guide us through our own.  We have for the first time the possibility of some sort of collective mind – not in a wacky science-fiction sense, but with something so mundane it barely rates a mention today – Wikipedia.

In its 3.7 million articles, Wikipedia offers up the factual core of human knowledge – not always perfectly, but what it loses in perfection it makes up for in ubiquity.  Every person with a smartphone now walks around with the collected essence of human knowledge in their hands, accessible within a few strokes of a fingertip.  This is unprecedented, and means that we now have the capability to make better decisions than ever before, because, at every step along the way, we can refer to this factual base, using it to guide us into doing the best we can at every moment.

That is the potential for this moment, but we do not yet operate in those terms.  We teach children to research, as if this were an activity distinct from the rest of our experience, when, in reality, research is the core activity of the 21st century.  We need to think about the era, just a few years hence, when everyone has a very smart and very well connected mobile in hand from birth.  We need to think about how that mobile becomes the lever which moves the child into knowledge.  We need to think about our practice and how it is both undermined and amplified by the device and the network it represents.

If we had to do this as individuals – or even within a single school administration – we would quickly be overwhelmed by the pace of events beyond the schoolhouse walls.  To be able to encounter this accelerating tsunami of connection and empowerment we must take the medicine ourselves, using the same tools as our students and their parents.  We have agency, but only when we face the future squarely, and admit that everything we once knew about the logic of the classroom – its flows of knowledge and power – has gone askew, and that our future lies within the network, not in opposition to it.

In ten years’ time, how many administrators will say “No way!” when asked if the mobile has a place in their curriculum?  (By then, it will be equivalent to asking if reading has a place in the curriculum.)  This is the stone that must be moved, the psychological block that dams connectivity and creates a dry, artificial island where there should instead be a continuous sea of capability.  The longer that dam remains in place, the more force builds up behind it.  Either we remove the stone ourselves, or the pressures of a hyperconnected world will simply rip through the classroom, wiping it away.

Your students are not alone on their journey into knowledge and mastery.  Beside them, educators blaze a new trail into a close connectivity, leveraging a depth of collective experience to accelerate the search for solutions.  We must search and research and share and learn and put that learning into practice.  We must do this continuously so we can stay in front of this transition, guiding it toward meaningful outcomes for both students and educators.  We must reinvent education while hyperconnectivity reinvents us.

CODA: Disconnect

Finally, let me also be a Devil’s Advocate.  Connectivity is amazing and wonderful and empowering, but so is its opposite.  In fifteen years we have moved from a completely disconnected culture into a completely connected culture.  We believe, a priori, that connection is good.  Yet connection comes with a cost.  To be connected is to be deeply involved with another, and outside one’s self.  This is fine – some of the time.  But we also need a space where we are wholly ourselves, contingent upon no one else.

Our children and our students do not know this.  The value of silence and quiet may seem obvious to us, but they have never lived in a disconnected culture.  They only know connection.  Being disconnected frightens them – both because of its unfamiliarity, and because it seems to hold within it the possibility of facing dangers without the assistance of others.  Furthermore, this generation has no positive role model of disconnection to look to.  They see their parents responding to text messages at the dinner table, answering emails from in front of the television, running for the mobile every time it rings.  Parents have no boundaries around their connectivity; by their actions, this is what they have taught their children.

Educators must instill some basic rules – a ‘hygiene of connectivity’ – in the next generation.  We need to highlight disconnection as something to be longed for, a positive feature of life.  We need to teach them ways to manage their connectivity, so that they become masters of their connections, not servants.  And we need to set the example in our own actions.  If we do that, we can give the next generation an important insight into how to be whole in a hyperconnected world.

The Soul of Web 2.0

Introduction: In The Beginning

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus PageMaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful.  But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’.  Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century.  Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web.  Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real.  I was one of them.  From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu.  This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web.  The Web wasn’t as functional as Xanadu – copyright management was a solved problem in Xanadu, whereas on the Web it continues to bedevil us – and Xanadu’s links were two-way affairs: you could follow a link from its destination back to its source.  But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died.  The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released.  ‘The perfect is the enemy of the good’, and nowhere is that clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s.  When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits per second to 56 kilobits per second.  That was the line for all of the traffic heading from one coast to the other.  I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, an increase of more than a million-fold.  And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand.  And I wasn’t impressed.  In July 1993 very little content existed for the Web – just a handful of sites, mostly academic.  Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense.  I walked away from the computer that July afternoon wanting more.  Hypertext systems I’d seen before.  What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy.  Instead of a handful of sites, there were now hundreds.  There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site on the list.  By Friday evening I was finished.  I had surfed the entire Web.  It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993.  Then things began to explode.

From October on I became a Web evangelist.  My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic.  That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities.  As someone familiar walked in the door at the Salon, I walked up to them and took them over to my computer.  “What’s something you’re interested in?” I’d ask.  They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica” and I’d go to the newly-created category index of the Web, known as Yahoo!, and still running out of a small lab on the Stanford University campus, type in their interest, and up would come at least a few hits.  I’d click on one, watch the page load, and let them read.  “Wow!” they’d say.  “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos.  All I did was hook people by their own interests.  This, in January 1994 in San Francisco, is what would happen throughout the world in January 1995 and January 1996, and it is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented.  The individual, with their needs, their passions, their opinions, their desires and their goals, is always paramount.  We tend to forget this, or overlook it, or just plain ignore it.  We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate.  It’s not that we should ignore these considerations, but they are always secondary.  The Web is a ground for being.  Individuals do not present themselves as receptacles to be filled.  They are souls looking to be fulfilled.  This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web.  I’ll now break some of these lessons down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication.  I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late-night missives from insomniac friends, that sort of thing.  As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared.  The sharing instinct is innate and immediate.  We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it.  We’ve always known this; it’s part of being human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another.  It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby.  Everyone carries that hundred and fifty around inside of them.  Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with.  It’s automatic, requires no thought.  We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing.  Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten.  We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us.  It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does it work with creations that live on the Web, or are otherwise constrained?  We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter.  You have to do more than request sharing.  You have to think through the entire goal of sharing, from the user’s perspective.  Are they sharing this because it’s interesting?  Are they sharing this because they want company?  Are they sharing this because it’s a competition or a contest or collaborative?  Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project.  What is it about the design of your work that excites them to share it with others?  Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential?  In other words, is there space only for one, or is there room to spread the word?  Why would anyone want to share your work?  You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question.  How will your work be shared?

Your works do not exist in isolation.  They are part of a continuum of other works.  Where does your work fit into that continuum?  How do the instructor and student approach that work?  Is it a top-down mandate?  Or is it something that filters up from below as word-of-mouth spreads?  How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected.  Is it simply via email – do all the students have email addresses?  Do they know the email addresses of their friends?  Or do you want your work shared via SMS?  A QRCode, perhaps?  Or Facebook or Twitter or, well, who knows?  And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?
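
Mechanically, at least, sharing is cheap to support.  Here is a minimal sketch of share links for a piece of work – the Facebook and Twitter URL formats shown are those in use at the time of writing and may well change, and the lesson URL is invented for illustration:

```python
# A minimal sketch of share links for a piece of work.  The Facebook
# and Twitter URL formats are those in use at the time of writing and
# may well change; the lesson URL is invented for illustration.
from urllib.parse import quote

def share_links(url, title):
    return {
        "email":    "mailto:?subject=%s&body=%s" % (quote(title), quote(url, safe="")),
        "twitter":  "http://twitter.com/home?status=%s" % quote(title + " " + url, safe=""),
        "facebook": "http://www.facebook.com/sharer.php?u=%s" % quote(url, safe=""),
    }

for channel, link in share_links("http://example.edu/lesson/42",
                                 "Watersports of Mesoamerica").items():
    print(channel, link)
```

The hard part isn’t generating the links; it’s deciding which channels your users actually inhabit.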

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today.  It becomes painfully obvious when it’s been overlooked.  For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend.  There was simply no way to do that.  (I don’t know if this has changed recently.)  That made the iPad version of The Australian significantly less functional than its website – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes.  The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts.  Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd.  Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this.  Everyone’s there, but no one is wholly aware of anyone else’s presence.  You might see an edit to a page, or a new post on the discussion for a particular topic, but that’s as close as people come to one another.  Most of the connecting for the Wikipedians – the folks who, behind the scenes, make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose.  These are the social networks: Facebook, MySpace, LinkedIn, and so on.  In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication.  But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless.  Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting.  There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths.  Where you can poll your friends on Facebook, on Twitter you can poll a planet.  How do I solve this problem?  Where should I eat dinner tonight?  What’s going on over there?  These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles.  It is not Facebook-versus-Twitter; it is not tight connections versus loose connections.  It’s a bit of both.  Where does your work benefit from a tight collective of connected individuals?  Is it some sort of group problem-solving?  A creative activity that really comes into its own when a whole band of people play together?  Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms?  When a task has you constantly thinking of your friends, that’s the sort of task which benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog, or playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’ – that’s the kind of task which benefits from loose connectivity.  Not every project will need both kinds of connecting, but almost every one will benefit from one or the other.  We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed.  (We are also more likely to reinforce each other’s prejudices and preconceptions, but that’s another longstanding matter which technology can not help but amplify.)  Life is meaningful because we, together, give it meaning.  Life is bearable because we, together, bear the load for one another.  Human life is human connection.

The Web today is all about connecting.  That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it.  So how do your projects allow your users to connect?  Does your work leave them alone, helpless, friendless, and lonely?  Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic?  Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica.  That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun.  It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute.  There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy.  (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.)  By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of all of what its contributors have to offer.  For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook.  This is less because of Wikipedia-as-a-resource than it is because we all share a sense of pride-in-ownership of Wikipedia.  Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles.  Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us.  This is a powerful logic, an attraction which transcends the rational.  People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine that a time will come when Wikipedia is complete.  If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia.  Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than in 2005.  With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights.  Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth.  It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought.  It will mean that we have captured the better part of human knowledge in a form accessible to all.  That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’.  It is a work-in-progress.  Google understands this, and releases interminable beta versions of every product.  More than this, it means that nothing needs to offer all the answers.  I would suggest that nothing should offer all the answers.  Leaving space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – demands less work from you.  It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility.  There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done.  This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system.  The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up.  TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments.  RateMyProfessors.com is the holy terror of the academy in the United States.  Each of these websites has had to design systems which allow users to self-regulate peer contributions.  In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags a contribution for later moderation.  Wikipedia promulgated a directive that strongly encourages contributors to provide a footnote linking to supporting material.  TripAdvisor gives anonymous reviewers a lower ranking.  eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade.  Each of these is a social solution to a social problem.
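
The ‘report this post’ button is the simplest of these mechanisms; a minimal sketch of the bookkeeping behind it (the threshold and data shapes are invented for illustration) might look like this:

```python
# A minimal sketch of 'report this post' self-regulation.  The threshold
# and data shapes are invented for illustration.
flags = {}                  # post_id -> number of reader reports

HIDE_THRESHOLD = 3          # reports before a post is pulled for review

def report(post_id):
    """A reader flags a contribution for later moderation."""
    flags[post_id] = flags.get(post_id, 0) + 1
    return flags[post_id] >= HIDE_THRESHOLD     # True: hide pending review

def moderation_queue():
    """Posts the community has flagged heavily, awaiting a human decision."""
    return [post for post, count in flags.items() if count >= HIDE_THRESHOLD]
```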

Web2.0 is not a technology.  It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil.  It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention.  Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that the Web2.0 concept which should guide you in your design work is iteration.  Nothing is ever complete, nor ever perfect.  The perfect is the enemy of the good, so if you wait for perfection, you will never release.  Instead, watch your users: see if they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work.  In their more uncharitable moments, do they abuse the freedoms you have given them?  If so, how can you redesign your work, and ‘nudge’ them into better behavior?  It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem.  And although Moses came down from Mount Sinai with all ten commandments, you can not and should not expect to get it right on a first pass.  Instead, release, observe, adapt, and re-release.  All releases are soft releases, everything is provisional, and nothing is quite perfect.  That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter.  Although they seem to be similar, they couldn’t be more different.  Facebook is ‘greedy’, hoarding all of the data provided by its users – all of their photographs and conversations – keeping it entirely for itself.  If you want access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page.  Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut your application down if they don’t like it, or perceive it as somehow competitive with Facebook.  Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach.  From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users.  Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data.  Twitter provided very clear (and remarkably straightforward) instructions on how to access their data, and threw the gates open wide.

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks.  People have developed computer programs that send Tweets when the program is about to crash, created vast art projects which allow the public to participate from anywhere around the world, or even a little belt worn by a pregnant woman which sends out a Tweet every time the baby kicks!  It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself.  Twitter has become a building block: when you write a program which needs to send a message, you use Twitter.  Facebook isn’t a building block.  It’s a monolith.
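
That openness was concrete.  At the time of writing, anyone could fetch a public user’s recent Tweets with a single unauthenticated HTTP request – a minimal sketch, using the v1 endpoint available then (it has since been retired in favour of authenticated access):

```python
# A minimal sketch of Twitter's openness, using the unauthenticated v1
# endpoint available at the time of writing.  (That endpoint has since
# been retired; today's API requires registration and authentication.)
import json
from urllib.request import urlopen

url = ("http://api.twitter.com/1/statuses/user_timeline.json"
       "?screen_name=wikipedia&count=5")

with urlopen(url) as response:
    tweets = json.load(response)

for tweet in tweets:
    print(tweet["created_at"], tweet["text"])
```

A dozen lines, no permission sought: that is what it means to be a building block.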

How do you build for openness?  Consider: another position the user might occupy is someone trying to use your work as a building block within their own project.  Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again?  Or is it opaque, seamless, and closed?  What about the data you collect, data the user has generated?  Where does that live?  Can it be exported and put to work in another application, or on another website?  Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful).  The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them.  If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
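
RSS is a good example of how little such openness costs.  A minimal, valid RSS 2.0 feed exposing the same data your pages already display is just a few lines of generated XML – the channel and item values here are invented for illustration:

```python
# A minimal sketch of exposing your own data as RSS 2.0, one of the
# standards mentioned above.  Channel and item values are invented
# for illustration.
items = [("Fraction walls", "http://example.edu/lesson/42"),
         ("Pizza fractions", "http://example.edu/lesson/43")]

rss = ['<?xml version="1.0" encoding="UTF-8"?>',
       '<rss version="2.0"><channel>',
       '<title>Shared lesson plans</title>',
       '<link>http://example.edu/</link>',
       '<description>Recently shared lesson plans</description>']
for title, link in items:
    rss.append('<item><title>%s</title><link>%s</link></item>' % (title, link))
rss.append('</channel></rss>')

print("\n".join(rss))   # any feed reader - or another program - can consume this
```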

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’  You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user.  These are not precisely the same Web2.0 domains others might identify.  That’s because Web2.0 has become a very ill-defined term.  It can mean whatever we want it to mean.  But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another.  In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create.  We need to make room for them.  If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

Paperworks / Padworks

I: Paper, works

At the end of May I received an email from a senior official at the Victorian Department of Education and Early Childhood Development.  DEECD was in the midst of issuing an RFP, looking for new content to populate FUSE (Find, Use, Share, Education), an important component of ULTRANET, the mega-über-supremo educational intranet meant to solve everyone’s educational problems for all time.  Or, well, perhaps I overstate the matter.  But it could be a big deal.

The respondents to the RFP were organizations who already had working relationships with DEECD, and were therefore both familiar with DEECD processes and vetted through their earlier relationships.  This meant that the whole period from RFP to submissions could be telescoped down to just a bit less than three weeks.  The official asked me if I’d be interested in being one of the external reviewers for these proposals as they passed through an official evaluation process.  I said I’d be happy to do so, and asked how many proposals I’d have to review.  “I doubt it will be more than thirty or forty,” he replied.  Which seemed quite reasonable.

As is inevitably the case, most of the proposals landed in the DEECD mailbox just a few hours before the deadline for submissions.  But the RFP didn’t result in thirty or forty proposals.  The total came to almost ninety.  All of which I had to review and evaluate in the thirty-six hours between the time they landed in my inbox and the start of the formal evaluation meeting.  Oh, and first I needed to print them out, because there was no way I’d be able to do that much reading in front of my computer.

Let’s face it – although we do sit and read our laptop screens all day long, we rarely read anything longer than a few paragraphs.  If it passes 300 words, it tips the balance into ‘tl;dr’ (too long; didn’t read) territory, and unless it’s vital for our employment or well-being, we tend to skip it and move along to the next little tidbit.  Having to sit and read through well over nine hundred pages of proposals on my laptop was a bridge too far. I set off to the print shop around the corner from my flat, to have the whole mess printed out.  That took nearly 24 hours by itself – and cost an ungodly sum.  I was left with a huge, heavy box of paper which I could barely lug back to my flat.  For the next 36 hours, this box would be my ball and chain.  I’d have to take it with me to the meeting in Melbourne, which meant packing it for the flight, checking it as baggage, lugging it to my hotel room, and so forth, all while trying to digest its contents.

How the heck was that going to work?

This is when I looked at my iPad.  Then I looked back at the box.  Then back at the iPad.  Then back at the box.  I’d gotten my iPad barely a week before – when they first arrived in Australia – and I was planning on taking it on this trip, but without an accompanying laptop.  This, for me, would be a bit of a test.  For the last decade I’d never traveled anywhere without my laptop.  Could I manage a business trip with just my iPad?  I looked back at the iPad.  Then at the box.  You could practically hear the penny drop.

I immediately began copying all these nine hundred-plus pages of proposals and accompanying documentation from my laptop to the storage utility Dropbox.  Dropbox gives you 2 GB of free Internet storage, with an option to rent more space, if you need it.  Dropbox also has an iPad app (free) – so as soon as the files were uploaded to Dropbox, I could access them from my iPad.

I should take a moment and talk about the model of the iPad I own.  I ordered the 16 GB version – the smallest storage size offered by Apple – but I got the 3G upgrade, paired with Telstra’s most excellent pre-paid NextG service.  My rationale was that I imagined this iPad would be a ‘cloud-centric’ device.  The ‘cloud’ is a term that’s come into use quite recently.  It means software is hosted somewhere out there on the Internet – the ‘cloud’ – rather than residing locally on your computer.  Gmail is a good example of software that’s ‘in the cloud’.  Facebook is another.  Twitter, another.  Much of what we do with our computers – iPad included – involves software accessed over the Internet.  Many of the apps for sale in Apple’s iTunes App Store are useless or pointless without an Internet connection – these are the sorts of applications which break down the neat boundary between the computer and the cloud.  Cloud computing has been growing in importance over the last decade; by the end of this one it will simply be the way things work.  Your iPad will be your window onto the cloud, onto everything you have within that cloud: your email, your documents, your calendar, your contacts, etc.

I like to live in the future, so I made sure that my iPad didn’t have too much storage – which forces me to use the cloud as much as possible.  In this case, that was precisely the right decision, because I ditched the ten-kilo box of paperwork and boarded my flight to Melbourne with my iPad at my side.  I pored through the proposals, one after another, bringing them up in Dropbox, evaluating them, making some notes in my (paper) notebook, then moving along to the next one.  My iPad gave me a fluidity and speed that I could never have had with that box of paper.

When I arrived at my hotel, two more large boxes were waiting for me.  Here again were the proposals, carefully ordered and placed into several large ring binders.  I’d be expected to tote these to the evaluation meeting.  Fortunately, that was only a few floors above my hotel room.  Even so, it was a bit of a struggle to get those boxes and my luggage into the elevator and up to the meeting room.  I put those boxes down – and never looked at them again.  As the rest of the evaluation panel dug through their boxes to pull out the relevant proposals, I made a few motions with my fingertips, and found myself on the same page.

Yes, they got a bit jealous.

We finished the evaluation on time and quite successfully, and at the end of the day I left my boxes with the DEECD coordinator, thanking her for her hard work printing all these materials, but begging off.  She understood completely.  I flew home lighter than I would otherwise have been, had I stuck to paper.

For at least the past thirty years – which is about the duration of the personal computer revolution – people have been talking about the advent of the paperless office.  Truth be told, we use more paper in our offices than ever before, our printers constantly at work with letters, notices, emails, and so forth.  We haven’t been able to make the leap to a paperless office – despite our comprehensive ability to manipulate documents digitally – because we lacked something that could actually replace paper.  Computers as we’ve known them simply can’t replace a piece of paper.  For a whole host of reasons, it just never worked.  To move to a paperless office – and a paperless classroom – we had to invent something that could supplant paper.  We have it now.  After a lot of false starts, tablet computing has finally arrived – and it’s here to stay.

I can sit here, iPad in hand, and have access to every single document that I have ever written.  You will soon have access to every single document you might ever need, right here, right now.  We’re not 100% there yet – but that’s not the fault of the device.  We’re going to need to make some adjustments to our IT strategies, so that we can have a pervasively available document environment.  At that point, your iPad becomes the page which contains all other pages within it.  You’ll never be without the document you need at the time you need it.

Nor will we confine ourselves to text.  The world is richer than that.  iPad is the lightbox that contains all photographs within it; it is the television which receives every bit of video produced by anyone – professional or amateur – ever.  It is already the radio (via the Pocket Tunes app) which receives almost every major radio station broadcasting anywhere in the world.  And it is every one of a hundred-million-plus websites and maybe a trillion web pages.  All of this is here, right here in the palm of your hand.

What matters now is how we put all of this to work.

II: Pad, Works

Let’s project ourselves into the future just a little bit – say around ten years.  It’s 2020, and we’ve had iPads for a whole decade.  The iPads of 2020 will be vastly more powerful than the ones in use today, because of something known as Moore’s Law, which holds that computing power doubles roughly every twenty-four months.  Ten years is five doublings, or 32 times the power.  That rule extends to the display as well as the computer.  The ‘Retina Display’ recently released on Apple’s iPhone 4 shows us where that technology is going – displays so fine that you can’t make out the individual pixels with your eye.  The screen of your iPad version 11 will be visually indistinguishable from a sheet of paper.  The device itself will be thinner and lighter than the current model.  Battery technology improves at about 10% a year, so half the weight of the battery – the heaviest component of the iPad – will disappear.  You’ll still get at least ten hours of use, because that much is considered essential to the user experience.  And you’ll still be connected to the mobile network.

The mobile network of 2020 will look quite different from the mobile network of 2010.  Right now we’re just on the cusp of moving into 4th generation mobile broadband technology, known colloquially as LTE, or Long-Term Evolution.  Where you might get speeds of 7 megabits per second with NextG mobile broadband – under the best conditions – LTE promises speeds of 100 megabits.  That’s as good as a wired connection – as fast as anything promised by the National Broadband Network!  In a decade’s time we’ll be moving through 5th generation and possibly into 6th generation mobile technologies, with speeds approaching a gigabit – a billion bits per second.  That may sound like a lot, but it follows the same curve: a gigabit is ten times what LTE promises, and well over a hundred times today’s best NextG speeds.  Moore’s Law has a broad reach, and will transform every component of the iPad.
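None of this arithmetic is mysterious; the projections above reduce to a few doublings and divisions.  Here is the whole back-of-the-envelope calculation as a few lines of Python – the 64 GB figure is the largest iPad Apple currently sells, and the speeds are the ones quoted above:

```python
# Back-of-the-envelope Moore's Law arithmetic for the decade 2010-2020.
years = 10
doubling_period = 2                      # power doubles about every 24 months
factor = 2 ** (years / doubling_period)  # five doublings
print(factor)                            # 32.0

# Storage: today's largest iPad holds 64 GB.
print(64 * factor / 1024, "TB")          # 2.0 TB

# Bandwidth: today's NextG vs. LTE vs. a projected gigabit.
nextg_mbps, lte_mbps, future_mbps = 7, 100, 1000
print(future_mbps / lte_mbps)            # 10.0 - ten times LTE's promise
print(round(future_mbps / nextg_mbps))   # 143 - versus NextG at its best
```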

iPad will have thirty-two times the storage – not that we’ll need it, given that we’ll be connected to the cloud at gigabit speeds – but if it’s there, someone will find a use for the two terabytes or more in our iPads.  (Perhaps a full copy of Wikipedia?  Or all of the books published before 1915?)  All of this will still cost just $700.  If you want to spend less – and have a correspondingly less-powerful device – you’ll have that option.  I suspect you’ll be able to pick up an entry-level device – the equivalent of iPad 7, perhaps – for $49 at JB HiFi.

What sorts of things will the iPad of 2020 be capable of?  How do we put all of that power to work?  First off, iPad will be able to see and hear in meaningful ways.  Voice recognition and computer vision are two technologies on the threshold of becoming ‘twenty-year overnight successes’.  We can already speak to our computers and, most of the time, they can understand us.  With devices like the Xbox Kinect, cameras allow the computer to see the world around it, and recognize bits of it.  Your iPad will hear you, understand your voice, and follow your commands.  It will also be able to recognize your face, your motions, and your emotions.

It’s not clear that computers as we know them today – that is, desktops and laptops – will be common in a decade’s time.  They may still be employed in very specialized tasks.  For almost everything else, we will be using our iPads.  They’ll rarely leave our sides.  They will become so pervasive that in many environments – around the home, in the office, or at school – we will simply have a supply of them sufficient to the task.  When everything is so well connected, you don’t need to have personal information stored in a specific iPad.  You will be able to pick up any iPad and – almost instantaneously – the custom features which mark that device as uniquely yours will be downloaded into it.
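What might ‘pick up any iPad and make it yours’ look like under the hood?  Nobody has built this yet, so the sketch below is purely hypothetical – every name, URL, and field in it is invented – but the shape of the idea is simple: the device stores nothing permanent, and a small profile fetched from the cloud personalises it on demand.

```python
# A purely hypothetical sketch of a stateless, cloud-personalised device.
# The endpoint, profile format, and field names are all invented.
import json
import urllib.request

PROFILE_SERVICE = "https://example.com/profiles"   # imaginary cloud endpoint

def fetch_profile(user_id: str) -> dict:
    """Pull a user's profile (apps, settings, document index) from the cloud."""
    with urllib.request.urlopen(f"{PROFILE_SERVICE}/{user_id}.json") as resp:
        return json.load(resp)

def personalise(device: dict, user_id: str) -> dict:
    """Apply a cloud profile to whichever blank device is at hand."""
    profile = fetch_profile(user_id)
    device["apps"] = profile["apps"]            # apps stream in as needed
    device["settings"] = profile["settings"]    # wallpaper, accounts, layout
    device["documents"] = profile["documents"]  # pointers only; content stays remote
    return device

# Usage: personalise({}, "your_user_id") - hand the iPad back, and the
# next person's profile overwrites yours just as quickly.
```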

All of this is possible.  Whether any of it eventuates depends on a whole host of factors we can’t yet see clearly.  People may find voice recognition more of an annoyance than an affordance.  The idea of your iPad watching you might seem creepy to some people.  But consider this: I have a good friend with two elderly parents – his dad is in his early 80s, his mom is in her mid-70s.  He lives in Boston while they live in Northern California.  But he needs to keep in touch; he needs to have a look in.  Next year, when iPad acquires a forward-facing camera – so it can be used for video conferencing – he’ll buy them an iPad and stick it to the wall of their kitchen with Velcro, so that he can ring in anytime and check on them, and they can ring him, anytime.  It’s a bit ‘Jetsons’, when you think about it.  And that’s just what will happen next year.  By 2020 the iPad will be able to track your progress around the house, monitor what prescriptions you’ve taken (or missed), and note whether you’ve left the house, and for how long.  It’ll be a basic accessory, necessary for everyone caring for someone in their final years – or in their first ones.

Now that we’ve established the basic capabilities and expectations for this device, let’s imagine them in the hands of students everywhere throughout Australia.  No student, however poor, will be without their own iPad – the Government of the day will see to that.  These students of 2020 are at least as well connected as you are, as their parents are, as anyone is.  To them, iPads are not new things; they’ve always been around.  They grew up in a world where touch is the default interface.  A computer mouse, for them, seems as archaic as a manual typewriter does to us.  They’re also quite accustomed to being immersed within a field of very-high-speed mobile broadband.  They just expect it to be ‘on’, everywhere they go, and expect that they will have access to it as needed.

How do we make education in 2020 meet their expectations?  This is not the universe of ‘chalk and talk’.  This is a world where the classroom walls have been effectively leveled by the pervasive presence of the network, and by a device which can display anything on that network.  This is a world where education can be provided anywhere, on demand, as called for.  This is a world where the constructivist premise of learning-by-doing can be implemented beyond year two.  Where a student working on an engine can study an exploded three-dimensional model of its components while engaging in a conversation with an instructor half a continent away.  Where a student learning French can actually engage with a French student learning English, and do so with little more than the press of a few buttons.  Where a student learning about the Eureka Stockade can survey the ground, iPad in hand, and find within the device hidden depths to the history.  iPad is the handheld schoolhouse, and it is, in many ways, the thing that replaces the chalkboard, the classroom, and the library.

But iPad does not replace the educator.  We need to be very clear on that, because even as educational resources multiply beyond our wildest hopes – more on that presently – students still need someone to guide them into understanding.  The more we virtualize the educational process, the more important and singular our embodied interactions become.  Some of this will come from far away – the iPad offers opportunities for distance education undreamt of just a few years ago – but much more of it will be close up.  Even if the classroom does not survive (and I doubt it will fade away completely in the next ten years, but it will begin to erode), we will still need a place for an educator/mentor to come into contact with students.  That’s been true since the days of Socrates (probably long before that), and it’s unlikely to change anytime soon.  We learn best when we learn from others.  We humans are experts in mimesis, in learning by imitation.  That kind of learning requires us to breathe the same air together.

No matter how much power we gain from the iPad, no matter how much freedom it offers, no device offers us freedom from our essential nature as social beings.  We are born to work together, we are designed to learn from one another.  iPad is an unbelievably potent addition to the educator’s toolbox, but we must remember not to let it cloud our common sense.  It should be an amplifier, not a replacement, something that lets students go further, faster than before.  But they should not go alone.

The constant danger of technology is that it can interrupt the human moment.  We can be too busy checking our messages to see the real people right before our eyes.  This is the dilemma that will face us in the age of the iPad.  Governments will see them as cost-saving devices, something that could substitute for the human touch.  If we lose touch, if we lose the human moment, we also lose the biggest part of our ability to learn.

III: The Work of Nations

We can reasonably predict that this is the decade of the tablet, and the decade of mobile broadband.  The two of them fuse in the iPad, to produce a platform which will transform education, allowing it to happen anywhere a teacher and a student share an agreement to work together.  But what will they be working on?  Next year we’ll see the rollout of the National Curriculum, which specifies the material to be covered in core subject areas in classrooms throughout the nation.

Many educators view the National Curriculum as a mandate for a bland uniformity, a lowest-common-denominator approach to instruction which will simply leave the teacher working point-by-point through the curriculum’s arc.  This is certainly not the intent of the project’s creators.  Dr. Evan Arthur, who heads up the Digital Education Revolution taskforce in the Department of Education, Employment and Workplace Relations, publicly refers to the National Curriculum as a ‘greenfields’, as though all expectations were essentially phantoms of the mind, a box we draw around ourselves rather than one that objectively exists.

The National Curriculum outlines the subject areas to be covered, but says very little, if anything, about pedagogy.  Instructors and school systems are free to exercise their own best judgment in selecting an approach appropriate to their students, their educators, and their facilities.  That’s good news, and it means that any blandness that creeps into pedagogy under the National Curriculum is more a reflection of the educator than of the educational mandate.

Precisely because it places educators and students throughout the nation on the same page, the National Curriculum also offers up an enormous opportunity.  We know that all year nine students in Australia will be covering a particular suite of topics.  This means that every educator and every student throughout the nation can be drawing from, and contributing to, a ‘common wealth’ of shared materials – podcasts of lectures, educational chatrooms, lesson plans, and on and on.  As the years go by, this wealth of material will grow as more teachers and more students add their own contributions to it.  The National Curriculum isn’t a mandate, per se; it’s better to think of it as an empty Wikipedia.  All the article headings are there, all the taxonomy, all the cross-references, but none of the content.  The next decade will see us all build up that base of content, so that by 2020 a decade’s worth of work will have resulted in something truly outstanding to offer both educators and students in their pursuit of curriculum goals.
Well, maybe.

I say all of this as if it were a sure thing.  But it isn’t.  Everyone secretly suspects the National Curriculum will ruin education.  I ask that we see things differently.  The National Curriculum could be the savior of education in the 21st century, but in order to travel the short distance in our minds between where we are (and where we will go if we don’t change our minds) and where we need to be, we need to think of every educator in Australia as a contributor of value.  More than that, we need to think of every student in Australia as a contributor of value.  That’s the vital gap that must be crossed.  Educators spend endless hours working on lesson plans and instructional designs – they should be encouraged to share this work.  Many of them are too modest or too scared to trumpet their own hard yards, but it is work that educators and students across the nation can benefit from.  Students, as they pass through the curriculum, create their own learning materials, which must be preserved, where appropriate, for future years.

We should do this.  We need to do this.  Right now we’re dropping the best of what we have on the floor as teachers retire or move on in their careers.  This is gold that we’re letting slip through our fingers.  We live in an age where we only lose something when we neglect to capture it.  We could let ourselves off easy here, because until now we haven’t had a framework to capture and share this pedagogy.  But now we have the means to capture it, a platform for sharing – the Ultranet – and a tool which brings access to everyone – the iPad.  We’ve never had these stars aligned in such a way before.  Only just now – in 2010 – is it possible to dream such big dreams.  It won’t even cost much money.  Yes, the state and federal governments will be investing in iPads and superfast broadband connections for the schools, but everything else comes from a change in our behavior, from a new sense of the full value of our activities.  We need to look at ourselves not merely as dispensers of education to receptive students, but as engaged participant-creators working to build a lasting body of knowledge.

In so doing we tie everything together, from library science to digital citizenship, within an approach that builds shared value.  It allows a student in Bairnsdale to collaborate with another in Lorne, both working through a lesson plan developed by an educator in Katherine.  Or a teacher in Lakes Entrance to offer her expertise to a classroom in Maffra.  These kinds of things have been possible before, but the National Curriculum gives us the reason to do them, and iPad gives us the infrastructure.  We can dream wild, and imagine how to practice some ‘creative destruction’ in the classroom – tearing down its walls in order to make the classroom a persistent, ubiquitous feature of the environment, to bring education everywhere it’s needed, to everyone who needs it, whenever they need it.

This means that all of the preceding is really part of a larger transformation: from education as a singular event that happens between ages six and twenty-two, to something persistent and ubiquitous, where ‘lifelong learning’ isn’t a catchphrase but rather a set of skills students begin to acquire as soon as they land in pre-kindy.  The wealth of materials we will create as we learn how to share the burden of the National Curriculum across the nation has value far beyond the schoolhouse.  In a nation of immigrants, it makes sense to have these materials available, because someone is always arriving in the middle of their life and struggling to catch up to, and integrate themselves within, the fabric of the nation.  Education is one way that this happens.  People also need increasing flexibility in their career choices, to suit a much more fluid labor market.  This means that we continuously need to learn something new – or something, perhaps, that we didn’t pay much attention to when we should have.  If we can share our learning, we can close this gap.  We can bring the best of what we teach to everyone who has the need to know.

And there we are.  But before I conclude, I should bring up the most obvious point – one so obvious that we might forget it.  The iPad is an excellent toy.  Please play with it.  I don’t mean use it.  I mean explore it.  Punch all the buttons.  Do things you shouldn’t do.  Press the big red button that says, “Don’t press me!”  Just make sure you have a backup first.

We know that children learn by exploration – that’s the foundation of Constructivism – but we forget that we ourselves also learn by exploration.  The joy we feel when we play with our new toy is the feeling a child has when confronting a box of LEGOs or a new video game – it’s the joy of exploration, the joy of learning.  That joy is foundational to us.  If we didn’t love learning, we wouldn’t be running things around here.  We’d still be in the trees.

My favorite toys on my iPad are Pocket Universe – which creates a 360-degree real-time observatory on your iPad; Pulse News – which brings some beauty to my RSS feeds; Observatory – which turns my iPad into a bit of an orrery; Air Video – which allows me to watch videos streamed from my laptop to my iPad; and GoodReader – the one app you simply must spend $1.19 on, because it is the most useful app you’ll ever own.  These are my favorites, but I own many others, and enjoy all of them.  There are literally tens of thousands to choose from – some educational, some just for fun.  That’s the point: all work and no play makes iPad a dull toy.

So please, go and play.  As you do, you’ll come to recognize the hidden depths within your new toy, and you’ll probably feel that penny drop, as you come to realize that this changes everything.  Or can, if we can change ourselves.