Paperworks / Padworks

I: Paper, works

At the end of May I received an email from a senior official at the Victorian Department of Education and Early Childhood Development (DEECD).  DEECD was in the midst of issuing an RFP, looking for new content to populate FUSE (Find, Use, Share, Education), an important component of ULTRANET, the mega-über-supremo educational intranet meant to solve everyone’s educational problems for all time.  Or, well, perhaps I overstate the matter.  But it could be a big deal.

The respondents to the RFP were organizations that already had working relationships with DEECD, and so were both familiar with DEECD processes and already vetted through those earlier relationships.  This meant that the entire RFP-to-submission timeline could be telescoped down to just a bit less than three weeks.  The official asked me if I’d be interested in being one of the external reviewers for these proposals as they passed through an official evaluation process.  I said I’d be happy to do so, and asked how many proposals I’d have to review.  “I doubt it will be more than thirty or forty,” he replied.  Which seemed quite reasonable.

As is inevitably the case, most of the proposals landed in the DEECD mailbox just a few hours before the deadline for submissions.  But the RFP didn’t result in thirty or forty proposals.  The total came to almost ninety.  All of which I had to review and evaluate in the thirty-six hours between the time they landed in my inbox and the start of the formal evaluation meeting.  Oh, and first I needed to print them out, because there was no way I’d be able to do that much reading in front of my computer.

Let’s face it – although we do sit and read our laptop screens all day long, we rarely read anything longer than a few paragraphs.  If it passes 300 words, it tips the balance into ‘tl;dr’ (too long; didn’t read) territory, and unless it’s vital for our employment or well-being, we tend to skip it and move along to the next little tidbit.  Having to sit and read through well over nine hundred pages of proposals on my laptop was a bridge too far. I set off to the print shop around the corner from my flat, to have the whole mess printed out.  That took nearly 24 hours by itself – and cost an ungodly sum.  I was left with a huge, heavy box of paper which I could barely lug back to my flat.  For the next 36 hours, this box would be my ball and chain.  I’d have to take it with me to the meeting in Melbourne, which meant packing it for the flight, checking it as baggage, lugging it to my hotel room, and so forth, all while trying to digest its contents.

How the heck was that going to work?

This is when I looked at my iPad.  Then I looked back at the box.  Then back at the iPad.  Then back at the box.  I’d gotten my iPad barely a week before – when they first arrived in Australia – and I was planning on taking it on this trip, but without an accompanying laptop.  This, for me, would be a bit of a test.  For the last decade I’d never traveled anywhere without my laptop.  Could I manage a business trip with just my iPad?  I looked back at the iPad.  Then at the box.  You could practically hear the penny drop.

I immediately began copying all nine hundred-plus pages of proposals and accompanying documentation from my laptop to the storage utility Dropbox.  Dropbox gives you 2 GB of free Internet storage, with an option to rent more space if you need it.  Dropbox also has a free iPad app – so as soon as the files were uploaded to Dropbox, I could access them from my iPad.
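
If you’d rather script this step than drag files into the Dropbox folder, a minimal sketch using today’s Dropbox Python SDK looks something like this (a hypothetical illustration only; the access token and folder names are placeholders, not my actual setup):

```python
# Illustrative sketch: push a folder of PDF proposals into Dropbox so they
# show up in the iPad app.  Token and paths are assumptions, not real values.
import os
import dropbox

dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")   # hypothetical OAuth 2 token

LOCAL_DIR = "proposals"         # folder of PDFs on the laptop (assumed)
REMOTE_DIR = "/rfp-proposals"   # destination folder in Dropbox (assumed)

for name in sorted(os.listdir(LOCAL_DIR)):
    if not name.lower().endswith(".pdf"):
        continue
    with open(os.path.join(LOCAL_DIR, name), "rb") as f:
        # Files larger than ~150 MB need an upload session instead.
        dbx.files_upload(f.read(), f"{REMOTE_DIR}/{name}",
                         mode=dropbox.files.WriteMode("overwrite"))
    print("uploaded", name)
```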

I should take a moment and talk about the model of the iPad I own.  I ordered the 16 GB version – the smallest storage size offered by Apple – but I got the 3G upgrade, paired with Telstra’s most excellent pre-paid NextG service.  My rationale was that I imagined this iPad would be a ‘cloud-centric’ device.  The ‘cloud’ is a term that’s come into use quite recently.  It means software is hosted somewhere out there on the Internet – the ‘cloud’ – rather than residing locally on your computer.  Gmail is a good example of software that’s ‘in the cloud’.  Facebook is another.  Twitter, another.  Much of what we do with our computers – iPad included – involves software accessed over the Internet.  Many of the apps for sale in Apple’s iTunes App Store are useless or pointless without an Internet connection – these are the sorts of applications which break down the neat boundary between the computer and the cloud.  Cloud computing has been growing in importance over the last decade; by the end of this one it will simply be the way things work.  Your iPad will be your window onto the cloud, onto everything you have within that cloud: your email, your documents, your calendar, your contacts, etc.

I like to live in the future, so I made sure that my iPad didn’t have too much storage – which forces me to use the cloud as much as possible.  In this case, that was precisely the right decision, because I ditched the ten-kilo box of paperwork and boarded my flight to Melbourne with my iPad at my side.  I pored over the proposals, one after another, bringing them up in Dropbox, evaluating them, making some notes in my (paper) notebook, then moving along to the next one.  My iPad gave me a fluidity and speed that I could never have had with that box of paper.

When I arrived at my hotel, two more large boxes were waiting for me.  Here again were the proposals, carefully ordered and placed into several large ring binders.  I’d be expected to tote these to the evaluation meeting.  Fortunately, that was only a few floors above my hotel room.  That said, it was a bit of a struggle to get those boxes and my luggage into the elevator and up to the meeting room.  I put those boxes down – and never looked at them again.  As the rest of the evaluation panel dug through their boxes to pull out the relevant proposals, I did a few motions with my fingertips, and found myself on the same page.

Yes, they got a bit jealous.

We finished the evaluation on time and quite successfully, and at the end of the day I left my boxes with the DEECD coordinator, thanking her for her hard work printing all these materials, but begging off.  She understood completely.  I flew home, lighter than I would otherwise have been, had I stuck to paper.

For at least the past thirty years – which is about the duration of the personal computer revolution – people have been talking about the advent of the paperless office.  Truth be told, we use more paper in our offices than ever before, our printers constantly at work with letters, notices, emails, and so forth.  We haven’t been able to make the leap to a paperless office – despite our comprehensive ability to manipulate documents digitally – because we lacked something that could actually replace paper.  Computers as we’ve known them simply can’t replace a piece of paper.  For a whole host of reasons, it just never worked.  To move to a paperless office – and a paperless classroom – we had to invent something that could supplant paper.  We have it now.  After a lot of false starts, tablet computing has finally arrived – and it’s here to stay.

I can sit here, iPad in hand, and have access to every single document that I have ever written.  You will soon have access to every single document you might ever need, right here, right now.  We’re not 100% there yet – but that’s not the fault of the device.  We’re going to need to make some adjustments to our IT strategies, so that we can have a pervasively available document environment.  At that point, your iPad becomes the page which contains all other pages within it.  You’ll never be without the document you need at the time you need it.

Nor will we confine ourselves to text.  The world is richer than that.  iPad is the lightbox that contains all photographs within it; it is the television which receives every bit of video produced by anyone – professional or amateur – ever.  It is already the radio (Pocket Tunes app) which receives almost every major radio station broadcasting anywhere in the world.  And it is every one of a hundred-million-plus websites and maybe a trillion web pages.  All of this is here, right here in the palm of your hand.

What matters now is how we put all of this to work.

II: Pad, works

Let’s project ourselves into the future just a little bit – say around ten years.  It’s 2020, and we’ve had iPads for a whole decade.  The iPads of 2020 will be vastly more powerful than the ones in use today, because of something known as Moore’s Law.  This law observes that the number of transistors on a chip – and, roughly, the power of our computers – doubles about every twenty-four months.  Ten years is five doublings, or 32 times.  That rule extends to the display as well as the computer.  The ‘Retina Display’ recently released on Apple’s iPhone 4 shows us where that technology is going – displays so fine that you can’t make out the individual pixels with your eye.  The screen of your iPad version 11 will be visually indistinguishable from a sheet of paper.  The device itself will be thinner and lighter than the current model.  Battery technology improves at about 10% a year, so half the weight of the battery – which is the heaviest component of the iPad – will disappear.  You’ll still get at least ten hours of use – that’s considered essential to your experience as a user.  And you’ll still be connected to the mobile network.

The mobile network of 2020 will look quite different from the mobile network of 2010.  Right now we’re just on the cusp of moving into 4th generation mobile broadband technology, known colloquially as LTE, or Long-Term Evolution.  Where you might get speeds of 7 megabits per second with NextG mobile broadband – under the best conditions – LTE promises speeds of 100 megabits.  That’s as good as a wired connection – as fast as anything promised by the National Broadband Network!  In a decade’s time we’ll be moving through 5th generation and possibly into 6th generation mobile technologies, with speeds approaching a gigabit, a billion bits per second.  That may sound like a lot, but it represents roughly 32 times the peak capacity of today’s mobile broadband networks.  Moore’s Law has a broad reach, and will transform every component of the iPad.

iPad will have thirty-two times the storage – not that we’ll need it, given that we’ll be connected to the cloud at gigabit speeds – but if it’s there, someone will find a use for the two terabytes or more included in our iPads.  (Perhaps a full copy of Wikipedia?  Or all of the books published before 1915?)  All of this will still cost just $700.  If you want to spend less – and have a correspondingly less-powerful device – you’ll have that option.  I suspect you’ll be able to pick up an entry-level device – the equivalent of iPad 7, perhaps – for $49 at JB HiFi.
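
The arithmetic behind those projections is easy to check.  A minimal back-of-the-envelope sketch, using the essay’s own assumptions (a doubling every twenty-four months, a 64 GB top-end iPad in 2010, and the roughly 30-megabit peak mobile broadband implied by the ‘roughly 32 times’ claim):

```python
# Moore's Law projection using this essay's round numbers (illustrative only).
YEARS = 10
DOUBLING_PERIOD = 2                       # years per doubling
factor = 2 ** (YEARS / DOUBLING_PERIOD)   # 2^5 = 32

print(f"Growth over {YEARS} years: {factor:.0f}x")
print(f"Storage:   {64 * factor / 1024:.0f} TB")   # 64 GB   -> ~2 TB
print(f"Bandwidth: {30 * factor:.0f} Mbps")        # ~30 Mbps -> ~1 Gbps
```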

What sorts of things will the iPad 10 be capable of?  How do we put all of that power to work?  First off, iPad will be able to see and hear in meaningful ways.  Voice recognition and computer vision are two technologies which are on the threshold of becoming ‘twenty-year overnight successes’.  We can already speak to our computers, and, most of the time, they can understand us.  With devices like the Xbox Kinect, cameras allow the computer to see the world around it, and recognize bits of it.  Your iPad will hear you, understand your voice, and follow your commands.  It will also be able to recognize your face, your motions, and your emotions.

It’s not clear that computers as we know them today – that is, desktops and laptops – will be common in a decade’s time.  They may still be employed in very specialized tasks.  For almost everything else, we will be using our iPads.  They’ll rarely leave our sides.  They will become so pervasive that in many environments – around the home, in the office, or at school – we will simply have a supply of them sufficient to the task.  When everything is so well connected, you don’t need to have personal information stored in a specific iPad.  You will be able to pick up any iPad and – almost instantaneously – the custom features which mark that device as uniquely yours will be downloaded into it.

All of this is possible.  Whether any of it eventuates depends on a whole host of factors we can’t yet see clearly.  People may find voice recognition more of an annoyance than an affordance.  The idea of your iPad watching you might seem creepy to some people.  But consider this: I have a good friend with two elderly parents: his dad is in his early 80s, his mom is in her mid-70s.  He lives in Boston while they live in Northern California.  But he needs to keep in touch; he needs to be able to look in on them.  Next year, when iPad acquires a forward-facing camera – so it can be used for video conferencing – he’ll buy them an iPad and install it on the wall of their kitchen, stuck on there with Velcro, so that he can ring in anytime and check on them, and they can ring him, anytime.  It’s a bit ‘Jetsons’, when you think about it.  And that’s just what will happen next year.  By 2020 the iPad will be able to track your progress around the house, monitor what prescriptions you’ve taken (or missed), whether you’ve left the house, and for how long.  It’ll be a basic accessory, necessary for everyone caring for someone in their final years – or in their first ones.

Now that we’ve established the basic capabilities and expectations for this device, let’s imagine them in the hands of students everywhere throughout Australia.  No student, however poor, will be without their own iPad – the Government of the day will see to that.  These students of 2020 are at least as well connected as you are, as their parents are, as anyone is.  To them, iPads are not new things; they’ve always been around.  They grew up in a world where touch is the default interface.  A computer mouse, for them, seems as archaic as a manual typewriter does to us.  They’re also quite accustomed to being immersed within a field of very-high-speed mobile broadband.  They just expect it to be ‘on’, everywhere they go, and expect that they will have access to it as needed.

How do we make education in 2020 meet their expectations?  This is not the universe of ‘chalk and talk’.  This is a world where the classroom walls have been effectively leveled by the pervasive presence of the network, and a device which can display anything on that network.  This is a world where education can be provided anywhere, on demand, as called for.  This is a world where the constructivist premise of learning-by-doing can be implemented beyond year two.  Where a student working on an engine can stare at an exploded, three-dimensional model of its components while engaging in a conversation with an instructor half a continent away.  Where a student learning French can actually engage with a French student learning English, and do so with little more than the press of a few buttons.  Where a student learning about the Eureka Stockade can survey the ground, iPad in hand, and find within the device hidden depths to the history.  iPad is the handheld schoolhouse, and it is, in many ways, the thing that replaces the chalkboard, the classroom, and the library.

But iPad does not replace the educator.  We need to be very clear on that, because even as educational resources multiply beyond our wildest hopes – more on that presently – students still need someone to guide them into understanding.  The more we virtualize the educational process, the more important and singular our embodied interactions become.  Some of this will come from far away – the iPad offers opportunities for distance education undreamt of just a few years ago – but much more of it will be close up.  Even if the classroom does not survive (and I doubt it will fade away completely in the next ten years, but it will begin to erode), we will still need a place for an educator/mentor to come into contact with students.  That’s been true since the days of Socrates (probably long before that), and it’s unlikely to change anytime soon.  We learn best when we learn from others.  We humans are experts in mimesis, in learning by imitation.  That kind of learning requires us to breathe the same air together.

No matter how much power we gain from the iPad, no matter how much freedom it offers, no device offers us freedom from our essential nature as social beings.  We are born to work together, we are designed to learn from one another.  iPad is an unbelievably potent addition to the educator’s toolbox, but we must remember not to let it cloud our common sense.  It should be an amplifier, not a replacement, something that lets students go further, faster than before.  But they should not go alone.

The constant danger of technology is that it can interrupt the human moment.  We can be too busy checking our messages to see the real people right before our eyes.  This is the dilemma that will face us in the age of the iPad.  Governments will see them as cost-saving devices, something that could substitute for the human touch.  If we lose touch, if we lose the human moment, we also lose the biggest part of our ability to learn.

III:  The Work of Nations

We can reasonably predict that this is the decade of the tablet, and the decade of mobile broadband.  The two of them fuse in the iPad, to produce a platform which will transform education, allowing it to happen anywhere a teacher and a student share an agreement to work together.  But what will they be working on?  Next year we’ll see the rollout of the National Curriculum, which specifies the material to be covered in core subject areas in classrooms throughout the nation.

Many educators view the National Curriculum as a mandate for a bland uniformity, a lowest-common-denominator approach to instruction, which will simply leave the teacher working point-by-point through the curriculum’s arc.  This is certainly not the intent of the project’s creators.  Dr. Evan Arthur, who heads up the Digital Education Revolution taskforce in the Department of Education, Employment and Workplace Relations, publicly refers to the National Curriculum as a ‘greenfields’, as though all expectations were essentially phantoms of the mind, a box we draw around ourselves, rather than one that objectively exists.

The National Curriculum outlines the subject areas to be covered, but says very little if anything about pedagogy.  Instructors and school systems are free to exercise their own best judgment in selecting an approach appropriate to their students, their educators, and their facilities.  That’s good news, and means that any blandness that creeps into pedagogy because of the National Curriculum is more a reflection of the educator than the educational mandate.

Precisely because it places educators and students throughout the nation onto the same page, the National Curriculum also offers up an enormous opportunity.  We know that all year nine students in Australia will be covering a particular suite of topics.  This means that every educator and every student throughout the nation can be drawing from and contributing to a ‘common wealth’ of shared materials – podcasts of lectures, educational chatrooms, lesson plans, and on and on.  As the years go by, this wealth of material will grow as more teachers and more students add their own contributions to it.  The National Curriculum isn’t a mandate, per se; it’s better to think of it as an empty Wikipedia.  All the article headings are there, all the taxonomy, all the cross-references, but none of the content.  The next decade will see us all build up that base of content, so that by 2020, a decade’s worth of work will have resulted in something truly outstanding to offer both educators and students in their pursuit of curriculum goals.

Well, maybe.

I say all of this as if it were a sure thing.  But it isn’t.  Nearly everyone secretly suspects the National Curriculum will ruin education.  I ask that we see things differently.  The National Curriculum could be the savior of education in the 21st century, but in order to travel the short distance in our minds between where we are (and where we will go if we don’t change our minds) and where we need to be, we need to think of every educator in Australia as a contributor of value.  More than that, we need to think of every student in Australia as a contributor of value.  That’s the vital gap that must be crossed.  Educators spend endless hours working on lesson plans and instructional designs – they should be encouraged to share this work.  Many of them are too modest or too scared to trumpet their own hard yards – but their work is something that educators and students across the nation can benefit from.  Students, as they pass through the curriculum, create their own learning materials, which must be preserved, where appropriate, for future years.

We should do this.  We need to do this.  Right now we’re dropping the best of what we have on the floor as teachers retire or move on in their careers.  This is gold that we’re letting slip through our fingers.  We live in an age where we only lose something when we neglect to capture it.  We can let ourselves off easy here, because until now we haven’t had a framework to capture and share this pedagogy.  But now we have the means to capture it, a platform for sharing it – the Ultranet – and a tool which brings access to everyone – the iPad.  We’ve never had these stars aligned in such a way before.  Only just now – in 2010 – is it possible to dream such big dreams.  It won’t even cost much money.  Yes, the state and federal governments will be investing in iPads and superfast broadband connections for the schools, but everything else comes from a change in our behavior, from a new sense of the full value of our activities.  We need to look at ourselves not merely as the dispensers of education to receptive students, but as engaged participant-creators working to build a lasting body of knowledge.

In so doing we tie everything together, from library science to digital citizenship, within an approach that builds shared value.  It allows a student in Bairnsdale to collaborate with another in Lorne, both working through a lesson plan developed by an educator in Katherine.  Or a teacher in Lakes Entrance to offer her expertise to a classroom in Maffra.  These kinds of things have been possible before, but the National Curriculum gives us the reason to do them.  iPad gives us the infrastructure to dream wild, and imagine how to practice some ‘creative destruction’ in the classroom – tearing down its walls in order to make the classroom a persistent, ubiquitous feature of the environment, to bring education everywhere it’s needed, to everyone who needs it, whenever they need it.

This means that all of the preceding is really part of a larger transformation, from education as a singular event that happens between ages six and twenty-two, to something that is persistent and ubiquitous; where ‘lifelong learning’ isn’t a catchphrase, but rather a set of skills students begin to acquire as soon as they land in pre-kindy.  The wealth of materials which we will create as we learn how to share the burden of the National Curriculum across the nation has value far beyond the schoolhouse.  In a nation of immigrants, it makes sense to have these materials available, because someone is always arriving in the middle of their lives and struggling to catch up to and integrate themselves within the fabric of the nation.  Education is one way that this happens.  People also need increasing flexibility in their career choices, to suit a much more fluid labor market.  This means that we continuously need to learn something new, or something, perhaps, that we didn’t pay much attention to when we should have.  If we can share our learning, we can close this gap.  We can bring the best of what we teach to everyone who has the need to know.

And there we are.  But before I conclude, I should bring up the most obvious point – one so obvious that we might forget it.  The iPad is an excellent toy.  Please play with it.  I don’t mean use it.  I mean explore it.  Punch all the buttons.  Do things you shouldn’t do.  Press the big red button that says, “Don’t press me!”  Just make sure you have a backup first.

We know that children learn by exploration – that’s the foundation of Constructivism – but we forget that we ourselves also learn by exploration.  The joy we feel when we play with our new toy is the feeling a child has when he confronts a box of LEGOs, or a new video game – it’s the joy of exploration, the joy of learning.  That joy is foundational to us.  If we didn’t love learning, we wouldn’t be running things around here.  We’d still be in the trees.

My favorite toys on my iPad are Pocket Universe – which creates a 360-degree real-time observatory on your iPad; Pulse News – which brings some beauty to my RSS feeds; Observatory – which turns my iPad into a bit of an orrery; Air Video – which allows me to watch videos streamed from my laptop to my iPad; and GoodReader – the one app you simply must spend $1.19 on, because it is the most useful app you’ll ever own.  These are my favorites, but I own many others, and enjoy all of them.  There are literally tens of thousands to choose from, some of them educational, some just for fun.  That’s the point: all work and no play makes iPad a dull toy.

So please, go and play.  As you do, you’ll come to recognize the hidden depths within your new toy, and you’ll probably feel that penny drop, as you come to realize that this changes everything.  Or can, if we can change ourselves.

What Ever Happened to the Book?

For Ted Nelson

I: Centrifugal Force

We live in the age of networks.  Wherever we are, five billion of us are continuously and ubiquitously connected.  That’s everyone over the age of twelve who earns more than about two dollars a day.  The network has us all plugged into it.  Yet this is only the more recent, and more explicit network.  Networks are far older than this most modern incarnation; they are the foundation of how we think.  That’s true at the most concrete level: our nervous system is a vast neural network.  It’s also true at a more abstract level: our thinking is a network of connections and associations.  This is necessarily reflected in the way we write.

I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982.  Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts.  Like it or not, we implicitly reference other texts with every word we write.  It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts.  It’s the secret to our success.  Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth.  He never got it built.  But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.

As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed qualities as a medium which had hitherto been unsuspected.  While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure.  Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone.  Do we answer?  Do we click and follow?  A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost.  The linear text constantly contends with a secondary, ‘centrifugal’ force, one that tries to tear the reader away from the inertia of the text and into another space.  The more heavily linked a particular hypertext document is, the greater this pressure.

Consider two different documents that might be served up in a Web browser.  One of them is an article from the New York Times Magazine.  It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links.  Many of these links point back to other New York Times articles.  This article stands alone.  It is a hyperdocument, but it has not embraced the capabilities of the medium.  It has not been seduced.  It is a spinster, of sorts, confident in its purity and haughty in its isolation.  This article is hardly alone.  Nearly all articles I could point to from any professional news source display the same separateness, the same resistance to connecting with the medium they employ.  We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized.  Every link presents an escape route, and a potential loss of income.  Hence, links are kept to a minimum, the losses staunched.  Disappointingly, this has become the model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium.  The tone has been set.

On the other hand, consider an average article in Wikipedia.  It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links.  Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they’re unfamiliar with, or explore some tangential bit of knowledge, but there will also be plenty of links that face out, into the rest of the Web.  This is a hyperdocument which has embraced the nature of the medium, which is not afraid of seeing its readers lured away by the pressure of linkage.  Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention.  Without this competition of intentions, Wikipedia is itself an example of another variety of purity, the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.
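
The difference between the two is measurable.  As a rough illustration (not a rigorous study), the sketch below counts the anchor tags on a page using nothing more than Python’s standard library; the URLs are placeholders you would swap for a long-form magazine piece and a Wikipedia article of comparable length.

```python
# Rough 'link density' comparison between two web pages (illustrative only).
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCounter(HTMLParser):
    """Counts <a href="..."> tags as the page is parsed."""
    def __init__(self):
        super().__init__()
        self.links = 0
    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.links += 1

def count_links(url):
    html = urlopen(url).read().decode("utf-8", errors="replace")
    counter = LinkCounter()
    counter.feed(html)
    return counter.links

# Hypothetical URLs -- substitute real articles to run the comparison.
for url in ("https://www.nytimes.com/some-longform-article",
            "https://en.wikipedia.org/wiki/Hypertext"):
    print(url, count_links(url))
```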

Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug.  Landing somewhere with a paucity of links doesn’t constrain your ability to move non-linearly.  If nothing else, the browser’s ‘Back’ button continually offers that opportunity, as do all of your bookmarks, the links that have lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site.  In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads.  In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it.  This is the fundamental tension of all hypertext, but here it is heightened and amplified because it is resisted and forbidden.  It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.

This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents.  It is most obvious in the way we now absorb news.  Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper.  Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on.  We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way.  The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole.  Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper ‘vehicle’.

The newspaper as we have known it has been shredded.  This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext.  We are the ones who feel the lure of the link; no machine can do that.  Newspapers made the brave decision to situate themselves as islands within a sea of hypertext.  Though they might believe themselves singular, they are not the only islands in the sea.  And we all have boats.  That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.

The lure of the link has a two-fold effect on our behavior.  With its centrifugal force, it is constantly pulling us away from wherever we are.  It also presents us with an opportunity cost.  When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment.  If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it.  Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this?  Does my need and interest outweigh all of the other demands upon my attention?  Can I focus?

In most circumstances, we will decline the challenge.  Whatever it is, it is not salient enough, not alluring enough.  It is not so much that we fear commitment as we feel the pressing weight of our other commitments.  We have other places to spend our limited attention.  This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”.  It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.

The emergence of the ‘tl;dr’ phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span.  Attention spans are not shortening: ten-year-olds will still drop everything to read a nine-hundred-page fantasy novel for eight days.  Instead, attention has entered an era of hypercompetition.  Twenty years ago only a few media clamored for our attention.  Now everything from video games to Chatroulette to real-time Twitter feeds to text messages demands our attention.  Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, all figuring into the calculation we make when we decide to go all in or hold back.

The most obvious effect of this hypercompetition for attention is the shortening of the text.  Under the tyranny of ‘tl;dr’, three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment.  More and more, our diet of text comes in these ‘bite-sized’ chunks.  Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything.  The truth is more complex.  Our diet will continue to consist of a mixture of short and long-form texts.  In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form.  It is digestible.  But it need not be vacuous.  Countless specialty blogs deliver highly-concentrated texts to audiences who need no introduction to the subject material.  They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit.  Here, the phenomenon of ‘tl;dr’ reveals its Achilles’ heel: the shorter the text, the less invested you are.  You yield more easily to the centrifugal force.  You are more likely to navigate away.

There is a cost incurred both for substance and the lack thereof.  Such are the dilemmas of hypertext.

II:  Schwarzschild Radius

It appears inarguable that 2010 is the Year of the Electronic Book.  The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market.  Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package.  Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world.  The electronic book is an inevitability.

At this point a question needs to be asked: what’s so electronic about an electronic book?  If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar.  Too familiar.  This is not an electronic book.  This is ‘publishing in light’.  I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning.  An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters.  It doesn’t even necessarily begin with that translation.  Instead, first consider the text qua text.  What is it?  Who is it speaking to?  What is it speaking about?

These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light.  That act of murder would give us less than we had before, because published-in-light texts essentially disavow the medium within which they are situated.  They are less useful than typeset texts, purposely stripped of their utility in order to be shoehorned into a new medium.  This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader.  Nor does it make the electronic book an intrinsically alluring object.  That’s an interesting point to consider, because hypertext is intrinsically alluring.  The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who had any relationship to the text.  If an electronic book does not offer a new relationship to the text, then what precisely is the point?  Portability?  Ubiquity?  These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring.  This is the visible difference between a book that has been printed in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is impossibly alluring.  At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.

Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium.  But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe.  Today’s publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium.  For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.

What does the electronic book look like?  Does it differ at all from the hyperdocuments we are familiar with today?  In fifteen years of design experimentation, we’ve learned a lot of ways to present, abstract and play with text.  All of these are immediately applicable to the electronic book.  The electronic book should represent the best of what 2010 has to offer and move forward from that point into regions unexplored.  The printed volume took nearly fifty years to evolve into its familiar hand-sized editions.  Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book.  We shouldn’t try to constrain our idea of what an electronic book can be based upon what the book has been.  Over the next few years, our innovations will surprise us.  We won’t really know what electronic books look like until we’ve had plenty of time to play with them.

The electronic book will not be immune from the centrifugal force which is inherent to the medium.  Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument.  Yet we come to books with a sense of commitment.  We want to finish them.  But what, exactly, do we want to finish?  The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does.  So does an electronic book have a beginning and an end?  Or is it simply a densely clustered set of texts with a well-defined path traversing them?  From the vantage point of 2010 this may seem like a faintly ridiculous question.  I doubt that will be the case in 2020, when perhaps half of our new books are electronic books.  The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book.  There is no way that the electronic book can remain apart, indifferent and pure.  It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader.  More of a gradient than a boundary.

It remains unclear how any such construction can constitute an economically successful entity.  Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe.  The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen.  This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear.  Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.
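
To make the idea of transclusion a little more concrete, here is a toy sketch of the data model it implies: a hypothetical illustration of the concept, not Xanadu’s actual design.  A document is a sequence of pieces, each either literal text or a live reference to a span of another document; the reference is resolved at view time, so the quoted span never loses its connection to its source – which is what would let attribution (and, in Nelson’s vision, royalties) follow it wherever it is re-used.

```python
# Toy model of 'transclusion': documents quote spans of other documents by
# address, and the span is pulled in live when the document is rendered.
SOURCES = {
    "orwell-1984": "It was a bright cold day in April, and the clocks were striking thirteen.",
}

doc = [
    {"text": "Orwell opens with an impossibility: "},
    {"transclude": "orwell-1984", "start": 39, "end": 72},  # "the clocks were striking thirteen"
    {"text": " - and the reader is off balance at once."},
]

def render(document, sources):
    """Resolve transclusions at view time, keeping each span tied to its source."""
    out = []
    for piece in document:
        if "text" in piece:
            out.append(piece["text"])
        else:
            source_text = sources[piece["transclude"]]
            out.append(source_text[piece["start"]:piece["end"]])
    return "".join(out)

print(render(doc, SOURCES))
```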

We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books.  (For those not familiar with astrophysics, the Schwarzschild Radius marks the boundary of a black hole – its event horizon.  Once you’re on the wrong side, you’re doomed to fall all the way in.)  On one side – our side – things look much as they do today.  Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical.  On the other side, electronic books rapidly become almost completely unrecognizable.  It’s not just the financial model which disintegrates.  As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform.  The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words.  Each of these fragments points directly at its antecedent and descendant – or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each.  The primacy of the single narrative cannot withstand the centrifugal force of the medium, any more than the newspaper or the magazine could.  Texts will present themselves as an intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both.  This will completely confound our expectations of linearity in the text.

We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward – a fault more of our nervous systems than of our browsers.  We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on.  This is the greatest threat to the book: that it simply expands beyond our ability to focus upon it.  Our consciousness can entertain a universe of thought, but it cannot entertain the entire universe at once.  Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it.  With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.

What ever happened to the book?  It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever.  This is not an ending, any more than birth is an ending.  But it is a transition, at least as profound and comprehensive as the invention of moveable type.  It’s our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book.  Transitions are chaotic, but they are also fecund.  The seeds of the new grow in the humus of the old.  (And if it all seems sudden and sinister, I’ll simply note that Nietzsche said that a new era nearly always looks demonic to the age it obsolesces.)

III:  Finnegans Wiki

So what of Aristotle?  What does this mean for the narrative?  It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts.  But what about stories?  From time out of mind we have listened to stories told by the campfire.  The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale.  For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.

Will we lose all of this?  Can narratives stand up against the centrifugal forces of hypertext?  Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form.  The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind.  There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot.  A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity. Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force.  We occasionally stay up all night reading a book that we ‘couldn’t put down’, precisely because of this momentum.  It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.

This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books.  Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet.  How can any text hope to stand against that?

And yet, some do.  Children unplugged to read each of the increasingly-lengthy Harry Potter novels, as teenagers did for the Twilight series.  Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination.  None of this is high literature, but it is literature capable of resisting all our alluring distractions.  This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc.  We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad.  That is one direction, a direction literary publishers will pursue, because that’s where the money lies.

There are two other paths open for literature, nearly diametrically opposed.  The first was taken by JRR Tolkien in The Lord of the Rings.  Although hugely popular, the three-volume work has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating.  Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it.  And although readers do finish the book, in a very real sense they do not leave that universe.  The modern fantasy genre, which Tolkien all but single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations.  Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books.  Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy.  Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers.  This is another direction for the book.  While every author will not be a Tolkien, a few authors will work hard to create a universe so potent and broad that readers will be tempted to inhabit it.  (Some argue that this is the secret of JK Rowling’s success.)

Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext.  There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures.  But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas.  The book was written and published before the digital computer had been invented, yet it even features an innovation reminiscent of hypertext.  That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power.  It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite.  The text is overloaded with meaning, so much so that the mind can’t take it all in.  Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant.  But there is another possibility.  In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed.  In this, Finnegans Wake could be seen as a type of science fiction: not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works), but rather a text that pointed the way to what all texts would become, performance by example.  As texts become electronic, as they melt and dissolve and link together densely, meaning multiplies exponentially.  Every sentence, and every word in every sentence, can send you flying in almost any direction.  The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.

It has been said that all of human culture could be reconstituted from Finnegans Wake.  As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture.  Everything will be there, all strung together.  And that’s what happened to the book.

The Alexandrine Dilemma

I: Crash Through or Crash

We live in a time of wonders, and, more often than not, remain oblivious to them until they fail catastrophically. On the 19th of October, 1999 we saw such a failure. After years of preparation, on that day the web-accessible version of Encyclopedia Britannica went on-line. The online version of Britannica contained the complete, unexpurgated content of the many-volume print edition, and it was freely available, at no cost to its users.

I was not the only person who dropped by on the 19th to sample Britannica’s wares. Several million others joined me – all at once. The Encyclopedia’s few servers suddenly succumbed to the overload of traffic – the servers crashed, the network connections crashed, everything crashed. When the folks at Britannica conducted a forensic analysis of the failure, they learned something shocking: the site had crashed because, within its first hours, it had attracted nearly fifty million visitors.

The Web had never seen anything like that before. Yes, there were search engines such as Yahoo! and AltaVista (and even Google), but destination websites never attracted that kind of traffic. Britannica, it seemed, had tapped into a long-standing desire for high-quality factual information. As the gold-standard reference work in the English language, Britannica needed no advertising to bring traffic to its web servers – all it needed to do was open its doors. Suddenly, everyone doing research, or writing a paper, or just plain interested in learning more about something tried to force themselves through Britannica’s too-narrow doorway.

Encyclopedia Britannica ordered some more servers, and installed a bigger pipe to the Internet, and within a few weeks was back in business. Immediately Britannica became one of the most-trafficked sites on the Web, as people came through in search of factual certainty. Yet for all of that traffic, Britannica somehow managed to lose money.

The specifics of this elude my understanding. The economics of the Web are very simple: eyeballs equal money. The more eyeballs you have, the more money you earn. That’s as true for Google as for Britannica. Yet, somehow, despite having one of the busiest websites in the world, Britannica lost money. For that reason, just a few months after it freely opened its doors to the public, Britannica hid itself behind a “paywall”, asking seven dollars a month as a fee to access its inner riches. Immediately, traffic to Britannica dropped to perhaps a hundredth of its former numbers. Britannica did not convert many of its visitors to paying customers: there may be a strong desire for factual information, but even so, most people did not consider it worth paying for. Instead, individuals continued to search for a freely available, high-quality source of factual information.

Into this vacuum Wikipedia was born. The encyclopedia that anyone can edit has always been freely available, and, because of its use of the Creative Commons license, can be freely copied. Wikipedia was the modern birth of “crowdsourcing”, the idea that vast numbers of anonymous individuals can labor together (at a distance) on a common project. Wikipedia’s openness in every respect – transparent edits, transparent governance, transparent goals – encouraged participation. People were invited to come by and sample the high-quality factual information on offer – and were encouraged to leave their own offerings. The high-quality facts encouraged visitors; some visitors would leave their own contributions, high-quality facts which would encourage more visitors, and so, in a “virtuous cycle”, Wikipedia grew as large as, and then far larger than, Encyclopedia Britannica.

Today, we don’t even give a thought to Britannica. It may be the gold-standard reference work in the English language, but no one cares. Wikipedia is good enough and accurate enough (although Wikipedia was never intended as a competitor to Britannica, by 2005 Nature was running comparative tests of article accuracy between the two), and it is much more widely available. Britannica has had its market eaten up by Wikipedia, a market it dominated for two hundred years. It wasn’t the server crash that doomed Britannica; when the business minds at Britannica tried to crash through into profitability, that’s when they crashed into the paywall they themselves had established. Watch carefully: over the next decade we’ll see the somewhat drawn-out death of Britannica as it becomes ever less relevant in a Wikipedia-dominated landscape.

Just a few weeks ago, the European Union launched a new website, Europeana. Europeana is a repository of Europe’s cultural heritage, made freely available to everyone in the world via the Web. From Descartes to Darwin to Debussy, Europeana hopes to become the online cultural showcase of European thought.

The creators of Europeana scoured Europe’s cultural institutions for items to be digitized and placed within the collection. Many of these institutions resisted the request – they saw no demand for these items from online communities. As it turns out, they couldn’t have been more wrong. Europeana launched on the 20th of November and, like Britannica before it, almost immediately crashed. The servers were overloaded as visitors from throughout the EU came to look at the collection. Europeana has been taken offline for a few months while the EU buys more servers and fatter pipes to connect them to the Internet. Sometime early in 2009 it will relaunch, and, if its brief popularity is any indication, we can expect Europeana to become another important online resource, like Wikipedia.

All three of these examples prove that there is an almost insatiable interest in factual information made available online, whether the dry articles of Wikipedia or the bouncier cultural artifacts of Europeana. It’s also clear that arbitrarily restricting access to factual information simply directs the flow around the institution doing the restricting. Britannica could be earning over a hundred million dollars a year from advertising – that’s roughly what Wikipedia has been projected to earn, just from banner advertisements, if it ever accepted advertising. But Britannica chose to lock itself away from its audience. That is the one unpardonable sin in the network era: under no circumstances do you take yourself off the network. We all have to sink or swim, crash through or crash, in this common sea of openness.
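To give a sense of how a figure like that might arise, here is a rough back-of-envelope sketch. The traffic volume and advertising rate below are my own assumptions, chosen purely for illustration; they are not figures from Britannica, Wikipedia, or any published projection.

```python
# Back-of-envelope only: every number below is assumed for illustration,
# not an actual traffic or rate-card figure.
monthly_page_views = 10_000_000_000   # assume ~10 billion page views per month
cpm_usd = 1.00                        # assume $1 earned per thousand page views

monthly_revenue = (monthly_page_views / 1000) * cpm_usd
annual_revenue = monthly_revenue * 12
print(f"~${annual_revenue:,.0f} per year")   # prints ~$120,000,000 per year
```

Even with deliberately modest assumptions, the arithmetic lands in nine figures – which is the whole point about walking away from that audience.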

I only hope that the European museums that have donated works to Europeana don’t suddenly grow possessive when the true popularity of those works becomes a proven fact. That would be messy, and would only hurt the institutions. Perhaps they’ll heed the lesson of Britannica; but it seems as though many of our institutions are mired in older ways of thinking, where selfishness and protecting the collection are seen as cardinal virtues. There’s a new logic operating: the more something is shared, the more valuable it becomes.

II: The Universal Library

Just a few weeks ago, Google took this idea to new heights. In a landmark settlement of a long-running copyright dispute with book publishers in the United States, Google agreed to pay a license fee to those publishers for their copyrighted works – even for books out of print. In return, the publishers are allowing Google to index, search and display all of the books they hold under copyright. Google already provides the full text of many books whose copyright has expired – its efforts scanning whole libraries at Harvard and Stanford have given Google access to many such texts. Each of these texts is indexed and searchable – just as with the books under copyright – but, in this case, the full text is available through Google’s book reader tool. For works under copyright but out of print, Google now acts as the sales agent, translating document searches into book sales for the publishers, who may see huge “long tail” revenues generated from their catalogues.

Since Google is available from every computer connected to the Internet (and, because it runs on most mobile handsets, to nearly every one of the four billion mobile subscribers on the planet), this new library – at least seven million volumes – is now available everywhere. The library has become coextensive with the Internet.

This was an early dream of the pioneers of personal computing and, later, of the Web. When the CD-ROM was introduced, twenty years ago, it was hailed as the “new papyrus,” capable of storing vast amounts of information in a richly hyperlinked format. As the limits of CD-ROM became apparent, the Web became the repository of the hopes of all the archivists and bibliophiles who dreamed of a new Library of Alexandria, a universal library with every text in every tongue freely available to all.

We have now gotten as close to that ideal as copyright law will allow; everything is becoming available, though perhaps not as freely as a librarian might like. (For libraries, Google has established subscription-based fees for access to books covered by copyright.) Within another few years, every book within arm’s length of Google (and Google has many, many arms) will be scanned, indexed and accessible through books.google.com. This library can be brought to bear everywhere anyone sits down before a networked screen. This library can serve billions, simultaneously, yet never exhaust its supply of texts.
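For the technically curious, here is a minimal sketch of what “a library behind every screen” looks like in practice. It assumes only that the public Google Books “volumes” search endpoint is reachable; the query itself is invented for illustration.

```python
# Minimal sketch: a full-text search against the public Google Books API.
# The query string is illustrative; treat this as a sketch, not an official client.
import json
import urllib.parse
import urllib.request

query = urllib.parse.quote("universal library Alexandria")
url = f"https://www.googleapis.com/books/v1/volumes?q={query}"

with urllib.request.urlopen(url) as response:
    results = json.load(response)

# Print the first few matching titles and their authors.
for item in results.get("items", [])[:5]:
    info = item.get("volumeInfo", {})
    print(info.get("title"), "-", ", ".join(info.get("authors", [])))
```

A dozen lines of code, run from any networked screen, and the stacks answer back.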

What does this mean for the library as we have known it? Has Google suddenly obsolesced the idea of a library as a building stuffed with books? Is there any point in going into the stacks to find a book, when that same book is equally accessible from your laptop? Obviously, books are a better form factor than our laptops – five hundred years of human interface design have given us a format which is admirably well-adapted to our needs – but in most cases, accessibility trumps ease-of-use. If I can have all of the world’s books online, that easily bests the few I can access within any given library.

In a very real sense, Google is obsolescing the library, or rather, one of the features of the library, the feature we most identify with the library: book storage. Those books are now stored on servers, scattered in multiple, redundant copies throughout the world, and can be called up anywhere, at any time, from any screen. The library has been obsolesced because it has become universal; the stacks have gone virtual, sitting behind every screen. Because the idea of the library has become so successful, so universal, it no longer means anything at all. We are all within the library.

III: The Necessary Army

With the triumph of the universal library, we must now ask: What of the librarians? If librarians were simply the keepers-of-the-books, we would expect them to fade away into an obsolescence much like that of the physical library. And though this is the popular perception of the librarian, keeping the books is in fact perhaps the least interesting of the tasks a librarian performs (although often the most visible).

The central task of the librarian – if I can be so bold as to state something categorically – is to bring order to chaos. The librarian takes a raw pile of information and makes it useful. How that happens differs from situation to situation, but all of it falls under the rubric of library science. At its most visible, the cataloging systems used in every library represent the librarian’s best effort to keep an overwhelming amount of information well-managed and well-ordered. A good cataloging system makes a library easy to use, whatever its size, however many volumes are available through its stacks.

It’s interesting to note that books.google.com uses Google’s standard text-search interface. Based on my own investigations, you can’t type in a Library of Congress classification number and get a list of the books in that subject area. Google seems to have abandoned – or ignored – library science in its own book project. I can’t tell you why this is; I can only tell you that it looks very foolish and naïve. It may be that Google’s army of PhDs does not include many library scientists. Otherwise, why would they have made such a beginner’s mistake? It smells of an amateur effort from a firm not known for amateurism.

It’s here that we can see the shape of the future, both in the immediate and the longer term. People believe that because we’re done with the library, we’re done with library science. They could not be more wrong. In fact, because the library is now universal, library science needs to become a universal skill set, more broadly taught than at any time before. We have become a data-centric culture, and we are presently drowning in data. It’s difficult enough for us to keep our collections of music and movies well organized; how can we propose to deal with collections a hundred thousand times larger?

This is not just idle speculation; we are rapidly becoming a data-generating species. Where just a few years ago we might have generated a small amount of data on a given day or in a given week, these days we generate data almost continuously. Consider: every text message sent, every email received, every snap of a camera or camera phone, every clip of video shared amongst friends. It all adds up, and it all needs to be managed and stored and indexed and retrieved with some degree of ease. Otherwise, in a few years’ time the recent past will have disappeared into the fog of unsearchability. In order to keep a connection to our past data selves, we are all going to need to become library scientists.
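By way of illustration – and only as a sketch, with a hypothetical folder and catalogue file standing in for your own – this is the modest, personal kind of cataloguing I mean: walk an archive of files and record what each item is and when it was last touched, so the recent past stays searchable.

```python
# Sketch of a personal catalogue: the folder and output filename are
# hypothetical placeholders for wherever your own archive lives.
import csv
import mimetypes
from datetime import datetime
from pathlib import Path

archive = Path.home() / "Pictures"   # assumed location of a personal archive

with open("catalogue.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["path", "type", "last_modified"])
    for item in archive.rglob("*"):
        if item.is_file():
            kind, _ = mimetypes.guess_type(item.name)
            modified = datetime.fromtimestamp(item.stat().st_mtime).isoformat()
            writer.writerow([str(item), kind or "unknown", modified])
```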

All of which puts you in a key position for the transformation already underway. You get to be the “life coaches” for our digital lifestyle, because, as these digital artifacts start to weigh us down (like Jacob Marley’s lockboxes), you will provide the guidance that will free us from these weights. Now that we’ve got it, it’s up to you to tell us how we find it. Now that we’ve captured it, it’s up to you to tell us how we index it.

We have already taken some steps along this journey: much of the digital media we create can now be “tagged” – assigned keywords which provide context and semantic value for the media. We each create “clouds” of our own tags, which evolve into “folksonomies”, or home-made taxonomies of meaning. Folksonomies and tagging are useful, but we lack the common language needed to make our digital treasures universally useful. If I tag a photograph with my own tags, that photograph becomes more useful to me, but not necessarily more useful to anyone else. Without a common, public taxonomy – a cataloging system – tagging systems will not scale into universality. That universality has value, because it allows us to extend our searches, our view, and our capability.
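A small sketch may make the point concrete. Every tag and mapping below is invented: two people label the same photograph in their own words, and only a shared vocabulary – a public taxonomy in miniature – lets their two collections be searched as one.

```python
# Illustrative only: invented tags showing why a shared vocabulary matters.
personal_tags = {
    "alice": {"photo_042.jpg": {"nan", "xmas", "backyard"}},
    "bob":   {"photo_042.jpg": {"grandmother", "christmas", "garden"}},
}

# A tiny stand-in for a common, public taxonomy.
shared_vocabulary = {
    "nan": "grandmother",
    "xmas": "christmas",
    "backyard": "garden",
}

def normalise(tags):
    """Map personal tags onto the shared vocabulary where a mapping exists."""
    return {shared_vocabulary.get(tag, tag) for tag in tags}

for person, photos in personal_tags.items():
    for photo, tags in photos.items():
        print(person, photo, sorted(normalise(tags)))
# Both collections collapse to the same terms, so one search now finds both.
```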

I could go on and on, but the basic point is this: wherever data is being created, there lies an opportunity for library science in the 21st century. Since data is now being created almost everywhere, the opportunities for library science are similarly broad. It’s up to you to show us how it’s done, lest we drown in our own creations.

Some of this won’t come to pass until you move out of the libraries and into the streets. Library scientists have to prove their worth, because most people don’t yet understand that they’re slowly drowning in a sea of their own information. This means demonstrating ways of working whose effectiveness is self-evident, so that the proof of your value is obvious. It’s up to you to throw the rest of us a life-preserver; once we’ve caught it – once we’ve caught on – your future will be assured.

The dilemma that confronts us is that, for the next several years, people will question the value of libraries: if books are available everywhere, why pay the upkeep on a building? Yet the value of a library is not the books inside, but the expertise in managing data. That can happen inside a library; it has to happen somewhere. Libraries could well evolve into the resource the public uses to help manage its digital existence. Librarians will become partners in information management, indispensable and highly valued.

In a time of such radical and rapid change, it’s difficult to know exactly where things are headed. We know that books are headed online, and that libraries will follow. But we still don’t know the fate of librarians. I believe that the transition to a digital civilization will founder without a lot of fundamental input from librarians. We are each becoming archivists of our lives, but few of us have training in how to manage an archive. You are the ones who have that knowledge. Consider: the more something is shared, the more valuable it becomes. The more you share your knowledge, the more invaluable you become. That’s the future that waits for you.

Finally, consider the examples of Britannica and Europeana. The demand for those well-curated collections of information far exceeded even the wildest expectations of their creators. Something similar lies in store for you. When you announce yourselves to the broader public as the people empowered to help us manage our digital lives, you’ll doubtless find yourselves overwhelmed by individuals seeking to benefit from your expertise. What’s more, to deal with the demand, I expect library science to become one of the hot subjects in the university curricula of the 21st century. We need you – and we need a lot more of you – if we ever hope to make sense of the wonderful wealth of data we’re creating.