Keynote for the Digital Fair of the Australian College of Educators, Geelong Grammar School, 16 April 2009.
I: The Universal Solvent
I have to admit that I am in awe of iTunes University. It’s just amazing that so many well-respected universities – Stanford, MIT, Yale, and Uni Melbourne – are willing to put their crown jewels – their lectures – online for everyone to download. It’s outstanding when even one school provides a wealth of material, but as other schools provide their own material, then we get to see some of the virtues of crowdsourcing. First, you have a virtuous cycle: as more material is shared, more material will be made available to share. After the virtuous cycle gets going, it’s all about a flight to quality.
When you have half a dozen – or a hundred – lectures on calculus, which one do you choose? The one featuring the best lecturer with the best presentation skills, the best examples, and the best math jokes – of course. This is my only complaint with iTunes University – you can’t rate the various lectures on offer. You can see which ones have been downloaded most often, but that’s not precisely the same thing as which calculus seminar or which sociology lecture is the best. So as much as I love iTunes University, I see it as halfway there. Perhaps Apple didn’t want to turn iTunes U into a popularity contest, but, without that vital bit of feedback, it’s nearly impossible for us to winnow out the wheat from the educational chaff.
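The gap between “most downloaded” and “best” is easy to see in a toy ranking sketch (the catalogue, titles, and numbers below are entirely hypothetical; a real ratings service would also need to discount small-sample ratings):

```python
# Hypothetical catalogue of calculus lectures: download counts track
# exposure, average ratings track quality -- the two orderings disagree.
lectures = [
    {"title": "Calculus I (big school)",       "downloads": 90_000, "avg_rating": 3.1},
    {"title": "Calculus I (great teacher)",    "downloads": 12_000, "avg_rating": 4.8},
    {"title": "Calculus I (dry but thorough)", "downloads": 40_000, "avg_rating": 3.9},
]

# What iTunes U surfaces today: raw popularity.
by_downloads = max(lectures, key=lambda l: l["downloads"])
# What a RateMyLectures.com could surface instead: judged quality.
by_rating = max(lectures, key=lambda l: l["avg_rating"])

print(by_downloads["title"])
print(by_rating["title"])
```

In this sketch the most-downloaded lecture and the highest-rated one are different entries, which is exactly the information the missing ratings layer would add.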
This is something that has to happen inside the system; it could happen across a thousand educational blogs spread out across the Web, but then it’s too diffuse to be really helpful. The reviews have to be coordinated and collated – just as with RateMyProfessors.com.
Say, that’s an interesting point. Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University? If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings. And as iTunes University gets bigger and bigger, RateMyLectures.com becomes ever more important, the ultimate guide to the ultimate source of educational multimedia on the Internet. One needs the other to be wholly useful; without ratings iTunes U is just an undifferentiated pile of possibilities. But with ratings, iTunes U becomes a highly focused and effective tool for digital education.
Now let’s cast our minds ahead a few semesters: iTunes U is bigger and better than ever, and RateMyLectures.com has benefited from the hundreds of thousands of contributed reviews. Those reviews extend beyond the content in iTunes U, out into YouTube and Google Video and Vimeo and Blip.tv and wherever people are creating lectures and putting them online. Now anyone can come by the site and discover the absolute best lecture on almost any subject they care to research. The net is now cast globally; I can search for the best lecture on Earth, so long as it’s been captured and uploaded somewhere, and someone’s rated it on RateMyLectures.com.
All of a sudden we’ve imploded the boundaries of the classroom. The lecture can come from the US, or the UK, or Canada, or New Zealand, or any other country. Location doesn’t matter – only its rating as ‘best’ matters. This means that every student, every time they sit down at a computer, already has, or soon will have, the absolute best lectures available, globally. That’s just a mind-blowing fact. It grows very naturally out of our desire to share and our desire to share ratings about what we have shared. Nothing extraordinary needed to happen to produce this entirely extraordinary state of affairs.
The network is acting like a universal solvent, dissolving all of the boundaries that have kept things separate. It’s not just dissolving the boundaries of distance – though it is doing that – it’s also dissolving the boundaries of preference. Although there will always be differences in taste and delivery, some instructors are simply better lecturers – in better command of their material – than others. Those instructors will rise to the top. Just as RateMyProfessors.com has created a global market for the lecturers with the highest ratings, RateMyLectures.com will create a global market for the best performances, the best material, the best lessons.
That RateMyLectures.com is only a hypothetical shouldn’t put you off. Part of what’s happening at this inflection point is that we’re all collectively learning how to harness the network for intelligence augmentation – Engelbart’s final triumph. All we need do is identify an area which could benefit from knowledge sharing and, sooner rather than later, someone will come along with a solution. I’d actually be very surprised if a service a lot like RateMyLectures.com doesn’t already exist. It may be small and unimpressive now. But Wikipedia was once small and unimpressive. If it’s useful, it will likely grow large enough to be successful.
Of course, lectures alone do not an education make. Lectures are necessary but are only one part of the educational process. Mentoring and problem solving and answering questions: all of these take place in the very real, very physical classroom. The best lectures in the world are only part of the story. The network is also transforming the classroom, from inside out, melting it down, and forging it into something that looks quite a bit different from the classroom we’ve grown familiar with over the last 50 years.
II: Fluid Dynamics
If we take the examples of RateMyProfessors.com and RateMyLectures.com and push them out a little bit, we can see the shape of things to come. Stanford University and the Massachusetts Institute of Technology, both of which have placed their entire sets of lectures online through iTunes University, assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, but it discounts the possibility that some individual or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.
When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?
At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open” school, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.
In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.
The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In an age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them make sense of it, contextualizing and informing their understanding even as the students continue to work their way through the ever-growing set of information. The instructor cannot know everything available online on any subject, but will be aware of the best (or at least favorite) resources, and will pass along these resources as a key outcome of the educational process. Instructors facilitate and mentor, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.
The administration has gone, the instructor’s role has evolved, now what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If it can happen entirely online, that will be the classroom. If it requires substantial presence with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode, vanishing online, and explode: the world will become the classroom.
This much, then, can already be predicted from current trends; as the network begins to destabilize the institutional hierarchies in education, everything else becomes inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst-case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside as those students rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.
III: Digital Citizenship
Obviously, much of what I’ve described here in the “melting down” of the educational process applies first and foremost to university students. That’s where most of the activity is taking place. But I would argue that it only begins with university students. From there – just like Facebook – it spreads across the gap between tertiary and secondary education, and into the high schools and colleges.
This is significant and interesting because it’s at this point that we, within Australia, run headlong into the Government’s plan to provide laptops for all year 9 through year 12 students. Some schools will start earlier; there’s a general consensus among educators that year 7 is the earliest time a student should be trusted to behave responsibly with their “own” computer. Either way, the students will be fully equipped and capable of using all of the tools at hand to manage their own education.
But will they? Some of this is a simple question of discipline: will the students be disciplined enough to take an ever-more-active role in the co-production of their education? As ever, the question is neither black nor white; some students will demonstrate the qualities of discipline needed to allow them to assume responsibility for their education, while others will not.
But, somewhere along here, there’s the presumption of some magical moment during the secondary school years, when the student suddenly learns how to behave online. And we already know this isn’t happening. We see too many incidents where students make mistakes, behaving badly without fully understanding that the whole world really is watching.
In the early part of this year I did a speaking tour with the Australian Council of Educational Researchers; during the tour I did a lot of listening. One thing I heard loud and clear from the educators is that giving a year 7 student a laptop is the functional equivalent of giving them a loaded gun. And we shouldn’t be surprised, when we do this, when there are a few accidental – or volitional – shootings.
I mentioned this in a talk to TAFE educators last week, and one of the attendees suggested that we needed to teach “Digital Citizenship”. I’d never heard the phrase before, but I’ve taken quite a liking to it. Of course, by the time a student gets to TAFE, the damage is done. We shouldn’t start talking about digital citizenship in TAFE. We should be talking about it from the first days of secondary education. And it’s not something that should be confined to the school: parents are on the hook for this, too. Even when the parents are not digitally literate, they can impart the moral and ethical lessons of good behavior to their children, lessons which will transfer to online behavior.
Make no mistake, without a firm grounding in digital citizenship, a secondary student can’t hope to make sense of the incredibly rich and impossibly distracting world afforded by the network. Unless we turn down the internet connection – which always seems like the first option taken by administrators – students will find themselves overwhelmed. That’s not surprising: we’ve taught them few skills to help them harness the incredible wealth available. In part that’s because we’re only just learning those skills ourselves. But in part it’s because we would have to relinquish control. We’re reluctant to do that. A course in digital citizenship would help both students and teachers feel more at ease with one another when confronted by the noise online.
Make no mistake, this inflection point in education is inevitably going to cross the gap between tertiary and secondary schools and students. Students will be able to do for themselves in ways that were never possible before. None of this means that the teacher or even the administrator has necessarily become obsolete. But the secondary school of the mid-21st century may look a lot more like a website than a campus. The classroom will have a fluid look, driven by the teacher, the students and the subject material.
Have we prepared students for this world? Have we given them the ability to make wise decisions about their own education? Or are we like those university administrators who mutter about how RateMyProfessors.com has ruined all their carefully-laid plans? The world where students were simply the passive consumers of an educational product is coming to an end. There are other products out there, clamoring for attention – you can thank Apple for that. And YouTube.
Once we get through this inflection point in the digital revolution in education, we arrive in a landscape that’s literally mind-blowing. We will each have access to educational resources far beyond anything on offer at any other time in human history. The dream of life-long learning will be simply a few clicks away for most of the billion people on the Internet, and many of the four billion who use mobiles. It will not be an easy transition, nor will it be perfect on the other side. But it will be incredible, a validation of everything Douglas Engelbart demonstrated forty years ago, and an opportunity to create a truly global educational culture, focused on excellence, and dedicated to serving all students, everywhere.
My brief keynote to the ICT Roundtable of the TAFE Sydney Institute. Recorded on Wednesday, 13 August 2008. Many thanks to Trish James and Stephan Ridgway for arranging the audio recording!
I. The Wheels Fall Off the Cart
In mid-1994, sometime shortly after Tony Parisi and I had fused the new technology of the World Wide Web to a 3D visualization engine, to create VRML, we paid a visit to the University of California, Santa Cruz, about 120 kilometers south of San Francisco. Two UCSC students wanted to pitch us on their own web media project. The Internet Underground Music Archive, or IUMA, featured a simple directory of artists, complete with links to MP3 files of these artists’ recordings. (Before I go any further, I should state that they had all the necessary clearances to put musical works up onto the Web – IUMA was not violating anyone’s copyrights.) The idea behind IUMA was simple enough, the technology absolutely straightforward – and yet, for all that, it was utterly revolutionary. Anyone, anywhere could surf over to the IUMA site, pick an artist, then download a track and play it.
This was in the days before broadband, so downloading a multi-megabyte MP3 recording could take upwards of an hour per track – something that seems ridiculous today, but was still so potent back in 1994 that IUMA immediately became one of the most popular sites on the still-quite-tiny Web. The founders of IUMA – Rob Lord and Jon Luini – wanted to create a place where unsigned or non-commercial musicians could share their music with the public in order to reach a larger audience, gain recognition, and perhaps even end up with a recording deal. IUMA was always better as a proof-of-concept than as a business opportunity, but the founders did get venture capital, and tried to make a go of selling music online. However, given the relative obscurity of the musicians on IUMA, and the pre-iPod lack of pervasive MP3 players, IUMA ran through its money by 2001, shuttering during the dot-com implosion of the same year. Despite that, every music site which followed IUMA, legal and otherwise, from Napster to Rhapsody to iTunes, has walked in its footsteps. Now, nearing the end of the first decade of the 21st century, we have a broadband infrastructure capable of delivering MP3s, and several hundred million devices which can play them. IUMA was a good idea, but five years too early.
Just forty-eight hours ago, a new music service, calling itself Qtrax, aborted its international launch – though it promises to be up “real soon now.” Qtrax also promises that anyone, anywhere will be able to download any of its twenty-five million songs perfectly legally, and listen to them practically anywhere they like – along with an inserted advertisement. Using peer-to-peer networking to relieve the burden on its own servers, and Digital Rights Management, or DRM, Qtrax ensures that there are no abuses of these pseudo-free recordings.
Most of the words that I used to describe Qtrax in the preceding paragraph didn’t exist in common usage when IUMA disappeared from the scene in the first year of this millennium. The years between IUMA and Qtrax are a geological age in Internet time, so it’s a good idea to walk back through that era and have a good look at the fossils which speak to how we evolved to where we are today.
In 1999, a curly-haired undergraduate at Boston’s Northeastern University built a piece of software that allowed him to share his MP3 collection with a few of his friends on campus, and allowed him access to their MP3s. The software scanned the MP3s on each hard drive, publishing the list to a shared database and allowing each person using it to download MP3s from someone else’s hard drive to his own. This is simple enough, technically, but Shawn Fanning’s Napster created a dual-headed revolution. First, it was the killer app for broadband: using Napster on a dial-up connection was essentially impossible. Second, it completely ignored the established systems of distribution used for recorded music.
This second point is the one which has the most relevance to my talk this morning; Napster had an entirely unpredicted effect on the distribution methodologies which had been the bedrock of the recording industry for the past hundred years. The music industry grew up around the licensing, distribution and sale of a physical medium – a piano roll, a wax recording, a vinyl disk, a digital compact disc. However, when the recording industry made the transition to CDs in the 1980s (and reaped windfall profits as the public purchased new copies of older recordings) they also signed their own death warrants. Digital recordings are entirely ephemeral, composed only of mathematics, not of matter. Any system which transmitted the mathematics would suffice for the distribution of music, and the compact disc met this need only until computers were powerful enough to play the more compact MP3 format, and broadband connections were fast enough to allow these smaller files to be transmitted quickly. Napster leveraged both of these criteria – the mathematical nature of digitally-encoded music and the prevalence of broadband connections on America’s college campuses – to produce a sensation.
In its earliest days, Napster reflected the tastes of its college-age users, but, as word got out, the collection of tracks available through Napster grew more varied and more interesting. Many individuals took recordings that were only available on vinyl, and digitally recorded them specifically to post them on Napster. Napster quickly had a more complete selection of recordings than all but the most comprehensive music stores. This only attracted more users to Napster, who added more oddities from their own collections, which attracted more users, and so on, until Napster came to be seen as the authoritative source for recorded music.
Given that all of this “file-sharing”, as it was termed, happened outside of the economic systems of distribution established by the recording industry, it was taking money out of their pockets – perhaps billions of dollars a year, had all of these downloads been converted into sales. (Studies indicate this was unlikely – college students have ever been poor.) The recording industry launched a massive lawsuit against Napster in 2000, forcing the service to shutter in 2001, just as it reached an incredible peak of 14 million simultaneous users, out of a worldwide broadband population of probably only 100 million. This means that one in seven computers connected to the broadband internet was using Napster just as it was being shut down.
Here’s where it gets more interesting: the recording industry thought they’d brought the horse back into the barn. What they hadn’t realized was that the gate had burnt down. The millions of Napster users had their appetites whetted by a world where an incredible variety of music was instantaneously available with a few clicks of the mouse. In the absence of Napster, that pressure remained, and it only took a few weeks for a few enterprising engineers to create a successor to Napster, known as Gnutella, which provided the same service as Napster, but used a profoundly different technology for its filesharing. Where Napster had all of its users register their tracks within a centralized database (which disappeared when Napster was shut down), Gnutella created a vast, amorphous, distributed database, spread out across all of the computers running Gnutella. Gnutella had no center to strike at, and therefore could not be shut down.
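The architectural difference described above can be made concrete with a toy sketch (this is an illustration of the centralized-versus-distributed idea only, not of either real protocol, and all names are invented):

```python
# Napster model: every peer registers its tracks with one central server.
# Kill the server, and every lookup dies with it.
class CentralIndex:
    def __init__(self):
        self.index = {}  # track name -> set of peers holding it

    def register(self, peer, tracks):
        for track in tracks:
            self.index.setdefault(track, set()).add(peer)

    def lookup(self, track):
        return self.index.get(track, set())


# Gnutella model: each peer knows only its neighbours and its own tracks;
# a query floods outward peer to peer, so there is no center to strike at.
class Peer:
    def __init__(self, name, tracks):
        self.name = name
        self.tracks = set(tracks)
        self.neighbours = []

    def query(self, track, seen=None):
        if seen is None:
            seen = set()
        if self.name in seen:        # don't revisit peers already asked
            return set()
        seen.add(self.name)
        hits = {self.name} if track in self.tracks else set()
        for n in self.neighbours:    # forward the query to neighbours
            hits |= n.query(track, seen)
        return hits
```

Deleting the `CentralIndex` instance ends every Napster-style lookup at once; the Gnutella-style query keeps succeeding as long as any reachable peer still holds the track.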
It is because of the actions of the recording industry that Gnutella was developed. If legal pressure hadn’t driven Napster out of business, Gnutella would not have been necessary. The recording industry turned out to be its own worst enemy, because it turned a potentially profitable relationship with its customers into an ever-escalating arms race of file-sharing tools, lawsuits, and public relations nightmares.
Once Gnutella and its descendants – Kazaa, Limewire, and Acquisition – arrived on the scene, the listening public had wholly taken control of the distribution of recorded music. Every attempt to shut down these ever-more-invisible “darknets” has ended in failure and only spurred the continued growth of these networks. Now, with Qtrax, the recording industry is seeking to make an accommodation with an audience which expects music to be both free and freely available, falling back on advertising revenue to recover some of their production costs.
At first, it seemed that filmic media would be immune from the disruptions that have plagued the recording industry – films and TV shows, even when heavily compressed, are very large files, on the order of hundreds of millions of bytes of data. Systems like Gnutella, which allow you to transfer a file directly from one computer to another are not particularly well-suited to such large file transfers. In 2002, an unemployed programmer named Bram Cohen solved that problem definitively with the introduction of a new file-sharing system known as BitTorrent.
BitTorrent is a bit mysterious to most everyone not deeply involved in technology, so a brief explanation of its inner workings is in order. Suppose, for a moment, that I have a short film, just 1000 frames in length, digitally encoded on my hard drive. If I wanted to share this film with each of you via Gnutella, you’d have to wait in a queue as I served up the film, time and time again, to each of you. The last person in the queue would wait quite a long time. But if, instead, I gave the first ten frames of the film to the first person in the queue, and the second ten frames to the second person in the queue, and the third ten frames to the third person in the queue, and so on, until I’d handed out all thousand frames, all I need do at that point is tell each of you that your “peers” have the missing frames, and that you need to get them from those peers. A flurry of transfers would result, as each peer picked up the pieces it needed to make a complete whole from other peers. From my point of view, I only had to transmit the film once – something I can do relatively quickly. From your point of view, none of you had to queue to get the film – because the pieces were scattered widely around, in little puzzle pieces, that you could gather together on your own.
That’s how BitTorrent works. It is both incredibly efficient and incredibly resilient – peers can come and go as they please, yet the total number of peers guarantees that somewhere out there an entire copy of the film is available at all times. And, even more perversely, the more people who want copies of my film, the easier it is for each successive person to get a copy of the film – because there are more peers to grab pieces from. This group of peers, known as a “swarm”, is the most efficient system yet developed for the distribution of digital media. In fact, a single, underpowered computer, on a single, underpowered broadband link can, via BitTorrent, create a swarm of peers. BitTorrent allows anyone, anywhere, to distribute any large media file at essentially no cost.
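The piece-swapping scheme just described can be sketched in a few lines (a deliberately stripped-down model of the core idea, with no trackers, hashing, or choking logic, and all function names invented for illustration):

```python
# Toy swarm: the seed uploads each piece exactly once, then peers
# assemble complete copies by trading pieces among themselves.
def hyperdistribute(pieces, n_peers):
    """Return each peer's finished copy plus the seed's upload count."""
    holdings = [set() for _ in range(n_peers)]
    seed_uploads = 0

    # Round 1: the seed deals distinct pieces out across the peers,
    # so the whole film exists in the swarm after one full transmission.
    for i, piece in enumerate(pieces):
        holdings[i % n_peers].add(piece)
        seed_uploads += 1

    # Round 2: each peer fetches its missing pieces from the other
    # peers -- none of this traffic touches the seed.
    everything = set(pieces)
    for held in holdings:
        held |= everything - held

    return holdings, seed_uploads
```

Running this with 1000 frames and ten peers, the seed's upload count equals exactly one copy of the film, yet every peer ends up with all 1000 frames – which is why adding peers makes distribution easier, not harder.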
It is estimated that upwards of 60% of all traffic on the Internet is composed of BitTorrent transfers. Much of this traffic is perfectly legitimate – software, such as the free Linux operating system, is distributed using BitTorrent. Still, it is well known that movies and television programmes are also distributed using BitTorrent, in violation of copyright. This became absolutely clear on the 14th of October 2004, when Sky Broadcasting in the UK premiered the first episode of Battlestar Galactica, Ron Moore’s dark re-imagining of the famously schlocky 1970s TV series. Because the American distributor, SciFi Channel, had chosen to hold off until January to broadcast the series, fans in the UK recorded the programmes and posted them to BitTorrent for American fans to download. Hundreds of thousands of copies of the episodes circulated in the United States – and conventional thinking would reckon that this would seriously impact the ratings of the show upon its US premiere. In fact, precisely the opposite happened: the show was so well written and produced that the word-of-mouth engendered by all this mass piracy created an enormous broadcast audience for the series, making it the most successful in SciFi Channel history.
In the age of BitTorrent, piracy is not necessarily a menace. The ability to “hyperdistribute” a programme – using BitTorrent to send a single copy of a programme to millions of people around the world efficiently and instantaneously – creates an environment where the more something is shared, the more valuable it becomes. This seems counterintuitive, but only in the context of systems of distribution which were part-and-parcel of the scarce exhibition outlets of theaters and broadcasters. Once everyone, everywhere had the capability to “tune into” a BitTorrent broadcast, the economics of distribution were turned on their heads. The distribution gatekeepers, stripped of their power, whinge about piracy. But, as was the case with recorded music, the audience has simply asserted its control over distribution. This is not about piracy. This is about the audience getting whatever it wants, by any means necessary. They have the tools, they have the intent, and they have the power of numbers. It is foolishness to insist that the future will be substantially different from the world we see today. We cannot change the behavior of the audience. Instead, we must all adapt to things as they are.
But things as they are have changed more than you might know. This is not the story of how piracy destroyed the film industry. This is the story of how the audience became not just the distributors but the producers of their own content and, in so doing, brought down the high walls which separate professionals from amateurs.
II. The Barbarian Hordes Storm the Walls
Without any doubt the most outstanding success of the second phase of the Web (known colloquially as “Web 2.0”) is the video-sharing site YouTube. Founded in early 2005, as of yesterday YouTube was the third most visited site on the entire Web, trailing only Yahoo! and YouTube’s parent, Google. There are a lot of videos on YouTube. I’m not sure if anyone knows quite how many, but they easily number in the tens of millions, quite likely approaching a hundred million. Another hundred thousand videos are uploaded each day; YouTube grows by three million videos a month. That’s a lot of video, difficult even to contemplate. But an understanding of YouTube is essential for anyone in the film and television industries in the 21st century, because, in the most pure, absolute sense, YouTube is your competitor.
Let me unroll that statement a bit, because I don’t wish it to be taken as simply as it sounds. It’s not that YouTube is competing with you for dollars – it isn’t, at least not yet – but rather, it is competing for attention. Attention is the limiting factor for the audience; we are cashed up but time-poor. Yet, even as we’ve become so time-poor, the number of options for how we can spend that time entertaining ourselves has grown so grotesquely large as to be almost unfathomable. This is the real lesson of YouTube, the one I want you to consider in your deliberations today. In just the past three years we have gone from an essential scarcity of filmic media – presented through limited and highly regulated distribution channels – to a hyperabundance of viewing options.
This hyperabundance of choices, it was supposed until recently, would lead to a sort of “decision paralysis,” whereby the viewer would be so overwhelmed by the number of choices on offer that they would simply run back, terrified, to the highly regularized offerings of the old-school distribution channels. This has not happened; in fact, the opposite has occurred: the audience is fragmenting, breaking up into ever-smaller “microaudiences”. It is these microaudiences that YouTube speaks directly to. The language of microaudiences is YouTube’s native tongue.
In order to illustrate the transformation that has completely overtaken us, let’s consider a hypothetical fifteen year-old boy, home after a day at school. He is multi-tasking: texting his friends, posting messages on Bebo, chatting away on IM, surfing the web, doing a bit of homework, and probably taking in some entertainment. That might be coming from a television, somewhere in the background, or it might be coming from the Web browser right in front of him. (Actually, it’s probably both simultaneously.) This teenager has a limited suite of selections available on the telly – even with satellite or cable, there won’t be more than a few hundred choices on offer, and he’s probably settled for something that, while not incredibly satisfying, is good enough to play in the background.
Meanwhile, on his laptop, he’s viewing a whole series of YouTube videos that he’s received from his friends; they’ve found these videos in their own wanderings, and immediately forwarded them along, knowing that he’ll enjoy them. He views them, and laughs, he forwards them along to other friends, who will laugh, and forward them along to other friends, and so on. Sharing is an essential quality of all of the media this fifteen year-old has ever known. In his eyes, if it can’t be shared, a piece of media loses most of its value. If it can’t be forwarded along, it’s broken.
For this fifteen year-old, the concept of a broadcast network no longer exists. Television programmes might be watched as they’re broadcast over the airwaves, but more likely they’re spooled off of a digital video recorder, or downloaded from the torrent and watched where and when he chooses. The broadcast network has been replaced by the social network of his friends, all of whom are constantly sharing the newest, coolest things with one another. The current hot item might be something that was created at great expense for a mass audience, but the relationship between a hot piece of media and its meaningfulness for a microaudience is purely coincidental. All the marketing dollars in the world can foster some brand awareness, but no amount of money will inspire that fifteen year old to forward something along – because his social standing hangs in the balance. If he passes along something lame, he’ll lose social standing with his peers. This factors into every decision he makes, from the brand of runners he wears, to the television series he chooses to watch. Because of the hyperabundance of media – something he takes as a given, not as an incredibly recent development – all of his media decisions are weighed against the values and tastes of his social network, rather than against a scarcity of choices.
This means that the true value of media in the 21st century is entirely personal, and based upon the salience, that is, the importance, of that media to the individual and that individual’s social network. The mass market, with its enforced scarcity, simply does not enter into his calculations. Yes, he might go to the theatre to see Transformers with his mates; but he’s just as likely to download a copy recorded in the movie theatre with an illegally smuggled-in camera that was uploaded to The Pirate Bay a few hours after its release.
That’s today. Now let’s project ourselves five years into the future. YouTube is still around, but now it has more than two hundred million videos (probably much more), all available, all the time, from short-form to full-length features, many of which are now available in high-definition. There’s so much “there” there that it is inconceivable that conventional media distribution mechanisms of exhibition and broadcast could compete. For this twenty year-old, every decision to spend some of his increasingly valuable attention watching anything is measured against salience: “How important is this for me, right now?” When he weighs the latest episode of a TV series against some newly-made video that is meant only to appeal to a few thousand people – such as himself – that video will win, every time. It more completely satisfies him. As the number of videos on offer through YouTube and its competitors continues to grow, the number of salient choices grows ever larger. His social network, communicating now through Facebook and MySpace and next-generation mobile handsets and iPods and goodness-knows-what-else, is constantly delivering an ever-growing and increasingly relevant suite of media options. He, as a vital node within his social network, is doing his best to give as good as he gets. His reputation depends on being “on the tip.”
When the barriers to media distribution collapsed in the post-Napster era, the exhibitors and broadcasters lost control of distribution. What no one had expected was that the professional producers would lose control of production. The difference between an amateur and a professional – in the media industries – has always centered on the point that the professional sells their work into distribution, while the amateur uses wits and will to self-distribute. Now that self-distribution is more effective than professional distribution, how do we distinguish between the professional and the amateur? This twenty year-old doesn’t know, and doesn’t care.
There is no conceivable way that the current systems of film and television production and distribution can survive in this environment. This is an uncomfortable truth, but it is the only truth on offer this morning. I’ve come to this conclusion slowly, because it seems to spell the death of a hundred year-old industry with many, many creative professionals. In this environment, television is already rediscovering its roots as a live medium, increasingly focusing on news, sport and “event” based programming, such as Pop Idol, where being there live is the essence of the experience. Broadcasting is uniquely designed to support the efficient distribution of live programming. Hollywood will continue to churn out blockbuster after blockbuster, seeking a warmed-over middle ground of thrills and chills which ensures that global receipts will cover the ever-increasing production costs. In this form, both industries will continue for some years to come, and will probably continue to generate nice profits. But the audience’s attentions have turned elsewhere. They’re not returning.
This future almost completely excludes “independent” production, a vague term which basically means any production which takes place outside of the media megacorporations (News Corp, Disney, Sony, Universal and TimeWarner), which increasingly dominate the mass media landscape. Outside of their corporate embrace, finding an audience sufficient to cover production and marketing costs has become increasingly difficult. Film and television have long been losing economic propositions (except for the most lucky), but they’re now becoming financially suicidal. National and regional funding bodies are growing increasingly intolerant of funding productions which can not find an audience; soon enough that pipeline will be cut off, despite the damage to national cultures. Australia funds the Film Finance Corporation and the Australian Film Council to the tune of a hundred million dollars a year, to ensure that Australian stories are told by Australian voices; but Australians don’t go to see them in the theatres, and don’t buy them on DVD.
The center cannot hold. Instead, YouTube, which founder Steve Chen insists has “no gold standard” of production values, is rapidly becoming the vehicle for independent productions; productions which cost not millions of euros, but hundreds, and which make up for their low production values in salience and in overwhelming numbers. This tsunami of content cannot be stopped or even slowed down; it has nothing to do with piracy (only nine percent of the videos viewed on YouTube are violations of copyright) but reflects the natural accommodation of the audience to an era of media hyperabundance.
What then, is to be done?
III: And the Penny Drops
It isn’t all bad news. But, like a good doctor, I want to give you the bad news right up front: There is no single, long-term solution for film or television production. No panacea. It’s not even entirely clear that the massive Hollywood studios will be able to do business as usual for any length of time into the future. Just a decade ago the entire music recording industry seemed impregnable. Now it lies in ruins. To assume that history won’t repeat itself is more than willful ignorance of the facts; it’s bad business.
This means that the one-size-fits-all production-to-distribution model, which all of you have been taught as the orthodoxy of the media industries, is worse than useless; it’s actually blocking your progress because it is effectively keeping you from thinking outside the square. This is a wholly new world, one which is littered with golden opportunities for those able to avail themselves of them. We need to get you from where you are – bound to an obsolete production model – to where you need to be. Let me illustrate this transition with two examples.
In early 2005, producer Rhonda Byrne got a production agreement with Channel NINE, then the number one Australian television network, to make a feature-length television programme about the “law of attraction”, an idea she’d learned of when reading a book published in 1910, The Science of Getting Rich. The interviews and other footage were shot in July and August, and after a few months in the editing suite, she showed the finished production to executives at Channel NINE, who declined to broadcast it, believing it lacked mass appeal. Since Byrne wasn’t going to be getting broadcast fees from Channel NINE to cover her production costs, she negotiated a new deal with NINE, allowing her to sell DVDs of the completed film.
At this point Byrne began spreading news of the film virally, through the communities she thought would be most interested in viewing it; specifically, spiritual and “New Age” communities. People excited by Byrne’s teaser marketing could pay $20 for a DVD copy of the film (with extended features), or pay $5 to watch a streaming version directly on their computer. As the film made its way to its intended audience, word-of-mouth caused business to mushroom overnight. The Secret became a blockbuster, selling millions of copies on DVD. A companion book, also titled The Secret, has sold over two million copies. And that arbiter of American popular taste, Oprah, has featured the film and book on her talk show, praising both to the skies. The film has earned back many, many times its production costs, making Byrne a wealthy woman. She’s already deep into the production of a sequel to The Secret – a film which already has an audience identified and targeted.
Chagrined, the television executives of Channel NINE finally did broadcast The Secret in February 2007. It didn’t do that well. This sums up the paradox of distribution in the age of the microaudience. Clearly The Secret had a massive world-wide audience, but television wasn’t the most effective way to reach them, because this audience was actually a collection of microaudiences, rather than a single, aggregated audience. If The Secret had opened theatrically, it’s unlikely it would have done terribly well; it’s the kind of film that people want to watch more than once, being in equal parts a self-help handbook and a series of inspirational stories. It is well-suited for a direct-to-DVD release – a distribution vehicle that no longer has the stigma of “failure” associated with it. It is also well-suited to cross-media projects, such as books, conferences, streamed delivery, podcasts, and so forth. Having found her audience, Byrne has transformed The Secret into an exceptional money-making franchise, as lucrative, in its own way, and at its own scale, as any Hollywood franchise.
The second example is utterly different from The Secret, yet the fundamentals are strikingly similar. Just last month a production group calling themselves “The League of Peers” released a film titled Steal This Film, Part 2. The first part of this film, released in late 2006, dealt with the rise of file-sharing, and, in specific, with the legal troubles of the world’s largest BitTorrent site, Sweden’s The Pirate Bay. That film, although earnest and coherent, felt as though it was produced by individuals still learning the craft of filmmaking. This latest film looks as professional as any documentary created for BBC’s Horizon or PBS’s Frontline or ABC’s 4Corners. It is slick, well-lit, well-edited, and has a very compelling story to tell about the history of copying – beginning with the invention of the printing press, five hundred years ago. Steal This Film is a political production, a bit of propaganda with a clear bias. This, in itself, is not uncommon in a documentary. The funding and distribution model for this film is what sets it apart.
Individuals who saw Steal This Film, Part One – which was made freely available for download via BitTorrent – were invited to contribute to the making of the sequel. Nearly five million people downloaded Steal This Film, Part One, so there was a substantial base of contributors to draw from. (I myself donated five dollars after viewing the film. If every viewer had done likewise that would cover the budget of a major Hollywood production!) The League of Peers also approached arts funding bodies, such as the British Documentary Council, with their completed film in hand, the statistics showing that their work reached a large audience, and a roadmap for the second film – this got them additional funding. Now that Steal This Film, Part Two has been released, viewers are again invited to contribute (if they like the film), and are promised a “secret gift” for contributions of $15 or more. While the tip jar – literally, busking – may seem a very weird way to fund a film production, it’s likely that Steal This Film, Part Two will find an even wider audience than Part One, and that the coffers of the League of Peers will provide them with enough funds to embark on their next film, The Oil of the 21st Century, which will focus on the evolution of intellectual property into a traded commodity.
I have asked Screen Training Ireland to include a DVD of Steal This Film, Part Two with the materials you received this morning. You’ve been given the DVD version of the film, but I encourage you to download the other versions of the film: the XVID version, for playback on a PC; the iPod version, for portable devices; and the high-definition version, for your visual enjoyment. It’s proof positive that a viable economic model exists for film, even when it is given away. It will not work for all productions, but there is a global community of individuals who are intensely interested in factual works about copyright and intellectual property in the 21st century, who find these works salient, and who are underserved by the media megacorporations, who would not consider it in their own economic best interest to produce or distribute such works. The League of Peers, as part of the community whom this film is intended for, knew how to get the word out about the film (particularly through Boing Boing, the most popular blog in the world, with two million readers a week), and, within a few weeks, nearly everyone who should have heard of the film had heard about it – through their social networks.
Both The Secret and Steal This Film, Part Two are factual works, and it’s clear that this emerging distribution model – which relies on targeting communities of interest – works best with factual productions. One of the reasons that there has been such an upsurge in the production of factual works over the past few years is that these works have been able to build their own funding models upon a deep knowledge of the communities they are talking to – made by microaudiences, for microaudiences. But microaudiences, scaled to global proportions, can easily number in the millions. Microaudiences are perfectly willing to pay for something or contribute to something they consider of particular value and salience; it is a visible thank you, a form of social reinforcement which is very natural within social networks.
What about drama, comedy and animation? Short-form comedy and animation probably have the easiest go of it, because they can be delivered online with an advertising payload of some sort. Happy Tree Friends is a great example of how this works – but it took producers Mondo Media nearly a decade to stumble into a successful economic model. Feature-length comedy and feature-length drama are more difficult nuts to crack, but they are not impossible. Again, the key is to find the communities which will be most interested in the production; this is not always entirely obvious, but the filmmaker should have some idea of the target audience for their film. While in preproduction, these communities need to be wooed and seduced into believing that this film is meant just for them, that it is salient. Productions can be released through complementary distribution channels: a limited, occasional run in rented exhibition spaces (which can be “events”, created to promote and showcase the film); direct DVD sales (which are highly lucrative if the producer does this directly); online distribution vehicles such as iTunes Movie Store; and through “community” viewing, where a DVD is given to a few key members of the community in the hopes that word-of-mouth will spread in that community, generating further DVD sales.
None of this guarantees success, but it is the way things work for independent productions in the 21st century. All of this is new territory. It isn’t a role that belongs neatly to the producer of the film, nor, in the absence of studio muscle, is it something that a film distributor would be competent at. It may not be the producer’s job, but it is someone’s job. Starting at the earliest stages of pre-production, someone has to sit down with the creatives and the producer and ask the hard questions: “Who is this film intended for?” “What audiences will want to see this film – or see it more than once?” “How do we reach these audiences?” From these first questions, it should be possible to construct a marketing campaign which leverages microaudiences and social networks into ticket receipts and DVD sales and online purchases.
So, as you sit down to do your planning today, and discuss how to move Irish screen industries into the 21st century, ask yourselves who will be fulfilling this role. The producer is already overloaded, time-poor, and may not be particularly good at marketing. The director has a vision, but may be ill-equipped to work with communities. This is a new role, one that is utterly vital to the success of the production, but one which is not yet budgeted for, and one which we do not yet train people to fill. Individuals have succeeded in this new model through their own tireless efforts, but each of these efforts has been scattershot; there is a way to systematize this. While every production and every marketing plan will be unique – drawn from the fundamentals of the story being told – there are commonalities across productions which people will be able to absorb and apply, production after production.
One of my favorite quotes from science fiction writer William Gibson goes, “The future is already here, it’s just not evenly distributed.” This is so obviously true for film and television production that I need only close by noting that there are a lot of success stories out there, individuals who have taken the new laws of hyperdistribution and sharing and turned them to their own advantage. It is a challenge, and there will be failures; but we learn more from our failures than from our successes. Media production has always been a gamble; but the audiences of the 21st century make success easier to achieve than ever before.
Few terms convey less meaning than “futurist.” What exactly is a futurist? What does he do? The definition, so far as I choose to apply it, is simple: a futurist looks at the present, at human behavior and human tendencies, to imagine how these trends might develop. This is less science than storytelling; the development of any human endeavor is fraught with non-linear events, which yank the arrow of progress this way and that. One can never know the future with any precision, and the farther the future recedes down the light-cone, the less distinct it becomes. We might know with high accuracy what will happen tomorrow. But five years from now, or twenty? That’s more alchemy than anthropology.
Yet, in order to play the game, futurists must make predictions. It’s what we do. So, for those few futurists who are willing to take the big risks of making short-term predictions – ranging from twelve to thirty-six months in the future – the game is particularly dangerous. Any futurist can predict what will come to pass in twenty years’ time, because no one will remember how wrong they were. But to make a prediction for the near term risks being revealed as a charlatan. Such predictions must be considered carefully, revealed hesitantly, and pronounced provisionally. Doing that will give you an out later on. Yet I have never been one to be either hesitant or provisional; I leap in where braver (and, arguably, wiser) souls fear to tread. My particular brand of futurism is expansive, encompassing, and uncompromisingly revolutionary. I say this not to tout my strengths, but rather, to reveal the dangers.
In the early 1990s I predicted that VR would become the standard interface metaphor for computers by the 21st century. Did I get that right? It seems not; after all, we still use windows and mice as the standard interaction paradigm, just as we did back in 1990. Yet, if we can draw anything from the recent and somewhat surprisingly successful introduction of the Nintendo Wii, it’s that VR did arrive, is pervasive, and has become a dominant interface metaphor. Just not on the computer desktop. VR isn’t about head-mounted displays, although it might have seemed so, fifteen years ago. VR is about bringing the body into contact with the simulated world. Nintendo, with its clever, cheap, attractive and highly functional Wiimote, has done just that. They’ve done what decades of other researchers and engineers failed to do: they’ve brought us into the game. So predictions might come to pass, but rarely do they come in the form imagined. But every so often, when you step up to the plate, you connect completely, and knock one out of the park.
In early December 2005 I was invited to give a plenary presentation to the Australian conference on Interaction and Entertainment Design. This was one of those rare opportunities to talk on any subject I desired. Most of the academics in the audience wanted to talk about the latest trends in gaming and online communities; having been through that, and more, a decade ago, I decided to take the conversation in an entirely different direction, by focusing on that most common of our electronic peripherals, the mobile phone.
So common as to be nearly invisible, the mobile phone has become the focal point of our social existence. Yet, despite its constant presence, the mobile seemed poorly fit to the task of being our perpetual servant. It seemed stuck in a liminal position, between the wired world and the pervasive networked environment which is the global reality of the 21st century. The mobile was broken, and needed to be fixed. Hence, working with Angus Fraser, my graduate student – who, on his own, has had years of experience developing interfaces and applications for mobile phones – I wrote “The Telephone Repair Handbook”. I started off by challenging the audience to answer three questions:
- Q: Why does a mobile phone have a keypad? We never use it.
A: Because wired phones have keypads. And so we can enter text. Badly.
- Q: How many networks are our mobile phones really connected to?
A: The answer is generally at least three: GPRS/GSM, Bluetooth and IrDA.
- Q: What are our phones doing all the time they’re idle?
A: Nothing. They’re just waiting for a phone call or a text message to make their day.
These basic failures in the design of the mobile phone, I argued, arose from our fundamental misunderstanding of the function of the device. Mobile phones are not simply passive terminals, waiting to be activated. They are (or rather, should be) active communications processors, managing the minutiae of our social relationships.
Once I’d set up the straw men, and knocked them down, I described a new kind of mobile phone, designed from the outset to be a communications servant, a nexus which tracked, facilitated and recorded all of the social interactions happening through it, or, via Bluetooth, proximal to it. And, because I can code, I demonstrated the very first version of Blue States, a small Java J2ME application which allowed mobile phones to note and record the presence of other Bluetooth devices in their immediate proximity. This information, I insisted, could become the foundation of an emergent social network. The mobile, at all times with you, or nearby, knows your social life better than you do. When exposed and analyzed, this data becomes a powerful tool. Angus and I worked up a few user scenarios to demonstrate our point: the mobile can be so much more. All it needs is the right software. I finished by encouraging this room of researchers to re-invent the mobile phone, to make it the digital social secretary, the majordomo, and the grand vizier.
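The bookkeeping at the heart of an application like Blue States is simple to sketch. The actual Bluetooth scanning on J2ME would go through JSR-82's javax.bluetooth discovery callbacks, which are device-specific; what follows is not the Blue States source but a minimal, hypothetical plain-Java illustration of the idea: tally how often each nearby device address is sighted, the raw material from which an emergent social network could be inferred.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of proximity logging, in the spirit of Blue States.
// Each Bluetooth address seen nearby is counted; the most-sighted device
// plausibly belongs to the owner's closest social tie.
public class ProximityLog {
    private final Map<String, Integer> sightings = new HashMap<>();

    // On a real J2ME handset this would be invoked from a JSR-82
    // DiscoveryListener callback; here we simply pass the address in.
    public void recordSighting(String bluetoothAddress) {
        sightings.merge(bluetoothAddress, 1, Integer::sum);
    }

    public int countFor(String bluetoothAddress) {
        return sightings.getOrDefault(bluetoothAddress, 0);
    }

    // Returns the address sighted most often, or null if nothing logged yet.
    public String strongestTie() {
        String best = null;
        int bestCount = -1;
        for (Map.Entry<String, Integer> e : sightings.entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        ProximityLog log = new ProximityLog();
        log.recordSighting("00:1A:7D:DA:71:13"); // hypothetical addresses
        log.recordSighting("00:1A:7D:DA:71:13");
        log.recordSighting("00:0C:76:2F:11:9A");
        System.out.println(log.strongestTie()); // prints 00:1A:7D:DA:71:13
    }
}
```

With a timestamp attached to each sighting, the same structure supports the richer analysis described above – who you were near, when, and how regularly.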
A month after I gave that presentation, I left my teaching position, and began coding, full-time, on Blue States, readying it for its first deployment, at ISEA San Jose. As an art project at an art festival, it might influence the creative minds of electronic artists. Perhaps they would begin to pervert their own mobile phones, transforming them into something entirely more useful.
As it turns out, I didn’t have to wait for the artists to catch up with me. For it seems that even as I was beginning my research work, more than two years ago, and formulating my theories on the future of the mobile telephone, another group of researchers set to the same task, and came to many of the same conclusions.
Yesterday, on a stage in San Francisco, Steve Jobs, CEO of the now-renamed Apple, Inc., introduced the iPhone, Apple’s much-rumored and long-awaited convergence device. Three things must be noted as essential to the design:
- It has no keyboard.
- It is connected to wireless internet, Bluetooth and GPRS/EDGE networks simultaneously, and moves between each seamlessly.
- It has a sophisticated operating system, and is constantly executing several tasks at once. It is never truly idle.
The iPhone is a combination of an iPod and a mobile telephone, and these elements have been fused together with a fingertip-based user interface to make the device nearly as tactile and natural as any familiar object. It is a mobile phone, but it has – finally and rationally – lost its vestigial connections to the wired phone. It is not simply a wireless phone; it is a network terminal, with all that implies. That it has a true operating system – instead of the “toy” operating systems of earlier mobile phones, which are cranky, and which crash all too often – means that programmers can harness the capabilities of the device wholly, taking it into directions that its creators at Apple never intended. This is not simply an iPod, or a mobile phone, but a complete redefinition of the device. This, quite simply, is the future, as I predicted it, thirteen months ago.
Will the iPhone succeed? No one yet knows. The device is both new enough and different enough that significant changes in user behavior must follow in its wake. Like the Macintosh with its Graphical User Interface, this transformation might take a decade to become the dominant interaction paradigm. Or – given the level of hype and excitement seen in the media in the last twenty-four hours – it might be the right device, at the right time. It may be that Apple has told the world not only why the telephone must be reinvented, but has shown it how it should be done. If they have, the iPhone will make the iPod look like a weak overture. Copies and clones will proliferate, skirting to the edge of every one of Apple’s two hundred iPhone patents. And people will begin to have very different expectations for their mobile phones.
While the iPhone both excites and dazzles me with its ingenuity, design and inventiveness, I am not completely satisfied with it. It is still a phone, an iPod, and an “internet communicator” rolled into one. It is not, in any true sense, wholly integrated. There is no way for my friends in San Francisco, with their iPhones, to know what my favorite songs are, or what I’m listening to at the moment, or what I’m reading on the web, or who I’m texting. It is halfway to the social device which I see as the inevitable end point. But the rest is just software. The hardware platform is there, ready and waiting, and will be disrupted by a dozen innovations that no one can yet predict. But I do predict they will happen, in the next twelve to thirty-six months.
Although Apple introduced its Video iPod at the end of 2005, this is the year when video begins to take off. Everywhere. The sheer profusion of devices which can play video – from iPods to desktop and laptop computers to Sony’s Playstation Portable, the Nintendo DS, and nearly all current-generation mobile phones – means that people will be watching more video, in more places, than ever before. You may not want to watch that episode of “Desperate Housewives” on your iPod – unless you happened to be tied up last Monday evening, and forgot to program your VCR. Then you’ll be glad you can. Sure, the picture is small and grainy, the sound’s a bit tinny, and your arms will get tired holding that screen in front of your face for an hour, but these drawbacks mean nothing to a true fan. And the true fans will lead this revolution.
We’re growing comfortable with the idea that screens are everywhere, that we can – in the time it takes to ride the train to work – get caught up on our favorite stories, the last World Cup match, and the news of the world. A generation ago it seemed odd to see someone in public wearing earphones; today it’s a matter of course. This afternoon it might seem odd to see someone staring into their mobile phone; tomorrow it will seem perfectly normal.
Now that video is everywhere, it won’t be long until the business of television moves online. Already, Apple has sold close to ten million episodes of television series like “Lost” and “The Office”. Google wants to sell you episodes of the original “Star Trek”, “The Brady Bunch” and “CSI”. For television producers it’s a win-win; they’ve already sold the episodes to broadcast networks – generally for a bit less than they cost to make – so the online sales are extra and vital dollars to cover the gap between loss and profit.
Today only a few of the hundreds of series shown in the US, UK and Australia are available for sale online. By the end of this year, most of them will be. Will the broadcast networks like this? Yes and no. It deprives them of some of the power they hold over the audience – to gather them at one place and time, eyeballs for advertisers – but it also creates new audiences: people see an episode online, and decide to tune in for the next one. That’s something we’ve already seen – “The Office”, for example, spiked upward in broadcast ratings after it was offered online. This year, there’s likely to be another breakout television hit – a new “Lost” – which starts its life online.
Once video is everywhere, once all our favorite television shows are available online for download, we’ll learn something else: there’s a lot more out there than just those shows produced for broadcast. On sites like Google Video and YouTube, you can already download tens of thousands of short- and full-length television programs. Some of them are woefully amateur productions, the kind that make you cringe in horror, but others – and there are more and more of these – are as funny and dramatic as anything you might see on broadcast television. Think TropFest – but a thousand times bigger.
Once we get used to the idea that television is something we can download, we’ll find ourselves drawn to these other, more unusual offerings. Most of this fare isn’t ready for prime-time. Much of it is only meant for a tight circle of friends and aficionados. But some of it will break through, and get audiences in the millions. It’s already happened a few times in the last year; this year it will become so common that, by the end of 2006, we’ll think nothing of it at all. This thought scares both the broadcast networks and the commercial TV producers. After all, if we’re spending our time watching something created by four kids in Goulburn, that’s time we’re not watching commercially-produced entertainment. And how do the networks compete with that?
This fundamental transformation in how we find and watch entertainment isn’t confined to video. It’s happening to all other media, simultaneously. More people listen to the podcasts of Radio National than listen to the live broadcast; more people read the Sydney Morning Herald online than read the print edition. And these are just the professional offerings. As with television, each of these media is facing a rising sea of competition – from amateurs. Apple offers tens of thousands of podcasts through its iTunes Music Store – including Radio National – on just about any topic under the sun, from the mundane to the truly bizarre. You can get “feeds” of news from Fairfax – headlines and links to online versions of the stories – but you can also get the same from any of several thousand news-oriented blogs. Click a few buttons and the news is automatically downloaded to your computer, every half hour.
As it gets easier and easier for us to choose exactly what we want to watch, hear and read, the commercial and national broadcasters find themselves facing the “death of a thousand cuts.” Every pair of ears listening to a podcast is an audience member who won’t show up in the ratings. Every subscriber to an “amateur” news feed is a subscriber lost to a newspaper. And this trend is just beginning. In another decade, we’ll wonder how we lived without all this choice.
Choice is a beautiful thing. We define ourselves by the choices we make: what we do, who we know, what we fill our leisure time with. Now that our media is everywhere, available from everyone, any hour of the day or night, we’re going to find ourselves confronted by an unexpected problem: rather than trying to decide what to watch on five terrestrial broadcast channels – or fifty cable channels – we’ll have to pick from an ocean of a million different programs; even if most of them aren’t all that appealing, at least a few thousand will be, at any point in time.
That kind of choice will make us all a little bit crazy, because we’ll always be wondering if, just now, something better isn’t out there, waiting for us to download it. Like the channel surfer who sits, remote in hand, flipping through the channels, hoping for something to catch his eye, we’re going to be flipping through hundreds of thousands and then millions of choices of things to watch, hear and read. We’re going to be drowning in possibilities. And the pressure – to keep up, to be informed, to be on the tip – is about to create the most savvy generation of media consumers the world has ever seen.
We’re drowning in choice, but, because of that, we’ll figure out how to share what we know about what’s good. We already receive lots of email from friends and family with links to the best things they’ve found online. That’s going to continue, and accelerate; our circles of friends are becoming our TV programmers, our radio DJs, our newspaper editors, and we’ll return the favor. The media of the 21st century are created by us, edited by us, and broadcast by us. That’s a deep change, and a permanent one.
Consider the lowly VCR. Once the king of the consumer electronics roost, it has fallen so far that the Japanese giant Matsushita has stopped manufacturing VCRs in favor of DVD players, and most people have stopped buying them unless they come combined with a DVD player. I haven’t bought one in Australia, even though I need one for work – I am regularly given video briefs for review, inventions to be presented on THE NEW INVENTORS. But somehow I can’t bring myself to spend the $100 on a VCR. Is that because I’m cheap? Hardly. It’s because I think VCRs suck – and I’m sure most of you would agree. They’re low-resolution, finicky, and nearly impossible to program. Yet, despite all these obvious drawbacks, VCRs changed the world.
In the time before the VCR, the television set was nothing more than a radio-wave tuner connected to an analog monitor. The television could only show programs as they were broadcast. Nothing else. Suddenly, the VCR enabled people to record broadcasts for later playback, or play pre-recorded cassettes. The VCR introduced the concept of “time-shifting” (though that term didn’t emerge until quite recently), and freed the audience from the hegemony of the broadcaster. This was such a catastrophic change that court battles were fought over it: in the Sony “Betamax” decision, the United States Supreme Court ruled that the VCR could be sold legally, holding that time-shifting a television program constituted fair use – though here in Australia, time-shifting remains a violation of copyright.
While time-shifting moved power away from the broadcasters and into the audience, it also created a huge market for pre-recorded entertainment. Theatrical release provided 100 percent of studio revenues in 1954; by 2004, that figure was down to 15 percent. It seems that audience choice is good economics: when you empower audiences to choose when and how they watch, you dramatically increase the overall market.
By the late-1980s, as the studios saw incredible revenues flow in from pre-recorded videocassettes, they got together to promote a format which would have all of the advantages of the VCR, with none of its disadvantages. This format would provide a near-cinema-quality experience, but would be a read-only format. Consumers would be given greater choice, but only from a pre-produced collection of offerings. DVD, like the VCR before it, has become one of the biggest success stories in consumer electronics. At least 75% of all households in Australia have at least one DVD player, and they’re now standard equipment on nearly all personal computers. The studios earn more – often far more – from DVD sales than from the theatrical release of their motion pictures. The DVD has driven the VCR out of the living room, just as the CD player made the turntable obsolete fifteen years ago.
Nothing comes for free. The qualities that made the VCR, and the vinyl album before it, so annoying (noise, scratches, and just entropy in general) are the same qualities which made them “safe” media, so far as copyright protection was concerned. When the music industry transitioned from waves to bits, it unknowingly unleashed the engine of its own destruction. Waves are difficult to copy faithfully; every copy introduces noise and distortion. Bits can be copied perfectly every single time. Bits can be compressed and distributed at the speed of light. When digital music met the Web back in 1993 – at the Internet Underground Music Archive, a small site run out of the University of California, Santa Cruz – everything changed. Suddenly, anyone, anywhere could publish or download music. The combination of digital music plus the World Wide Web produced a resonance of sorts, a “sweet spot” which initiated a transformation that continues to this day, with over 42 million iPods and countless other digital music devices. Within this transformation there are countless secondary sweet spots – such as the iPod itself, and Apple’s iTunes Music Store – moments where technology and design meet in glorious union, producing prodigious amounts of heat and light. Like a spark to petrol, when design meets capability, the results can be explosive.
Like the music industry before them, the studios are confronting the cost of their transition from waves to bits. A DVD provides four times the picture quality of a VHS recording, together with 5.1 surround sound. It performs this magic by encoding a very high-bandwidth video signal into a relatively low-bandwidth data stream. This was high magic back in 1991, when the MPEG-2 standard was developed. Now it’s old tech. You can now squeeze a two-hour movie into one-tenth the space, with no loss in quality. And that has changed everything about how we use video.
The first folks to realize this were a group of engineers who’d broken away from Silicon Graphics after working on Time-Warner’s Full Service Network, better known as “The Orlando Project.” This test bed (in Orlando, Florida) wired 1500 homes to very high-speed cable modems, and each home connected to the service through its own $60,000 Silicon Graphics workstation. The goal of The Orlando Project was to develop the future of video delivery – in other words, the system which would replace the analog cable systems which had by then fully penetrated the US market. Years ahead in interface design, The Orlando Project fully employed the 3D capabilities of the SGI workstation to create something known as “The Carousel,” which allowed home users to select from about 500 different offerings. (At the time, this was an order of magnitude more than any competitive offering.) The design of The Carousel – spearheaded by Dale Herigstad, who would go on to design the interface for Microsoft’s Media Center, and its Xbox 360 – attempted to guide the user through a bewildering set of video selections in a straightforward manner. While consumers liked The Carousel, Time-Warner cancelled the project to focus on other, less costly digital cable ventures. The engineers at Silicon Graphics, intrigued by what they’d started, soon left to form their own company.
In 1999 the Full Service Network bore unexpected fruit. TiVO, the company founded by those refugees from SGI, introduced its first “personal video recorder.” The idea of recording video to a hard drive for later playback was not new; electronic program guides had been used by cable companies for years. Yet, when these two technologies were integrated around an exceptionally well-designed user interface, another resonance struck, and a sweet spot appeared, one which is utterly transforming the way we think of video. People who could never hope to program a VCR have bought TiVOs in droves, recording all their favorite programs, and watching, on average, 60% more television than individuals who don’t have TiVOs. However, TiVO makes it exceptionally easy to fast-forward through commercial breaks, which is a plus for the audience, but a big concern to the broadcaster. By 2009, there’ll be at least a 30% drop-off in eyeballs watching TV advertisements, all because of TiVO and its many imitators. But the “TiVO effect” is far more profound. TiVO has disconnected any relationship between the network and the audience. The audience is watching a personalized stream of programming, one which bears no fundamental relationship to its source.
I discovered this TiVO effect when one of my friends – who has owned a TiVO for five years – recommended that I watch Making the Band: INXS. I asked him what network it was on. He thought for a long moment, and then said, “I have no idea.” After such a long period of time with TiVO, the ideas of broadcaster and programming have dissociated; it’s all just programs, on his TiVO. TiVO has become the broadcaster.
This transformation in audience behavior wrought by TiVO points up an essential relationship between design and technology: where they meet in harmony, they produce a new medium. TiVO is the medium, and “the medium is the message.” TiVO has fundamentally changed the relationship between audience and programming; now that TiVOs are broadband-connected, they don’t even need television receivers. TiVOs could download programming directly from the Internet, or take recorded programs, and transmit them to anywhere on the Internet. The latest of TiVO’s competitors, the Slingbox, does this perfectly. I can connect a Slingbox at home in Surry Hills and watch any programming it has recorded, anywhere in the world. Not only have I disconnected the programming from the broadcaster, I’ve cut the cord to my television set. Now my television is anywhere I might be.
Still, TiVO and Slingbox have clung to the idea that there is a content source – that is, the television broadcaster – and an audience hungry for that content. That’s no longer true. With the recent advent of the Video iPod, the iTunes Video Store, Google Video, YouTube, and the ever-growing influence of peer-to-peer file-sharing networks, the balance of content is shifting away from the broadcasters to the “peer-productions” of the audience.
This is the revolution that’s waiting to happen. Right now there is no easy way for your average television viewer to find and view the enormous range of content that’s out on the Internet. File-sharing networks are either illegal, dangerous or too difficult for the average audience member to master. Google Video and YouTube must be viewed on a computer. None of the pieces fit together. Yet. And although the Video iPod can be plugged into a television set, very few people do it. It’s still too clumsy.
There is a resonance here, something that’s just on the cusp of happening. Someone (and it could well be Apple) will find a way to tie the television into the Internet meaningfully, formally breaking the bond between the television-as-radio-receiver and television-as-output-device. When that happens, the meaning of television channels and broadcasters will begin to fade into insignificance. We’ll still watch broadcasts of live events – such as news or sport – but otherwise our televisions will be portals into the ever-increasing supply of peer-produced programming. All we need to do is locate the sweet spot, the harmonious meeting point between design and technology.
It’s widely believed that technology is not informed by design disciplines. Nothing could be further from the truth. Without design, technology remains locked into a culture of expertise. Design-led technologies – such as TiVO and the iPod – transform our expectations and our behavior. Technology alone cannot do that. It hasn’t the capability. We need to adjust our thinking. Design is not the handmaiden of technology. It’s the other way around. Design must be in the driver’s seat. Without the resonance which brings mind and hand together meaningfully, all we’ll ever have is unrealized potential. When design drives technology, when we assert that human needs trump raw capability, we create the artifacts which change the world.