My brief keynote to the ICT Roundtable of the TAFE Sydney Institute. Recorded on Wednesday, 13 August 2008. Many thanks to Trish James and Stephan Ridgway for arranging the audio recording!
I. Get Off My Lawn!
To say that we’re living in a time of accelerated change is a truism. What we forget – because it would scare the hell out of us – is exactly how much change we’ve seen. I moved to Australia 4 ½ years ago. When I got here there was no YouTube, no podcasting, no BitTorrent, no Wikipedia (in a practical sense). And no MySpace, Facebook, Bebo, or Twitter.
These are things that I, in my daily life, take for granted. But they’re absolutely brand new. I’m not quite sure how we manage to fool ourselves into believing this is all perfectly normal.
Of course, there is one group of people for whom this is perfectly normal, because they’ve never known anything else – those wacky kids. Consider: I had my very first tour of the World Wide Web at SIGGRAPH, the big computer graphics conference, in Anaheim, California, back in July, 1993. I moused around the recently-released NCSA Mosaic on a hundred-thousand dollar graphics workstation.
I already knew what hypertext was. I had already written a Macintosh-based hypertext system, just before Hypercard made it completely irrelevant. I knew what I was looking at. And I wasn’t very impressed. Sure, it was hypertext, but there were only a handful of sites to visit.
Eventually, the penny dropped. A few months later I bought a used, huge and heavy SPARCstation, set it up in my lounge room, strung a phone cable across my flat, SLIPped into the Internet, launched NCSA Mosaic, and started surfing. Every night when I got home from work, I surfed some more. And, at the end of that very enjoyable week in mid-October 1993, I was done. I had surfed the entire Web.
If you said that today – that you’d surfed the entire web of a hundred million discrete domains and a hundred and fifty million individual blogs and – who knows? – maybe twenty billion pages – people would either believe you a liar or mad as a cut snake.
And yet, a child, born in July 1993, when I first clicked on a Web link, would just be coming up to her 15th birthday. Probably in the middle of year 10.
For that fifteen year-old, change is the only constant she’s known. All the world has changed. All of culture and human behavior have changed – in some ways we are unrecognizable. Because we are embedded in this change, we only feel the acceleration. To someone whose baseline experience, their entire lifetime, has been this continuous acceleration, there is no sensation at all.
People talk about digital immigrants and digital natives. But that’s too simple. It’s an injustice to the truth of the matter – a truth which is important for us to understand.
In the late 1990s, I’d gotten a sense of what was going on, and wrote a book to usher parents into the world of their children: The Playful World: How Technology is Transforming Our Imagination took a look at three areas – intelligence, activity and presence. For each of these domains of human experience, I selected a toy – the Furby, Lego’s MINDSTORMS, and the Sony PlayStation2, respectively – as the starting point for an explanation of this startling shift in the inner lives of children.
Let me be clear: I am a strict Constructivist. I believe that children learn through interactions with their environment. I had come to realize that the environment for a child born at the turn of the millennium looked nothing like the world of 1962, the year I was born.
The world is intelligent. The world responds. The world allows us to extend our senses globally – as with Google Earth – or down to the nanoscopic. All of this – all of it – was already showing up in children’s toys! Not in fancy labs, but in toys. And it’s still going on today. The Nintendo Wii is a better bit of virtual reality than anything ever created by NASA.
How can a kid who plays tennis with a virtual racket, or bowls with a virtual ball, ever hope to have the same cognitive relationship to the world of things that I do?
We’re not even on the same planet.
And we forget this. Or rather, we refuse to see it. But we can’t avoid it any longer, because all this tech has turned this sub-15 generation into mutants with strange new powers.
Let’s come back to that 15 year-old, who, of course, owns a mobile phone. What does she do with it? Those of you with teenage children already know the answer: she texts. Continuously.
Mizuko Ito, a Japanese researcher, studied teenagers in Japan a few years ago, and found that these kids – from the moment they wake up in the morning, until they drop off to sleep at night – are engaged in a continuous and mostly trivial conversation with, on average, five other friends. They might be in the flat next door, or on the other side of Tokyo. Proximity doesn’t matter. What does matter is the constant connection. Ito named this phenomenon “co-presence”. It seemed a bit too science-fiction wacky-technophile Japanese, at the time.
Today, it’s the standard operating procedure for all teenagers everywhere in the developed world.
That typical 15 year-old will blow her prepay budget on texts, up to a hundred a day – which works out to about 6 every waking hour – and then, as the credit runs out, and the flow of messages stops, friends will check MySpace, where the 15 year-old has gone, to message for free, and so the flow of co-presence continues.
In some ways, this looks like a new thing, but in reality, it isn’t. It’s an old thing – a very old thing – expressed in an entirely new way.
All of this comes down to what we really are: social animals. That means we live to communicate, and we appear to be better at communication than any other species on the planet.
What we’ve done is given those wacky kids the tools to free this communication, so that it is no longer bound in space and time. We’ve accelerated communication to the speed of light. And all of this is perfectly natural to them.
This much we know.
It’s the unintended, unexpected, unpredictable consequences of all this “hyperconnectivity” which are really putting the screws to us. This is the new stuff. The things that are coming at us from our blind spot.
Consider: 11 December 2005, Cronulla Beach, and every Anglo-Celtic White Supremacist in New South Wales has a text message in hand, forwarded from fellow traveler to fellow traveler, asking them to lend a hand in the beat-down of the Lebs.
That’s what happens when you connect everyone together.
Consider: Also in December 2005, Nature published a peer-reviewed article which stated that Wikipedia, the peer-produced encyclopedia made possible by the fact that half a billion people can connect to it and contribute to it (and, through it, to each other’s thoughts and expertise), is very nearly as accurate as that gold-standard reference work, Encyclopedia Britannica.
Twist the dials one way, you get Cronulla. Twist them another, and you get Wikipedia.
And we’ve given those wacky kids the dial.
II: Those Well-Meaning Adults
These “hyperconnected” and ever-more wacky kids get up in the morning, put on their uniforms and go to school. When they get there, they’ve got to turn off their mobiles, put away their iPods, close the chat windows, unplug themselves from the webs of co-presence which shape their social experiences, sit still and listen to teacher.
And they’ve got to do this inside of an environment – the classroom – which is so thoroughly disconnected from the rest of life as they have always known it that it must, deep in their co-present souls, resemble nothing so much as a medieval torture chamber. An isolation tank. Solitary confinement.
It’s not just that school is a pain in the ass. It’s that it looks – to them – like a completely unrealistic pain in the ass, one which is out of step with the world beyond the classroom walls. It’s as if, every morning, these kids are marched into a time machine which transports them back to 1955.
It’s always important to recognize the hidden elements in any curriculum. The modern school was created not only to produce a literate workforce, but one which understood schedules and timelines, essential elements in the industrial era. Bells and periods trained students in the implicit curriculum. They learned to be timely and orderly, while they explicitly learned their letters and numbers. These curricula – explicit and implicit – fit the needs of the industrial age, and so were highly successful.
So, kids today, stripped of their hyperconnectivity as they walk through the school house door, learn that while timeliness is important, the ability to communicate, to collaborate, share and participate – across time and distance – are not. Oh, we can have practice exercises and whatnot which help to encourage those capabilities, but the hidden curriculum of our schools implicitly denies the value of this experience – the greater part of life experience for those wacky kids.
The trouble with this state of affairs is that it directly contradicts the world these kids have always lived in. In the industrial age, children saw their fathers leave home in time for the morning shift, and return home when that shift was completed. Their experience of regimented time within school perfectly agreed with life at home.
These days, those two worlds have almost nothing in common. Parents work flextime, they telecommute, work all hours of the day or night, across nations, across time zones, across disciplines. Work has changed. Home life has changed. School has not.
This is a very dangerous state of affairs, because in this subtle and invisible argument between school and life as it is really lived, life is always going to win.
What this means, in a practical sense, is that students have lost respect for the classroom, because it has no relevance to their lives. Yes, they will be polite – as they’re polite to their grandparents – but that is no substitute for a real working relationship. School will be endured, because parents and state mandate it. But it’s a waiting game.
This is not the right way to create the next generation of Australia’s leaders. This is only going to create a generation who have learned to be patient, to patronize, and to excel in the art of ass-kissing.
Australia is not alone in this. In the United States, the No Child Left Behind program, the very epitome of Industrial Age methodology, simply subjects students to assessment after meaningless assessment. Students train for tests. There are no more exploratory moments. Learning is by rote. Asia and Europe fare no better. Everywhere, everything is exactly the same — and exactly wrong.
What, then, is to be done?
It’s not as though educators and educational administrators are entirely unaware of this increasing desynchronization between the classroom and the world beyond it. Far from it. Just like those wacky kids, they live in both of these worlds, and they sense that the classroom has become an antique, a museum piece.
But they don’t know what to do about it.
That’s not to say they’re not casting about for solutions. They are. The plan to get computers into secondary school classrooms throughout Australia is such an attempt. But no one has thought through what these computers will be used for, once they arrive on students’ desks. The Prime Minister, during the election campaign, uttered a few lines about maths drills and language exercises.
Kevin Rudd should probably have sat and watched his own 14 year-old son going online, playing games on his Xbox, or texting his mates, to get a sense of the real value of all this hyperconnected technology.
Instead, Rudd relied on the opinions of educational experts, individuals who likely got their post-graduate degrees before there was a World Wide Web.
Hence: give the schools computers, but make them so dull, so meaningless, that the students are guaranteed to recoil in horror.
I have a better idea. Perhaps a school in Queensland can link up with a school in France, so that students learning English in France and students learning French in Australia can talk with each other, in foreign tongues. There are plenty of cheap technologies, like Skype and iChat AV, which can be used for that sort of thing.
Or, how about this: students in Victoria learning about the Eureka Stockade Rebellion might focus on a particular participant, and build a fully-researched and peer-reviewed article for Wikipedia. Teachers can go in and look at the history and discussion pages associated with the article to assess their students’ progress.
This isn’t about computers, folks. It’s about what we use computers for. And it’s about an educational administration that does not recognize that the computer, at its very best, is a window that opens up to other people. It is not a robot that drills students into submission.
All of this is light-years away from any curriculum in practice today. Yes, there are experiments – a few brave teachers and administrators sticking their necks out, tall poppies trying to make their classrooms relevant to the world outside. But these are just experiments.
Teachers are already so overworked, so time-poor, and, sometimes, so hide-bound, that technology is too frequently seen as a disruption. Actually, it’s the classroom that’s the disruption. What they see as a disruption is the outside world, clamoring to be let in.
The situation is bound to get worse before it gets better. The tabloid media are full of frightening stories of those wacky kids, inviting all their Facebook mates to come by and party, or MySpace suicide pacts, or cyber-bullying on YouTube.
And I say this knowing full well that I’m one of the pushers.
Although the schools need this technology, this window opening onto the real world, it is, at the same time, a profound threat to the comfortable, tried-and-true ways of doing business. When the computer salesman knocks on the door, they hear the rising winds of a storm that threatens to blow the classroom walls away.
So, something that should be an absolute no-brainer is turning out to be a very hard sell. People – teachers, administrators, parents and politicians – are afraid. When people are afraid, psychologists tell us, they put off making important decisions. They postpone change.
Those well-meaning adults, who really only want to get Australia’s next generation ready for a world that looks nothing like what they expected, are frozen in place, like Bambi in the headlights.
This will not do. It will not do for the kids. It will not do for the nation. And it will not do for you.
III. Breaking Through
Now, truth be told, I’m preaching to the converted. The reason you’re here in this room this morning listening to me rant and rave about those wacky kids and those well-meaning adults is because you want to be part of the solution. You’re voting with your feet. You understand that it’s important we do something – and do it quickly.
But we’re the mutants. We’re the ones who are out-of-step with the educational establishments in the states and the Commonwealth. We watch, with mixed degrees of amusement and horror, as the educational machinery shudders along, even as it groans under the increasing weight of the world outside. And we start to wonder – seriously – when it will all just collapse.
No one likes to set a deadline on these sorts of things; all deadlines inevitably fail. But I’d say that if we aren’t well on our way to transforming education within the next few years, the tide of the times could simply whip past us, and leave the educational establishment in a backwater, an eddy, while the rest of culture and civilization zips away downstream.
But, even as I say that, I reckon such an outcome to be very unlikely. There’s too much pressure, coming from too many points, for education to get off that easily. It’s too important to be ignored or cast aside. Instead, the pressure will continue to rise, as the most extraordinary and unexpected things begin to happen. In fact, this is already happening.
I’d like to tell you a story about my colleague Stephen Collins, who lives and works down in Canberra. His story is a good example of how things are changing so quickly and so unexpectedly. But, before I tell you his story, I need to tell you the story of how I know Stephen Collins, because that will tell you something about just how fast things are moving right now.
Last year I signed up for a new Web service known as “Twitter”. Twitter bills itself as a “social message service” – sort of a cross between a social network (like Facebook or MySpace) and the short message service (or SMS) that we’re all completely familiar with. When I signed up to Twitter, I could elect to “follow” certain other people – that is, my friends, and colleagues, and so forth. Whenever any of these people sends a “tweet” – that is, a 140-character message – I receive it, as do all of their followers. I might receive that tweet via the Twitter website, or one of the growing number of Twitter programs, or I can even have it delivered via SMS to my mobile.
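For the programmers in the room, the follow-and-fan-out model just described can be sketched in a few lines. This is purely illustrative – a toy model of the idea, not Twitter’s actual API or architecture, and all the names are made up:

```python
# Toy sketch of a Twitter-style follow/fan-out message service.
# Illustrative only -- not Twitter's real API.

class User:
    def __init__(self, name):
        self.name = name
        self.followers = []   # users who receive this user's tweets
        self.timeline = []    # tweets received from people this user follows

    def follow(self, other):
        """Start receiving `other`'s tweets."""
        other.followers.append(self)

    def tweet(self, text):
        """Broadcast a short message to every follower."""
        if len(text) > 140:
            raise ValueError("tweets are limited to 140 characters")
        for follower in self.followers:
            follower.timeline.append((self.name, text))

alice = User("alice")
bob = User("bob")
carol = User("carol")
bob.follow(alice)
carol.follow(alice)
alice.tweet("Earthquake felt here just now")
print(bob.timeline)   # both followers receive the same message
```

The essential point is in the loop: one message, written once, is delivered to every follower at once. That’s what makes a tweet different from an SMS.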
I didn’t use Twitter very much for the first several months; there weren’t that many people using it, and weren’t that many folks to follow. So I ignored it. But, just in the last six months, a lot of people in Australia have discovered Twitter – particularly those folks who, like myself, are interested in what’s up-and-coming on the Web. Nearly all of those folks use Twitter these days, and most of them follow one another. I quickly got swept up into this madness, and am now very well “hyperconnected” with a few hundred core Twitter users in Sydney and throughout the nation.
The vast majority of tweets range from the minor to the inane. It’s like cocktail party chatter – often funny, but just as often, meaningless. But, once in a while – and more frequently, these days – there’s a point to all this incessant tweeting. For example, the Sichuan earthquake of Monday 12 May was reported by Twitterers in China a full thirty minutes before it made its way into the media. The folks who felt the temblor reported and shared their reports. Through Twitter, I knew about the earthquake an hour before most other Australians knew anything about it. In that moment of tragedy, Twitter became a human early warning system.
Over the next 24 hours, I closely followed the tweets of Dedric Lam, who lives in Shanghai, and who acted as a clearing house for a wide range of news reports, articles and videos about the earthquake. As major news organizations struggled to place reporters in the earthquake zone, I received a more consistent and more consistently accurate stream of news, directly from the people affected by the earthquake, via Twitter.
That’s interesting, and more important, it was completely unexpected. The folks who created Twitter thought they were creating a “microblogging” service – something where you’d be able to post short updates about your day. What we’ve turned it into – as we learn what it’s good for – is something completely different. Science fiction writer William Gibson once wrote, “The street finds its own use for things, uses the manufacturers never intended.” Twitter is a true street technology, and every day each of its two hundred thousand core users finds new ways to put its hyperconnective capabilities to work.
Twitter is how I came to know Stephen Collins. Stephen is one of the core Twitter users in Australia, a consultant and power user of “social media”, as we’re now starting to call all these technologies of hyperconnectivity. He’s been tweeting for a year, and has used Twitter to both extend and reinforce his commercial and personal connections. I came to know of him soon after I got sucked into the Australian “Twitterati”, and followed him, for he’s an individual who frequently makes keen observations.
On the same Monday that the Sichuan earthquake occurred, Stephen came to Sydney for the day, to speak at Interesting South, a local lecture series. Through Twitter, we arranged an afternoon coffee in the Strand Arcade, and chatted away amiably enough, griping about how people just aren’t “getting” social media.
Then he related an interesting story.
Stephen sends his 10 year-old daughter to Canberra’s St. Clare of Assisi Primary School, where she gets “the best education I can afford to give her,” as he wryly puts it. St. Clare of Assisi Primary is a big school – the largest private primary school in the ACT, at 730 kids. Never the passive parent, Stephen has grown progressively more involved in the affairs of St. Clare of Assisi, and found himself, this January – almost inadvertently – elected to the position of Secretary to the Board of the school.
Gah, you must be thinking: what a thankless task. Sit there and take notes at all the meetings. Dull as. And so it would have been, were Stephen a less inventive sort. Instead, during his first meeting, he had a penny-drop moment: rather than just writing up all these notes and sending out a sheaf of emails, he could type all of this information into a ‘wiki’ – that is, a user-editable website, and the technological basis for Wikipedia – so that everyone on the board could have access to his notes, make additional notes, start wiki entries on their own topics, and so on.
Wikis go hand-in-hand with hyperconnectivity: once we’re all connected together in a few dozen or few hundred million ways, we need someplace to pool our common wealth of resources, information, knowledge, and experience. Wikipedia is proof positive that everyone, everywhere, is expert in something, even something terrifically obscure – and it’s proof that someone else, somewhere else, will treasure that expertise.
When the administrators saw the PBwiki that Stephen set up, they were amazed and delighted. All of the hard yards of coordinating via emails could now be handled through a collaborative process, with a common tool accessible anywhere Internet connectivity could be had. “So now,” said the Head of School, “can we bring the staff up to speed on this? And the teachers? Can we get them to start planning their courses on this? And get the parents more involved? And what about the kids – can they use this too?”
With just one simple act – and really, an act that saved him work – Stephen introduced a new way of thinking and working to Canberra’s largest private primary school. It’s early days yet, but as they come to learn to use the wiki, discovering its strengths and weaknesses, it will begin to transform the way they teach. It is opening the way to a broader and more comprehensive revolution in education. This “accidental revolution” is a clear sign that the ground is fertile. Things are breaking through all over. All it takes is one person, in the right place, at the right time, with the right idea.
Which brings me back to all of you, here in this room, this morning. We are the change agents. All of us. We don’t have to leave here today with grand plans. Far from it. All we need to do is share with one another what we’ve learned along the way: what’s worked, what hasn’t, and why. We need to connect with one another – using all the tools at our disposal (and there’s a lot of them), and we need to put the new tools of knowledge sharing to work for us, pooling our own deep reservoirs of expertise, learning from each other as effectively as we can. If each of us can add one good idea – and I reckon each of us has at least one good idea – that means there are a lot of good ideas in this room. Just one of those can change the educational environment of a school. Stephen Collins’ story is proof of that.
For the rest of the day, I’m going to sit back and listen. Hard. I’m going to listen to all of the good ideas you folks have been working up as we all confront this huge challenge. When I hear an idea that strikes me, I’ll be blogging it – on Twitter. At the end of these four events, I’ll be able to go back and read my tweetstream, and see what really interested me. Perhaps it will interest you too. All the while, 660 other folks, all around the world, will be looking in. Some of them might get a good idea, something they want to share with us. We can and must use hyperconnectivity to increase our effectiveness. We can and must use knowledge sharing to increase our intelligence. We can crack this problem.
After all, we’ve been around the block. These wacky kids, they’re just getting started. They have the tools, but lack the wisdom to use them effectively. It’s up to us to teach them how. But first, we’ve got to learn how to use them. That done, we can transform education, and transform their enormous capacity to learn. But, right now, the teachers must become students.
I’m waiting, with my pencil raised.
Sydney looks very little different from the city of Gough Whitlam’s day. Although almost forty years have passed, we see most of the same concrete monstrosities at the Big End of town, the same terrace houses in Surry Hills and Paddington, the same mile-after-mile of brick dwellings in the outer suburbs. Sydney has grown a bit around the edges, bumping up against the natural frontiers of our national parks, but, to a time-traveler, most things would appear much the same.
That said, the life of the city is completely different. This is not because a different generation of Australians, from all corners of the world, inhabit the city. Rather, the city has acquired a rich inner life, an interiority which, though invisible to the eye, has become entirely pervasive, and completely dominates our perceptions. We walk the streets of the city, but we swim through an invisible ether of information. Just a decade ago we might have been said to have jumped through puddles of data, hopping from one to another as a five year-old might in a summer rainstorm. But the levels have constantly risen, in a curious echo of global warming, until, today, we must swim hard to stay afloat.
The individuals in our present-day Sydney stride the streets with divided attention, one eye scanning the scene before them, the other trained on a mobile phone: sending a text, returning a call, using the GPS satellites to locate an address. Where, four decades ago, we might have kept a wary eye on passers-by, today we focus our attentions into the palms of our hands, playing with our toys. The least significant of these toys are the stand-alone entertainment devices; the iPods and their ilk, which provide a continuous soundtrack for our lives, and which insulate us from the undesired interruptions of the city. These are pleasant, but unimportant.
The devices which allow us to peer into and sail the etheric sea of data which surrounds us, these are the important toys. It’s already become an accepted fact that a man leaves the house with three things in his possession: his wallet, his keys, and his mobile. I have a particular pat-down I practice as the door to my flat closes behind me, a ritual of reassurance that tells me that yes, I am truly ready for the world. This behavioral transformation was already well underway when I first visited Sydney in 1997, and learned, from my friends’ actions, that mobile phones acted as a social lubricant. Dates could be made, rescheduled, or broken on the fly, effortlessly, without the painful social costs associated with standing someone up.
This was not a unique moment; it was simply the first in an ever-increasing series of transformations of human behavior, as the social accelerator of continuous communication became a broadly-accepted feature of civilization. The transition to frictionless social intercourse was quickly followed by a series of innovations which removed much of the friction from business and government. As individuals we must work with institutions and bureaucracies, but we have more ways to reach into them – and they, into us – than ever before. Businesses, in particular, realized that they could achieve both productivity gains and cost savings by leveraging the new facilities of communication. This relationship between commerce and the consumer produced an accelerating set of feedbacks which translated the very physical world of commerce into an enormous virtual edifice, one which sought every possible advantage of virtualization, striving to reach its customers through every conceivable mechanism.
Now, as we head into the winter of 2008, we live in a world where a seemingly stable physical environment is entirely overlaid and overweighed by a virtual world of connection and communication. The physical world has, in large part, lost its significance. It’s not that we’ve turned away from the physical world, but rather, that the meaning of the physical world is now derived from our interactions within the virtual world. The conversations we have, between ourselves and with the institutions which serve us, frame the world around us. A bank is no longer an imposing edifice with marble columns, but an EFTPOS swipe or a statement displayed in a web browser. The city is no longer streets and buildings, but flows of people and information, each invisibly connected through pervasive wireless networks.
It is already a wireless world. That battle was fought and won years ago; truly, before anyone knew the battle had been joined, it was effectively over. We are as wedded to this world as to the physical world – perhaps even more so. The frontlines of development no longer concern themselves with the deployment of wireless communications, but rather with their increasing utility.
Utility has a value. How much is it worth to me to be able to tell a mate that I’m delayed in traffic and can’t make dinner on time? Is it worth a fifty-cent voice call, or a twenty-five cent text (which may go through several iterations, and, in the end, cost me more)? Clearly it is; we are willing to pay a steep price to keep our social relationships on an even keel. What about our business relationships? How much is it worth to be able to take a look at the sales brochure for a store before we enter it? How much is it worth to find it on a map, or get directions from where we are? How much is it worth to send an absolutely vital email to a business client?
These are the economics that have ruled the tariff structures of wireless communications, both here in Australia and in the rest of the world. Bandwidth, commonly thought of as a limited resource, must be paid for. Infrastructure must be paid for. Shareholders must receive a fair return on their investments. All of these points, while valid, do not tell the whole story. The tariff structure acts as a barrier to communication, a barrier which can only be crossed if the perceived value is greater than the costs incurred. In the situations outlined above, this is often the case, and is thus the basis for the wireless telecoms industry. But there are other economics at work, and these economics dictate a revision to this monolithic ordering of business affairs.
Chris Anderson, the editor of WIRED magazine, has been writing a series of essays in preparation for the publication of his next book, Free: Why $0.00 is the Future of Business. In his first essay – published in WIRED magazine, of course – Anderson takes a look at Moore’s Law, which predicts that the cost of a transistor halves roughly every eighteen months, a rule that has held ever since Intel co-founder Gordon Moore first observed the trend, back in 1965. Somewhere around 1973, Anderson notes, Carver Mead, the father of VLSI, realized that individual transistors were becoming so small and so cheap as to be essentially free. Yes, in aggregates of hundreds of millions, transistors cost a few tens of dollars. But at the level of single circuits, these transistors are free, and can be “wasted” to provide some additional functionality at essentially zero additional cost. When, toward the end of the 1970s, the semiconductor industry embraced Mead’s design methodology, the silicon revolution began in earnest, powered by ever-cheaper transistors that could, as far as the designer was concerned, be considered entirely expendable.
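It’s worth pausing on how quickly that halving compounds. A back-of-envelope calculation, using the eighteen-month rule (the starting cost is purely illustrative), shows cost falling by a factor of about a hundred every decade:

```python
# Back-of-envelope illustration of the Moore's Law cost curve:
# cost per transistor halves every eighteen months.
# The initial cost of 1.0 is an arbitrary, illustrative unit.

def cost_after(years, initial_cost=1.0, halving_period_years=1.5):
    """Cost per transistor after `years` of eighteen-month halvings."""
    return initial_cost * 0.5 ** (years / halving_period_years)

# Over ten years, cost falls by roughly two orders of magnitude:
factor = cost_after(0) / cost_after(10)
print(round(factor))  # prints 102
```

Run the same calculation over forty years and the factor exceeds a hundred million, which is the arithmetic behind Mead’s insight: at that price, a single transistor is, for all practical purposes, free.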
Google has followed a similar approach to profitability. Pouring hundreds of millions of dollars into a distributed, networked architecture which crawls and indexes the Web, Google provides its search engine for free, in the now-substantiated belief that something made freely available can still generate a very decent profit. Google designed its own cheap computers and its own cheap operating system, and fitted them into its own expensive data centers, linked together with relatively inexpensive bandwidth. Yahoo! and Microsoft – and Baidu and Facebook and MySpace – have followed similar paths to profitability. Make it free, and make money.
This seems counterintuitive, but herein is the difference between the physical and virtual worlds; the virtual world, insubstantial and pervasive, has its own economies of scale, which function very differently from the physical world. In the virtual world, the more a resource is shared, the more valuable it becomes, so ubiquity is the pathway to profitability.
We do not think of bandwidth as a virtual resource, one that can simply be burned. In Australia, we think of bandwidth as being an expensive and scarce resource. This is not true, and has never been particularly true. Over the time I’ve lived in this country (four and a half years) I’ve paid the same fixed amount for my internet bandwidth, yet today I have roughly six times the bandwidth, and seven times the download cap. Bandwidth is following the same curve as the transistor, because the cost of bandwidth is directly correlated to the cost of transistors.
Last year I upgraded to a 3G mobile handset, the Nokia N95, and immediately moved from GPRS speeds to HSDPA speeds – roughly 100x faster – yet I am still paying the same monthly amount for my mobile. I know that some Australian telcos see Vodafone’s tariff policy as sheer lunacy. But I reckon that Vodafone understands the economics of bandwidth. Vodafone understands that bandwidth is becoming free; the only way they can continue to benefit from my custom is if they continuously upgrade my service – just like my ISP.
Telco tariffs are predicated on the basic idea that spectrum is a limited resource. But spectrum is not a limited resource. Allocations are limited, yes, and licensed from the regulatory authorities for many millions of dollars a year. But spectrum itself is not in any wise limited. The 2.4 GHz band is proof positive of this. Just that tiny slice of spectrum is responsible for more revenue than any other slice of spectrum, outside of the GSM and 3G bands. Why is this? Because the 2.4 GHz band is unlicensed, engineers and designers have had to teach their varied devices to play well with one another, even in hostile environments. I can use a Bluetooth headset right next to my WiFi-enabled MacBook, and never experience any problems, because these devices use spread-spectrum and frequency-hopping techniques to behave politely. My N95 can use WiFi and Bluetooth networking simultaneously – yet there’s never interference.
Unlicensed spectrum is not anarchy. It is an invitation to innovate. It is an open door to the creative engines of the economy. It is the most vital part of the entire wireless world, because it is the corner of the wireless world where bandwidth already is free.
And so back to the city outside the convention center walls, crowded with four million people, each eagerly engaged in their own acts of communication. Yet these moments are bounded by an awareness of the costs of this communication. These tariffs act as a fundamental brake on the productivity of the Australian economy. They fetter the means of production. And so they must go.
I do not mean that we should nationalize the telcos – we’ve already been there – but rather, that we must engage in creating a new generation of untariffed networks. The technology is already in place. We have cheap and durable mesh routers, such as the Open-Mesh and the Meraki, which can be dropped almost anywhere, powered by sun or by mains, and can create a network that spans nearly a quarter of a square kilometer. We can connect these access points to our wired networks, and share some small portion of our ever-increasing bandwidth wealth with the public at large, so that no matter where they are in this city – or in this nation – they can access the wireless world. And we can secure these networks to prevent fraud and abuse.
Such systems already exist. In the past eight months, Meraki has given their $50 WiFi mesh routers to any San Franciscan willing to donate some of their ever-cheaper bandwidth to a freely available municipal network. When I started tracking the network, it had barely five thousand users. Today, it has over seventy thousand – that’s about one-tenth of the city. San Francisco is a city of hills and low buildings – it’s hard to get real reach from a wireless signal. In Sydney, Melbourne, Adelaide, Brisbane and Perth – which are all built on flats – a little signal goes a long, long way. From my flat in Surry Hills I can cover my entire neighborhood. If one of my neighbors decides to contribute, we can create a mesh which reaches a few streets further, where it can link up with another volunteer, and so on, and so on, until the entirety of my suburb is bathed in freely available wireless connectivity.
While this may sound like a noble idea, that is not the reason it is a good idea. Free wireless is a good idea because it enables an entirely new level of services which, because of tariffs, would not otherwise make economic sense. The information these services carry has value – perhaps great value, to some – but no direct economic value. This is where the true strength of free wireless shows itself: it enables a broad participation in the electronic life of the city by all participants – individuals, businesses, and institutions – without the restraint of economic trade-offs.
This unlicensed participation has no form as yet, because we haven’t deployed the free wireless network beyond a few select spots in Australia’s cities. But, once the network has been deployed, some enterprising person will develop the “killer app” for this network, something so unexpected, yet so useful, that it immediately becomes apparent that the network is an incredibly valuable resource, one which will improve human connectivity, business productivity, and the delivery of services. Something that, once established, will be seen as an absolutely necessary feature in the life of the city.
Businessmen hate to deal in intangibles, or wild-eyed “science projects.” So instead, let me present you with a fait accompli: This is happening. We’re reaching a critical mass of WiFi devices in our dense urban cores. Translating these devices into nodes within city-spanning mesh networks requires only a simple software upgrade. It doesn’t require a hardware build-out. The transformation, when it comes, will happen suddenly and completely, and it will change the way we view the city.
The question then, is simple: are you going to wait for this day, or are you going to help it along? It could be slowed down, fettered by lawsuits and regulation. Or it could be accelerated into inevitability. We’re at a transition point now, between the tariffed networks we have lived with for the last decade, and the new, free networks, which are organically popping up in Australia and throughout the world. Both networks will co-exist; a free network actually increases the utility of a tariffed mobile network.
So, do you want to fight it? Or do you want to switch it on?
I. The Wheels Fall Off the Cart
In mid-1994, sometime shortly after Tony Parisi and I had fused the new technology of the World Wide Web to a 3D visualization engine, to create VRML, we paid a visit to the University of California, Santa Cruz, about 120 kilometers south of San Francisco. Two UCSC students wanted to pitch us on their own web media project. The Internet Underground Music Archive, or IUMA, featured a simple directory of artists, complete with links to MP3 files of these artists’ recordings. (Before I go any further, I should state that they had all the necessary clearances to put musical works up onto the Web – IUMA was not violating anyone’s copyrights.) The idea behind IUMA was simple enough, the technology absolutely straightforward – and yet, for all that, it was utterly revolutionary. Anyone, anywhere could surf over to the IUMA site, pick an artist, then download a track and play it.
This was in the days before broadband, so downloading a multi-megabyte MP3 recording could take upwards of an hour per track – something that seems ridiculous today, but was still so potent back in 1994 that IUMA immediately became one of the most popular sites on the still-quite-tiny Web. The founders of IUMA – Rob Lord and Jon Luini – wanted to create a place where unsigned or non-commercial musicians could share their music with the public in order to reach a larger audience, gain recognition, and perhaps even end up with a recording deal. IUMA was always better as a proof-of-concept than as a business opportunity, but the founders did get venture capital, and tried to make a go of selling music online. However, given the relative obscurity of the musicians on IUMA, and the pre-iPod lack of pervasive MP3 players, IUMA ran through its money by 2001, shuttering during the dot-com implosion of the same year. Despite that, every music site which followed IUMA, legal and otherwise, from Napster to Rhapsody to iTunes, has walked in its footsteps. Now, nearing the end of the first decade of the 21st century, we have a broadband infrastructure capable of delivering MP3s, and several hundred million devices which can play them. IUMA was a good idea, but five years too early.
Just forty-eight hours ago, a new music service, calling itself Qtrax, aborted its international launch – though it promises to be up “real soon now.” Qtrax also promises that anyone, anywhere will be able to download any of its twenty-five million songs perfectly legally, and listen to them practically anywhere they like – along with an inserted advertisement. Using peer-to-peer networking to relieve the burden on its own servers, and Digital Rights Management, or DRM, Qtrax ensures that there are no abuses of these pseudo-free recordings.
Most of the words that I used to describe Qtrax in the preceding paragraph didn’t exist in common usage when IUMA disappeared from the scene in the first year of this millennium. The years between IUMA and Qtrax are a geological age in Internet time, so it’s a good idea to walk back through that era and have a good look at the fossils which speak to how we evolved to where we are today.
In 1999, a curly-haired undergraduate at Boston’s Northeastern University built a piece of software that allowed him to share his MP3 collection with a few of his friends on campus, and allowed him access to their MP3s. The software scanned the MP3s on each hard drive, publishing the list to a shared database, allowing each person using the software to download an MP3 from someone else’s hard drive to his own. This is simple enough, technically, but Shawn Fanning’s Napster created a dual-headed revolution. First, it was the killer app for broadband: using Napster on a dial-up connection was essentially impossible. Second, it completely ignored the established systems of distribution used for recorded music.
This second point is the one which has the most relevance to my talk this morning; Napster had an entirely unpredicted effect on the distribution methodologies which had been the bedrock of the recording industry for the past hundred years. The music industry grew up around the licensing, distribution and sale of a physical medium – a piano roll, a wax recording, a vinyl disk, a digital compact disc. However, when the recording industry made the transition to CDs in the 1980s (and reaped windfall profits as the public purchased new copies of older recordings) it also signed its own death warrant. Digital recordings are entirely ephemeral, composed only of mathematics, not of matter. Any system which transmitted the mathematics would suffice for the distribution of music, and the compact disc met this need only until computers were powerful enough to play the more compact MP3 format, and broadband connections were fast enough to allow these smaller files to be transmitted quickly. Napster leveraged both of these criteria – the mathematical nature of digitally-encoded music and the prevalence of broadband connections on America’s college campuses – to produce a sensation.
In its earliest days, Napster reflected the tastes of its college-age users, but, as word got out, the collection of tracks available through Napster grew more varied and more interesting. Many individuals took recordings that were only available on vinyl, and digitally recorded them specifically to post them on Napster. Napster quickly had a more complete selection of recordings than all but the most comprehensive music stores. This only attracted more users to Napster, who added more oddities from their own collections, which attracted more users, and so on, until Napster came to be seen as the authoritative source for recorded music.
Given that all of this “file-sharing”, as it was termed, happened outside of the economic systems of distribution established by the recording industry, it was taking money out of their pockets – billions of dollars a year, had all of these downloads been converted into sales. (Studies indicate this was unlikely – college students have ever been poor.) The recording industry launched a massive lawsuit against Napster in 2000, forcing the service to shutter in 2001, just as it reached an incredible peak of 14 million simultaneous users, out of a worldwide broadband population of probably only 100 million. This means that one in seven computers connected to the broadband internet was using Napster just as it was being shut down.
Here’s where it gets more interesting: the recording industry thought they’d brought the horse back into the barn. What they hadn’t realized was that the gate had burnt down. The millions of Napster users had their appetites whetted by a world where an incredible variety of music was instantaneously available with a few clicks of the mouse. In the absence of Napster, that pressure remained, and it only took a few weeks for a few enterprising engineers to create a successor to Napster, known as Gnutella, which provided the same service as Napster, but used a profoundly different technology for its filesharing. Where Napster had all of its users register their tracks within a centralized database (which disappeared when Napster was shut down) Gnutella created a vast, amorphous, distributed database, spread out across all of the computers running Gnutella. Gnutella had no center to strike at, and therefore could not be shut down.
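The architectural difference Gnutella introduced can be sketched in a few lines of toy code. This models only the idea – a central index that can be switched off, versus a query flooded from peer to peer – and none of either protocol's real wire format; all the names here are my own:

```python
# Toy contrast: a central index (Napster-style) versus a flooded
# query across peers (Gnutella-style). No real protocol details.

class CentralIndex:
    """One server knows everything -- and is one lawsuit from oblivion."""
    def __init__(self):
        self.tracks = {}                  # track name -> peer holding it

    def register(self, peer, names):
        for n in names:
            self.tracks[n] = peer

    def search(self, name):
        return self.tracks.get(name)      # the single point of failure

class Peer:
    """Each peer knows only its own tracks and a few neighbours."""
    def __init__(self, name, tracks):
        self.name = name
        self.tracks = set(tracks)
        self.neighbours = []

    def search(self, name, ttl=4, seen=None):
        # Flood the query outward; any peer holding the track replies.
        seen = seen if seen is not None else set()
        if self.name in seen or ttl == 0:
            return None
        seen.add(self.name)
        if name in self.tracks:
            return self.name
        for nb in self.neighbours:
            hit = nb.search(name, ttl - 1, seen)
            if hit:
                return hit
        return None

# A three-peer chain a -> b -> c; only c holds the track.
a, b, c = Peer("a", []), Peer("b", []), Peer("c", ["rare.mp3"])
a.neighbours, b.neighbours = [b], [c]
print(a.search("rare.mp3"))   # the query finds "c" with no server involved
```

Delete the `CentralIndex` object and Napster-style search dies everywhere at once; delete any single `Peer` and the flood simply routes around the gap, which is exactly why Gnutella had no center to strike at.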
It is because of the actions of the recording industry that Gnutella was developed. If legal pressure hadn’t driven Napster out of business, Gnutella would not have been necessary. The recording industry turned out to be its own worst enemy, because it turned a potentially profitable relationship with its customers into an ever-escalating arms race of file-sharing tools, lawsuits, and public relations nightmares.
Once Gnutella and its descendants – Kazaa, Limewire, and Acquisition – arrived on the scene, the listening public had wholly taken control of the distribution of recorded music. Every attempt to shut down these ever-more-invisible “darknets” has ended in failure and only spurred the continued growth of these networks. Now, with Qtrax, the recording industry is seeking to make an accommodation with an audience which expects music to be both free and freely available, falling back on advertising revenue to recover some of their production costs.
At first, it seemed that filmic media would be immune from the disruptions that have plagued the recording industry – films and TV shows, even when heavily compressed, are very large files, on the order of hundreds of millions of bytes of data. Systems like Gnutella, which allow you to transfer a file directly from one computer to another are not particularly well-suited to such large file transfers. In 2002, an unemployed programmer named Bram Cohen solved that problem definitively with the introduction of a new file-sharing system known as BitTorrent.
BitTorrent is a bit mysterious to most everyone not deeply involved in technology, so a brief explanation of its inner workings is in order. Suppose, for a moment, that I have a short film, just 1000 frames in length, digitally encoded on my hard drive. If I wanted to share this film with each of you via Gnutella, you’d have to wait in a queue as I served up the film, time and time again, to each of you. The last person in the queue would wait quite a long time. But if, instead, I gave the first ten frames of the film to the first person in the queue, and the second ten frames to the second person in the queue, and the third ten frames to the third person in the queue, and so on, until I’d handed out all thousand frames, all I need do at that point is tell each of you that each of your “peers” has the missing frames, and that you need to get them from those peers. A flurry of transfers would result, as each peer picked up the pieces it needed to make a complete whole from other peers. From my point of view, I only had to transmit the film once – something I can do relatively quickly. From your point of view, none of you had to queue to get the film – because the pieces were scattered widely around, in little puzzle pieces, that you could gather together on your own.
That’s how BitTorrent works. It is both incredibly efficient and incredibly resilient – peers can come and go as they please, yet the total number of peers guarantees that somewhere out there is an entire copy of the film available at all times. And, even more perversely, the more people who want copies of my film, the easier it is for each successive person to get a copy of the film – because there are more peers to grab pieces from. This group of peers, known as a “swarm”, is the most efficient system yet developed for the distribution of digital media. In fact, a single, underpowered computer, on a single, underpowered broadband link can, via BitTorrent, create a swarm of peers. BitTorrent allows anyone, anywhere, to distribute any large media file at essentially no cost.
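The piece-swapping walkthrough above can be turned into a tiny simulation. This is a cartoon of the swarm idea under simplifying assumptions of my own (one seeder upload per peer, one trade per peer per round); real BitTorrent adds trackers, piece hashing, and choking policies:

```python
import random

# Cartoon of a BitTorrent-style swarm: the seeder hands out one
# distinct piece to each peer, then the peers trade pieces among
# themselves until everyone holds the complete file.

PIECES = 10                      # the file, split into 10 pieces

def swarm(n_peers, rng=random.Random(0)):
    # The seeder uploads just once per peer: peer i gets piece i mod PIECES.
    peers = [{i % PIECES} for i in range(n_peers)]
    rounds = 0
    while any(len(p) < PIECES for p in peers):
        rounds += 1
        for p in peers:
            other = rng.choice(peers)        # trade with a random peer
            missing = other - p              # pieces they have, we lack
            if missing:
                p.add(rng.choice(sorted(missing)))
        if rounds > 1000:                    # safety valve for the demo
            break
    return rounds

print(swarm(10))   # every peer completes after a modest number of rounds
```

Note what the seeder never does: serve the whole file to anyone. Each peer needs nine more pieces, so completion takes at least nine trading rounds, and the more peers join, the more sources each missing piece has – the "perverse" property that popularity makes distribution easier, not harder.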
It is estimated that upwards of 60% of all traffic on the Internet is composed of BitTorrent transfers. Much of this traffic is perfectly legitimate – software, such as the free Linux operating system, is distributed using BitTorrent. Still, it is well known that movies and television programmes are also distributed using BitTorrent, in violation of copyright. This became absolutely clear on the 14th of October 2004, when Sky Broadcasting in the UK premiered the first episode of Battlestar Galactica, Ron Moore’s dark re-imagining of the famously schlocky 1970s TV series. Because the American distributor, SciFi Channel, had chosen to hold off until January to broadcast the series, fans in the UK recorded the programmes and posted them to BitTorrent for American fans to download. Hundreds of thousands of copies of the episodes circulated in the United States – and conventional thinking would reckon that this would seriously impact the ratings of the show upon its US premiere. In fact, precisely the opposite happened: the show was so well written and produced that the word-of-mouth engendered by all this mass piracy created an enormous broadcast audience for the series, making it the most successful in SciFi Channel history.
In the age of BitTorrent, piracy is not necessarily a menace. The ability to “hyperdistribute” a programme – using BitTorrent to send a single copy of a programme to millions of people around the world efficiently and instantaneously – creates an environment where the more something is shared, the more valuable it becomes. This seems counterintuitive, but only in the context of systems of distribution which were part-and-parcel of the scarce exhibition outlets of theaters and broadcasters. Once everyone, everywhere had the capability to “tune into” a BitTorrent broadcast, the economics of distribution were turned on their heads. The distribution gatekeepers, stripped of their power, whinge about piracy. But, as was the case with recorded music, the audience has simply asserted its control over distribution. This is not about piracy. This is about the audience getting whatever it wants, by any means necessary. They have the tools, they have the intent, and they have the power of numbers. It is foolishness to insist that the future will be substantially different from the world we see today. We can not change the behavior of the audience. Instead, we must all adapt to things as they are.
But things as they are have changed more than you might know. This is not the story of how piracy destroyed the film industry. This is the story of how the audience became not just the distributors but the producers of their own content, and, in so doing, brought down the high walls which separate professionals from amateurs.
II. The Barbarian Hordes Storm the Walls
Without any doubt the most outstanding success of the second phase of the Web (known colloquially as “Web 2.0”) is the video-sharing site YouTube. Founded in early 2005, as of yesterday YouTube was the third most visited site on the entire Web, trailing only Yahoo! and YouTube’s parent, Google. There are a lot of videos on YouTube. I’m not sure if anyone knows quite how many, but they easily number in the tens of millions, quite likely approaching a hundred million. Another hundred thousand videos are uploaded each day; YouTube grows by three million videos a month. That’s a lot of video, difficult even to contemplate. But an understanding of YouTube is essential for anyone in the film and television industries in the 21st century, because, in the most pure, absolute sense, YouTube is your competitor.
Let me unroll that statement a bit, because I don’t wish it to be taken as simply as it sounds. It’s not that YouTube is competing with you for dollars – it isn’t, at least not yet – but rather, it is competing for attention. Attention is the limiting factor for the audience; we are cashed up but time-poor. Yet, even as we’ve become so time-poor, the number of options for how we can spend that time entertaining ourselves has grown so grotesquely large as to be almost unfathomable. This is the real lesson of YouTube, the one I want you to consider in your deliberations today. In just the past three years we have gone from an essential scarcity of filmic media – presented through limited and highly regulated distribution channels – to a hyperabundance of viewing options.
This hyperabundance of choices, it was supposed until recently, would lead to a sort of “decision paralysis,” whereby the viewer would be so overwhelmed by the number of choices on offer that they would simply run back, terrified, to the highly regularized offerings of the old-school distribution channels. This has not happened; in fact, the opposite has occurred: the audience is fragmenting, breaking up into ever-smaller “microaudiences”. It is these microaudiences that YouTube speaks directly to. The language of microaudiences is YouTube’s native tongue.
In order to illustrate the transformation that has completely overtaken us, let’s consider a hypothetical fifteen year-old boy, home after a day at school. He is multi-tasking: texting his friends, posting messages on Bebo, chatting away on IM, surfing the web, doing a bit of homework, and probably taking in some entertainment. That might be coming from a television, somewhere in the background, or it might be coming from the Web browser right in front of him. (Actually, it’s probably both simultaneously.) This teenager has a limited suite of selections available on the telly – even with satellite or cable, there won’t be more than a few hundred choices on offer, and he’s probably settled for something that, while not incredibly satisfying, is good enough to play in the background.
Meanwhile, on his laptop, he’s viewing a whole series of YouTube videos that he’s received from his friends; they’ve found these videos in their own wanderings, and immediately forwarded them along, knowing that he’ll enjoy them. He views them, and laughs, he forwards them along to other friends, who will laugh, and forward them along to other friends, and so on. Sharing is an essential quality of all of the media this fifteen year-old has ever known. In his eyes, if it can’t be shared, a piece of media loses most of its value. If it can’t be forwarded along, it’s broken.
For this fifteen year-old, the concept of a broadcast network no longer exists. Television programmes might be watched as they’re broadcast over the airwaves, but more likely they’re spooled off of a digital video recorder, or downloaded from the torrent and watched where and when he chooses. The broadcast network has been replaced by the social network of his friends, all of whom are constantly sharing the newest, coolest things with one another. The current hot item might be something that was created at great expense for a mass audience, but the relationship between a hot piece of media and its meaningfulness for a microaudience is purely coincidental. All the marketing dollars in the world can foster some brand awareness, but no amount of money will inspire that fifteen year old to forward something along – because his social standing hangs in the balance. If he passes along something lame, he’ll lose social standing with his peers. This factors into every decision he makes, from the brand of runners he wears, to the television series he chooses to watch. Because of the hyperabundance of media – something he takes as a given, not as an incredibly recent development – all of his media decisions are weighed against the values and tastes of his social network, rather than against a scarcity of choices.
This means that the true value of media in the 21st century is entirely personal, and based upon the salience, that is, the importance, of that media to the individual and that individual’s social network. The mass market, with its enforced scarcity, simply does not enter into his calculations. Yes, he might go to the theatre to see Transformers with his mates; but he’s just as likely to download a copy recorded in the movie theatre with an illegally smuggled-in camera that was uploaded to The Pirate Bay a few hours after its release.
That’s today. Now let’s project ourselves five years into the future. YouTube is still around, but now it has more than two hundred million videos (probably much more), all available, all the time, from short-form to full-length features, many of which are now available in high-definition. There’s so much “there” there that it is inconceivable that conventional media distribution mechanisms of exhibition and broadcast could compete. For this twenty year-old, every decision to spend some of his increasingly-valuable attention watching anything is measured against salience: “How important is this for me, right now?” When he weighs the latest episode of a TV series against some newly-made video that is meant only to appeal to a few thousand people – such as himself – that video will win, every time. It more completely satisfies him. As the number of videos on offer through YouTube and its competitors continues to grow, the number of salient choices grows ever larger. His social network, communicating now through FaceBook and MySpace and next-generation mobile handsets and iPods and goodness-knows-what-else is constantly delivering an ever-growing and increasingly-relevant suite of media options. He, as a vital node within his social network, is doing his best to give as good as he gets. His reputation depends on being “on the tip.”
When the barriers to media distribution collapsed in the post-Napster era, the exhibitors and broadcasters lost control of distribution. What no one had expected was that the professional producers would lose control of production. The difference between an amateur and a professional – in the media industries – has always centered on the point that the professional sells their work into distribution, while the amateur uses wits and will to self-distribute. Now that self-distribution is more effective than professional distribution, how do we distinguish between the professional and the amateur? This twenty year-old doesn’t know, and doesn’t care.
There is no conceivable way that the current systems of film and television production and distribution can survive in this environment. This is an uncomfortable truth, but it is the only truth on offer this morning. I’ve come to this conclusion slowly, because it seems to spell the death of a hundred year-old industry with many, many creative professionals. In this environment, television is already rediscovering its roots as a live medium, increasingly focusing on news, sport and “event” based programming, such as Pop Idol, where being there live is the essence of the experience. Broadcasting is uniquely designed to support the efficient distribution of live programming. Hollywood will continue to churn out blockbuster after blockbuster, seeking a warmed-over middle ground of thrills and chills which ensures that global receipts will cover the ever-increasing production costs. In this form, both industries will continue for some years to come, and will probably continue to generate nice profits. But the audience’s attentions have turned elsewhere. They’re not returning.
This future almost completely excludes “independent” production, a vague term which basically means any production which takes place outside of the media megacorporations (News Corp, Disney, Sony, Universal and TimeWarner), which increasingly dominate the mass media landscape. Outside of their corporate embrace, finding an audience sufficient to cover production and marketing costs has become increasingly difficult. Film and television have long been losing economic propositions (except for the most lucky), but they’re now becoming financially suicidal. National and regional funding bodies are growing increasingly intolerant of funding productions which can not find an audience; soon enough that pipeline will be cut off, despite the damage to national cultures. Australia funds the Film Finance Corporation and the Australian Film Commission to the tune of a hundred million dollars a year, to ensure that Australian stories are told by Australian voices; but Australians don’t go to see them in the theatres, and don’t buy them on DVD.
The center can not hold. Instead, YouTube, which founder Steve Chen insists has “no gold standard” of production values, is rapidly becoming the vehicle for independent productions; productions which cost not millions of euros, but hundreds, and which make up for their low production values in salience and in overwhelming numbers. This tsunami of content can not be stopped or even slowed down; it has nothing to do with piracy (only nine percent of the videos viewed on YouTube are violations of copyright) but reflects the natural accommodation of the audience to an era of media hyperabundance.
What then, is to be done?
III. And The Penny Drops
It isn’t all bad news. But, like a good doctor, I want to give you the bad news right up front: There is no single, long-term solution for film or television production. No panacea. It’s not even entirely clear that the massive Hollywood studios will do business-as-usual for any length of time into the future. Just a decade ago the entire music recording industry seemed impregnable. Now it lies in ruins. To assume that history won’t repeat itself is more than willful ignorance of the facts; it’s bad business.
This means that the one-size-fits-all production-to-distribution model, which all of you have been taught as the orthodoxy of the media industries, is worse than useless; it’s actually blocking your progress because it is effectively keeping you from thinking outside the square. This is a wholly new world, one which is littered with golden opportunities for those able to avail themselves of them. We need to get you from where you are – bound to an obsolete production model – to where you need to be. Let me illustrate this transition with two examples.
In early 2005, producer Rhonda Byrne got a production agreement with Channel NINE, then the number one Australian television network, to make a feature-length television programme about the “law of attraction”, an idea she’d learned of when reading a book published in 1910, The Science of Getting Rich. The interviews and other footage were shot in July and August, and after a few months in the editing suite, she showed the finished production to executives at Channel NINE, who declined to broadcast it, believing it lacked mass appeal. Since Byrne wasn’t going to be getting broadcast fees from Channel NINE to cover her production costs, she negotiated a new deal with NINE, allowing her to sell DVDs of the completed film.
At this point Byrne began spreading news of the film virally, through the communities she thought would be most interested in viewing it; specifically, spiritual and “New Age” communities. People excited by Byrne’s teaser marketing could pay $20 for a DVD copy of the film (with extended features), or pay $5 to watch a streaming version directly on their computer. As the film made its way to its intended audience, word-of-mouth caused business to mushroom overnight. The Secret became a blockbuster, selling millions of copies on DVD. A companion book, also titled The Secret, has sold over two million copies. And that arbiter of American popular taste, Oprah, has featured the film and book on her talk show, praising both to the skies. The film has earned back many, many times its production costs, making Byrne a wealthy woman. She’s already deep into the production of a sequel to The Secret – a film which already has an audience identified and targeted.
Chagrined, the television executives of Channel NINE finally did broadcast The Secret in February 2007. It didn’t do that well. This sums up the paradox of distribution in the age of the microaudience. Clearly The Secret had a massive world-wide audience, but television wasn’t the most effective way to reach them, because this audience was actually a collection of microaudiences, rather than a single, aggregated audience. If The Secret had opened theatrically, it’s unlikely it would have done particularly well; it’s the kind of film that people want to watch more than once, being in equal parts a self-help handbook and a series of inspirational stories. It is well-suited for a direct-to-DVD release – a distribution vehicle that no longer has the stigma of “failure” associated with it. It is also well-suited to cross-media projects, such as books, conferences, streamed delivery, podcasts, and so forth. Having found her audience, Byrne has transformed The Secret into an exceptional money-making franchise, as lucrative, in its own way, and at its own scale, as any Hollywood franchise.
The second example is utterly different from The Secret, yet the fundamentals are strikingly similar. Just last month a production group calling themselves “The League of Peers” released a film titled Steal This Film, Part Two. The first part of this film, released in late 2006, dealt with the rise of file-sharing, and, specifically, with the legal troubles of the world’s largest BitTorrent site, Sweden’s The Pirate Bay. That film, although earnest and coherent, felt as though it was produced by individuals still learning the craft of filmmaking. The latest film looks as professional as any documentary created for BBC’s Horizon, PBS’s Frontline or ABC’s Four Corners. It is slick, well-lit, well-edited, and has a very compelling story to tell about the history of copying – beginning with the invention of the printing press, five hundred years ago. Steal This Film is a political production, a bit of propaganda with an obvious bias. This, in itself, is not uncommon in a documentary. What makes it unusual is the film’s funding and distribution model.
Individuals who saw Steal This Film, Part One – which was made freely available for download via BitTorrent – were invited to contribute to the making of the sequel. Nearly five million people downloaded Steal This Film, Part One, so there was a substantial base of contributors to draw from. (I myself donated five dollars after viewing the film. If every viewer had done likewise, that would cover the budget of a major Hollywood production!) The League of Peers also approached arts funding bodies, such as the British Documentary Council, with their completed film in hand, statistics showing that their work had reached a large audience, and a roadmap for the second film; this got them additional funding. Now that Steal This Film, Part Two has been released, viewers are again invited to contribute (if they like the film), with the promise of a “secret gift” for contributions of $15 or more. While the tip jar – busking, essentially – may seem a strange way to fund a film production, it’s likely that Steal This Film, Part Two will find an even wider audience than Part One, and that the coffers of the League of Peers will provide them with enough funds to embark on their next film, The Oil of the 21st Century, which will focus on the evolution of intellectual property into a traded commodity.
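The tip-jar arithmetic is worth making explicit. Here is a minimal back-of-envelope sketch in Python: the five million downloads and five-dollar donation come from the talk itself, while the conversion rates (and the $250,000 documentary budget implied by the second case) are purely hypothetical assumptions for illustration.

```python
def tip_jar_revenue(downloads, donation, conversion_rate):
    """Projected revenue if some fraction of downloaders choose to donate."""
    return downloads * conversion_rate * donation

downloads = 5_000_000  # Steal This Film, Part One downloads (from the talk)
donation = 5.0         # a single $5 contribution (from the talk)

# If every viewer donated, the take would rival a studio budget:
print(tip_jar_revenue(downloads, donation, 1.0))   # 25000000.0

# Even at an (assumed) 1% conversion rate, a modest
# documentary budget is comfortably covered:
print(tip_jar_revenue(downloads, donation, 0.01))  # 250000.0
```

The point of the sketch is that at these audience sizes, even tiny conversion rates fund low-budget production many times over.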
I have asked Screen Training Ireland to include a DVD of Steal This Film, Part Two with the materials you received this morning. You’ve been given the DVD version of the film, but I encourage you to download the other versions: the XVID version, for playback on a PC; the iPod version, for portable devices; and the high-definition version, for your visual enjoyment. It’s proof positive that a viable economic model exists for film, even when it is given away. It will not work for all productions, but there is a global community of individuals who are intensely interested in factual works about copyright and intellectual property in the 21st century, who find these works salient, and who are underserved by the media megacorporations, which would not consider it in their own economic best interest to produce or distribute such works. The League of Peers, as part of the community for whom this film is intended, knew how to get the word out about the film (particularly through Boing Boing, the most popular blog in the world, with two million readers a week), and, within a few weeks, nearly everyone who should have heard about the film had – through their social networks.
Both The Secret and Steal This Film, Part Two are factual works, and it’s clear that this emerging distribution model – which relies on targeting communities of interest – works best with factual productions. One of the reasons there has been such an upsurge in the production of factual works over the past few years is that these works have been able to build their own funding models upon a deep knowledge of the communities they are talking to – made by microaudiences, for microaudiences. But microaudiences, scaled to global proportions, can easily number in the millions. Microaudiences are perfectly willing to pay for something or contribute to something they consider of particular value and salience; it is a visible thank-you, a form of social reinforcement which comes naturally within social networks.
What about drama, comedy and animation? Short-form comedy and animation probably have the easiest go of it, because they can be delivered online with an advertising payload of some sort. Happy Tree Friends is a great example of how this works – but it took producers Mondo Media nearly a decade to stumble into a successful economic model. Feature-length comedy and feature-length drama are more difficult nuts to crack, but they are not impossible. Again, the key is to find the communities which will be most interested in the production; this is not always entirely obvious, but the filmmaker should have some idea of the target audience for their film. While in preproduction, these communities need to be wooed and seduced into believing that this film is meant just for them, that it is salient. Productions can be released through complementary distribution channels: a limited, occasional run in rented exhibition spaces (which can be “events”, created to promote and showcase the film); direct DVD sales (which are highly lucrative if the producer does this directly); online distribution vehicles such as iTunes Movie Store; and through “community” viewing, where a DVD is given to a few key members of the community in the hopes that word-of-mouth will spread in that community, generating further DVD sales.
None of this guarantees success, but it is the way things work for independent productions in the 21st century. All of this is new territory. It isn’t a role that belongs neatly to the producer of the film, nor, in the absence of studio muscle, is it something that a film distributor would be competent at. It may not be the producer’s job, but it is someone’s job, and someone must do it. Starting at the earliest stages of pre-production, someone has to sit down with the creatives and the producer and ask the hard questions: “Who is this film intended for?” “What audiences will want to see this film – or see it more than once?” “How do we reach these audiences?” From these first questions, it should be possible to construct a marketing campaign which leverages microaudiences and social networks into ticket receipts, DVD sales and online purchases.
So, as you sit down to do your planning today, and discuss how to move Irish screen industries into the 21st century, ask yourselves who will be fulfilling this role. The producer is already overloaded, time-poor, and may not be particularly good at marketing. The director has a vision, but may have little aptitude for working with communities. This is a new role, one that is utterly vital to the success of the production, but one which is not yet budgeted for, and one which we do not yet train people to fill. Individuals have succeeded in this new model through their own tireless efforts, but each of these efforts has been scattershot; there is a way to systematize this. While every production and every marketing plan will be unique – drawn from the fundamentals of the story being told – there are commonalities across productions which people will be able to absorb and apply, production after production.
One of my favorite quotes from science fiction writer William Gibson goes, “The future is already here, it’s just not evenly distributed.” This is so obviously true for film and television production that I need only close by noting that there are a lot of success stories out there, individuals who have taken the new laws of hyperdistribution and sharing and turned them to their own advantage. It is a challenge, and there will be failures; but we learn more from our failures than from our successes. Media production has always been a gamble; but the audiences of the 21st century make success easier to achieve than ever before.
The adoption of mobile phones by fishermen and wholesalers was associated with a dramatic reduction in price dispersion, the complete elimination of waste, and near-perfect adherence to the Law of One Price. Both consumer and producer welfare increased.
The middle is by no means an average; on the contrary, it is where things pick up speed. Between things does not designate a localizable relation going from one thing to another and back again, but a perpendicular direction, a transversal movement that sweeps one and the other away, a stream without beginning or end that undermines its banks and picks up speed in the middle.
- Wikipedia vs Britannica: the “crowdsourced” encyclopedia is now, on average, at least as accurate as the hierarchically produced, peer-reviewed production, and covers a far greater breadth of subject material than Britannica.
- Television and film distribution: since the advent of Napster in 1999, all attempts to control the distribution of media have met with increasing resistance. The audience now moves to circumvent any copy-restrictions as soon as they are introduced by copyright holders.
- Politics: The Attorney General of the United States of America resigned last week, because of the efforts of a few, very dedicated bloggers.
Few terms convey less meaning than “futurist.” What exactly is a futurist? What does he do? The definition, so far as I choose to apply it, is simple: a futurist looks at the present, at human behavior and human tendencies, to imagine how these trends will develop. This is less science than storytelling; the development of any human endeavor is fraught with non-linear events, which yank the arrow of progress this way and that. One can never know the future with any precision, and the farther the future recedes down the light-cone, the less distinct it becomes. We might know with high accuracy what will happen tomorrow. But five years from now, or twenty? That’s more alchemy than anthropology.
Yet, in order to play the game, futurists must make predictions. It’s what we do. So, for those few futurists who are willing to take the big risks of making short-term predictions – ranging from twelve to thirty-six months in the future – the game is particularly dangerous. Any futurist can predict what will come to pass in twenty years’ time, because no one will remember how wrong they were. But to make a prediction for the near term risks being revealed as a charlatan. Such predictions must be considered carefully, revealed hesitantly, and pronounced provisionally. Doing that will give you an out later on. Yet I have never been one to be either hesitant or provisional; I leap in where braver (and, arguably wiser) souls fear to tread. My particular brand of futurism – the “futurest” – is expansive, encompassing, and uncompromisingly revolutionary. I say this not to tout my strengths, but rather, to reveal the dangers.
In the early 1990s I predicted that VR would become the standard interface metaphor for computers by the 21st century. Did I get that right? It seems not; after all, we still use windows and mice as the standard interaction paradigm, just as we did back in 1990. Yet, if we can draw anything from the recent and somewhat surprisingly successful introduction of the Nintendo Wii, it’s that VR did arrive, is pervasive, and has become a dominant interface metaphor. Just not on the computer desktop. VR isn’t about head-mounted displays, although it might have seemed so, fifteen years ago. VR is about bringing the body into contact with the simulated world. Nintendo, with its clever, cheap, attractive and highly functional Wiimote, has done just that. They’ve done what decades of other researchers and engineers failed to do: they’ve brought us into the game. So predictions might come to pass, but rarely do they come in the form imagined. But every so often, when you step up to the plate, you connect completely, and knock one out of the park.
In early December 2005 I was invited to give a plenary presentation to the Australian conference on Interaction and Entertainment Design. This was one of those rare opportunities to talk on any subject I desired. Most of the academics in attendance wanted to talk about the latest trends in gaming and online communities; having been through that, and more, a decade ago, I decided to take the conversation in an entirely different direction, by focusing on that most common of our electronic peripherals, the mobile phone.
So common as to be nearly invisible, the mobile phone has become the focal point of our social existence. Yet, despite its constant presence, the mobile seemed poorly suited to the task of being our perpetual servant. It seemed stuck in a liminal position, between the wired world and the pervasive networked environment which is the global reality of the 21st century. The mobile was broken, and needed to be fixed. Hence, working with Angus Fraser, my graduate student – who, on his own, has had years of experience developing interfaces and applications for mobile phones – I wrote “The Telephone Repair Handbook”. I started off by challenging the audience to answer three questions:
- Q: Why does a mobile phone have a keypad? We never use it.
A: Because wired phones have keypads. And so we can enter text. Badly.
- Q: How many networks are our mobile phones really connected to?
A: The answer is generally at least three: GPRS/GSM, Bluetooth and IrDA.
- Q: What are our phones doing all the time they’re idle?
A: Nothing. They’re just waiting for a phone call or a text message to make their day.
These basic failures in the design of the mobile phone, I argued, arose from our fundamental misunderstanding of the function of the device. Mobile phones are not simply passive terminals, waiting to be activated. They are (or rather, should be) active communications processors, managing the minutiae of our social relationships.
Once I’d set up the straw men, and knocked them down, I described a new kind of mobile phone, designed from the outset to be a communications servant, a nexus which tracked, facilitated and recorded all of the social interactions happening through it, or, via Bluetooth, proximal to it. And, because I can code, I demonstrated the very first version of Blue States, a small Java J2ME application which allowed mobile phones to note and record the presence of other Bluetooth devices in their immediate proximity. This information, I insisted, could become the foundation of an emergent social network. The mobile, at all times with you, or nearby, knows your social life better than you do. When exposed, and analyzed, this data becomes a powerful tool. Angus and I worked up a few user scenarios to demonstrate our point: the mobile can be so much more. All it needs is the right software. I finished by encouraging this room of researchers to re-invent the mobile phone, to make it our digital social secretary, majordomo and grand vizier.
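The core of Blue States – turning periodic Bluetooth sightings into the raw material of a social graph – can be sketched in a few lines. The original was Java J2ME running on a handset; the version below is an illustrative Python sketch that simulates the scans rather than calling a real Bluetooth API, and all the device names are invented.

```python
from collections import Counter
from itertools import combinations

def copresence(scans):
    """Count how often each pair of devices appears in the same scan.

    Each scan is the set of Bluetooth device IDs visible at one moment;
    pairs seen together repeatedly suggest a genuine social tie rather
    than a chance encounter."""
    ties = Counter()
    for seen in scans:
        for a, b in combinations(sorted(seen), 2):
            ties[(a, b)] += 1
    return ties

# Simulated periodic scans, as one phone might record them over a day:
scans = [
    {"alice-phone", "bob-phone"},
    {"alice-phone", "bob-phone", "cafe-kiosk"},
    {"alice-phone"},
    {"alice-phone", "bob-phone"},
]

ties = copresence(scans)
print(ties[("alice-phone", "bob-phone")])   # 3 – a recurring, probably social, contact
print(ties[("alice-phone", "cafe-kiosk")])  # 1 – likely incidental
```

The design choice is the interesting part: nothing here requires the user to declare their relationships; the social network emerges from repeated co-presence, exactly the point of the talk.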
A month after I gave that presentation, I left my teaching position, and began coding, full-time, on Blue States, readying it for its first deployment, at ISEA San Jose. As an art project at an art festival, it might influence the creative minds of electronic artists. Perhaps they would begin to pervert their own mobile phones, transforming them into something entirely more useful.
As it turns out, I didn’t have to wait for the artists to catch up with me. For it seems that even as I was beginning my research work, more than two years ago, and formulating my theories on the future of the mobile telephone, another group of researchers set to the same task, and came to many of the same conclusions.
Yesterday, on a stage in San Francisco, Steve Jobs, CEO of the now-renamed Apple, Inc., introduced the iPhone, Apple’s much-rumored and long-awaited convergence device. Three things must be noted as essential to the design:
- It has no keyboard.
- It is connected to wireless internet, Bluetooth and GPRS/EDGE networks simultaneously, and moves between each seamlessly.
- It has a sophisticated operating system, and is constantly executing several tasks at once. It is never truly idle.
The iPhone is a combination of an iPod and a mobile telephone, and these elements have been fused together with a fingertip-based user interface to make the device nearly as tactile and natural as any familiar object. It is a mobile phone, but it has – finally and rationally – lost its vestigial connections to the wired phone. It is not simply a wireless phone; it is a network terminal, with all that implies. That it has a true operating system – instead of the “toy” operating systems of earlier mobile phones, which are cranky, and which crash all too often – means that programmers can harness the capabilities of the device wholly, taking it in directions that its creators at Apple never intended. This is not simply an iPod, or a mobile phone, but a complete redefinition of the device. This, quite simply, is the future, as I predicted it, thirteen months ago.
Will the iPhone succeed? No one yet knows. The device is both new enough and different enough that significant changes in user behavior must follow in its wake. Like the Macintosh with its Graphical User Interface, this transformation might take a decade to become the dominant interaction paradigm. Or – given the level of hype and excitement seen in the media in the last twenty-four hours – it might be the right device, at the right time. It may be that Apple has not only told the world why the telephone must be reinvented, but shown it how it should be done. If so, the iPhone will make the iPod look like a weak overture. Copies and clones will proliferate, skirting the edge of every one of Apple’s two hundred iPhone patents. And people will begin to have very different expectations for their mobile phones.
While the iPhone both excites and dazzles me with its ingenuity, design and inventiveness, I am not completely satisfied with it. It is still a phone, an iPod, and an “internet communicator” rolled into one. It is not, in any true sense, wholly integrated. There is no way for my friends in San Francisco, with their iPhones, to know what my favorite songs are, or what I’m listening to at the moment, or what I’m reading on the web, or who I’m texting. It is halfway to the social device which I see as the inevitable end point. But the rest is just software. The hardware platform is there, ready and waiting, and will be disrupted by a dozen innovations that no one can yet predict. But I do predict they will happen, in the next twelve to thirty-six months.
I. Everything Must Go!
It’s merger season in Australia. Everything must go! Just moments after the new media ownership rules received the Governor-General’s royal assent, James Packer sold off his family’s crown jewel, the NINE NETWORK – consistently Australia’s highest-rated television broadcaster since its inception, fifty years ago – along with a basket of other media properties. This sale effectively doubled his already sizeable fortune (now hovering at close to 8 billion Australian dollars) and gave him plenty of cash to pursue the 21st century’s real cash cow: gambling. In an era when all media is more-or-less instantaneously accessible, anywhere, from anyone, the value of a media distribution empire – built on the toppling pillars of government regulation of the airwaves and a cheap stream of high-quality American television programming – is rapidly approaching zero. Yes, audiences might still tune in to watch the footy – live broadcasting being uniquely exempt from the economics of the network – but even there the number of distribution choices is growing, with cable, satellite and IPTV all demanding a slice of the audience. Television isn’t dying, but it no longer guarantees returns. Time for Packer to turn his attention to the emerging commodity of the third millennium: experience. You can’t download experience: you can only live through it. For those who find the dopamine hit of a well-placed wager the experiential sine qua non, there Packer will be, Asia’s croupier, ready to collect his winnings. Who can blame him? He (and, undoubtedly, his well-paid advisors) has read the trend lines correctly: the mainstream media is dying, slowly starved of attention.
The transformation which led to the sale of NINE NETWORK is epochal, yet almost entirely subterranean. It isn’t as though everyone suddenly switched off the telly in favor of YouTube. It looks more like death by a thousand cuts: DVDs, video games, iPods, and YouTube have all steered eyeballs away from the broadcast spectrum toward something both entirely digital and (for that reason) ultimately pervasive. Chip away at a monolith long enough and you’re left with a pile of rubble and dust.
On a somewhat more modest scale, other media moguls in Australia have begun to hedge their bets. Kerry Stokes, the owner of Channel 7, made a strategic investment in Western Australia Publishing. NEWS Corporation, the original Australian media empire, purchased a minority stake in Fairfax, the nation’s largest newspaper publisher (and is eyeing a takeover of Canadian-owned Channel TEN). To see these broadcasters buying into newspapers, four decades after broadcast news effectively delivered death-blows to newspaper publishing, highlights the sense of desperation: they’re hoping that something, somewhere in the mainstream media will remain profitable. Yet there are substantial reasons to expect that these long-shot bets will fail to pay out.
II. The Vanilla Republic
It’s election season in America. Everyone must go! The mood of the electorate in the darkening days of 2006 could best be described as surly. An undercurrent of rage and exasperation afflicts the body politic. This may result in a left-wing shift in the American political landscape, but we’re still two weeks away from knowing. Whatever the outcome, this electoral cycle signifies another epochal change: the mainstream media have lost their lead as the reporters of political news. The public at large views the mainstream media skeptically – these were, after all, the same organizations which whipped the republic into a frenzied war-fever – and, with the regret typical of a very disgruntled buyer, Americans are refusing to return to the dealership for this year’s model. In previous years, this would have left voters in the dark: it was either the mainstream media or ignorance. But, in the two years since the Presidential election, the “netroots” movement has flowered into a vital and flexible apparatus for news reportage, commentary and strategic thinking. Although the netroots movement is most often associated with left-wing politics, both sides of the political spectrum have learned to harness blogs, wikis, feeds and hyperdistribution services such as YouTube for their own political ends. There is nothing fundamentally new about this; modern political parties, emerging in Restoration-era London, used printing presses, broadsheets and daily newspapers – freely deposited in the city’s thousands of coffeehouses – as the blogs of their era. Political news moved very quickly in 17th-century England, to the endless consternation of King Charles II and his censors.
When broadcast media monopolized all forms of reportage – including political reporting – the mass mind of the 20th century slotted into a middle-of-the-road political persuasion. Neither too liberal, nor too conservative, the mainstream media fostered a “Vanilla Republic,” where centrist values came to dominate political discourse. Of course, the definition of “centrist” values is itself highly contentious: who defines the center? The right-wing decries the excesses of “liberal bias” in the media, while the left-wing points to the “agenda of the owners,” the multi-billionaire stakeholders in these broadcast empires. This struggle for control over the definition of the center characterized political debate at the dawn of the 21st century – a debate which has now been eclipsed, or, more precisely, overrun by events.
In April 2004, Markos Moulitsas Zúniga, a US army veteran who had been raised in civil-war-torn El Salvador, founded dKosopedia, a wiki designed to be a clearing-house for all sorts of information relating to leftwing netroots activities. (The name is a nod to Wikipedia.) While the first-order effect of the network is to gather individuals together into a community, once the community has formed, it begins to explore the bounds of its collective intelligence. Political junkies are the kind of passionate amateurs who defy the neat equation of amateur as amateurish. While they are not professional – meaning that they are not in the employ of politicians or political parties – political junkies are intensely well-informed, regarding this as both a civic virtue and a moral imperative. Political junkies work not for power, but for the greater good. (That opposing parties in political debate demonize their opponents as evil is only to be expected given this frame of mind.) The greater good has two dimensions: to those outside the community, it is represented as us vs. them; internally, it is articulated through the community’s social network: those with particular areas of expertise are recognized for their contributions, and their standing in the community rises appropriately.
This same process animates dKosopedia’s parent site, Daily Kos (dKos), a political blog where any member can freely write entries – known as “diaries” – on any subject of interest: political, cultural or (more rarely) nearly anything else. The very best of these contributors became the “front page” authors of Daily Kos, their entries presented to the entire community; but part of the responsibility of a front-page contributor is to constantly scan the ever-growing set of diaries, looking for the best posts among them to “bump” to front-page status. (This article will be cross-posted to my dKos diary, and we’ll see what happens to it.) Any dKos member can comment on any post, so any community member – whether a regular diarist or a regular reader – can add their input to the conversation. The strongly self-reinforcing behavior of participation encourages “Kossacks” (as they style themselves) to share, pool, and disseminate the wealth of information gathered by over two million readers. Daily Kos has grown nearly exponentially since its founding, and looks to reach its highest traffic levels ever as the mid-term elections approach.
III. My Left Eyeball
Salience is the singular quality of information: how much does this matter to me? In a world of restricted media choices, salience is a best-fit affair; something simply needs to be relevant enough to garner attention. In the era of hyperdistribution, salience is a laser-like quality; when there are a million sites to read, a million videos to watch, a million songs to listen to, individuals tailor their choices according to the specifics of their passions. Just a few years ago – as the number of media choices began to grow explosively – this took considerable effort. Today, with the rise of “viral” distribution techniques, it’s much more straightforward. Although most of us still rely on ad-hoc methods – polling our friends and colleagues in search of the salient – it’s become so easy to find, filter, and forward media through our social networks that we have each become our own broadcasters, transmitting our own passions through the network. Where systems have been organized around this principle – for instance, YouTube, or Daily Kos – this information flow is greatly accelerated, and the consequential outcomes amplified. A Sick Puppies video posted to YouTube gets four million views in a month, and ends up on NINE NETWORK’s 60 Minutes broadcast. A Democratic senatorial primary in Connecticut becomes the focus of national interest – a referendum on the Iraq war – because millions of Kossacks focus attention on the contest.
Attention engenders salience, just as salience engenders attention. Salience satisfied reinforces relationship; to have received something of interest makes it more likely that I will receive something of interest in the future. This is the psychological engine which powers YouTube and Daily Kos, and, as this relationship deepens, it tends to have a zero-sum effect on its participants’ attention. Minutes watching YouTube videos are advertising dollars lost to NINE NETWORK. Minutes spent reading Daily Kos are eyeballs and click-throughs lost to The New York Times. Furthermore, salience drives out the non-salient. It isn’t simply that a Kossack will read less of the Times; eventually they’ll read it rarely, if at all. Salience has been satisfied, so the search is over.
While this process seems inexorable, given the trends in media, only very recently has it become a ground-truth reality. Just this week I quipped to one of my friends – equally a dedicatee of Daily Kos – that I wanted “an IV drip of dKos into my left eyeball.” I keep the RSS feed of Daily Kos open all the time, waiting for the steady drip of new posts. I am, to some degree, addicted. But, while I always hunger for more, I am also satisfied. When I articulated the passion I now had for Daily Kos, I also realized that I hadn’t been checking the Times as frequently as before – perhaps once a day – and that I’d completely abandoned CNN. Neither website possessed the salience needed to hold my attention.
I am certainly more technically adept than the average user of the network; my media usage patterns tend to lead broader trends in the culture. Yet there is strong evidence to demonstrate that I am hardly alone in this new era of salience. How do I know this? I recently received a link – through two blogs, Daily Kos and The Left Coaster – to a political campaign advertisement for Missouri senatorial candidate Claire McCaskill. The ad, featuring Michael J. Fox, diagnosed with an early-onset form of Parkinson’s Disease, clearly shows him suffering the worst effects of the disorder. Within a few hours after the ad went up on the McCaskill website, it had already been viewed hundreds of thousands – and probably millions – of times. People are emailing the link to the ad (conveniently provided below the video window, to spur on viral distribution) all around the country, and likely throughout the world. “All politics is local,” Fox says. “But it’s not always the case.” This, in a nutshell, describes both the political and the media landscapes of the 21st century. Nothing can be kept in a box. Everything escapes.
Twenty-five years ago, in The Third Wave, Alvin Toffler predicted the “demassification of media.” Looking at the ever-multiplying number of magazines and television channels, Toffler predicted a time when the mass market fragmented utterly, into an atomic polity, entirely composed of individuals. Writing before the Web (and before the era of the personal computer) he offered no technological explanation for how demassification would come to pass. Yet the trend lines seemed obvious.
The network has grown to cover every corner of the planet in the quarter-century since the publication of The Third Wave – over two billion mobile phones, and nearly a billion networked computers. A third of the world can be reached, and – more significantly – can reach out. Photographs of bombings in the London Underground, captured on mobile phone cameras, reach Flickr before they’re broadcast on the BBC. Islamic insurgents in Iraq videotape, encode and upload their IED attacks to filesharing networks. China fights a losing battle to restrict the free flow of information – while its citizens buy more mobile phones, every year, than the total number ever purchased in the United States. Give individuals a network, and – sooner, rather than later – they’ll become broadcasters.
One final, and crucial technological element completes the transition into the era of demassification – the release of Microsoft’s Internet Explorer version 7.0. Long delayed, this most important of all web browsers finally includes support for RSS – the technology behind “feeds.” Suddenly, half a billion PC users can access the enormous wealth of individually-produced and individually-tailored news resources which have grown up over the last five years. But they can also create their own feeds, either by aggregating resources they’ve found elsewhere, or by creating new ones. The revolution that began with Gutenberg is now nearly complete; while the Web turned the network into a printing press, RSS gives us the ability to hyperdistribute publications so that anyone, anywhere, can reach everyone, everywhere.
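For readers curious what a feed actually is under the hood: RSS is nothing more than a small XML document listing a channel’s recent items. The toy sketch below – in Python, using only the standard library, with an invented two-item feed standing in for a real one – shows the essence of what a newsreader does when it polls a feed and pulls out titles and links.

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 document, standing in for a real feed
# a newsreader would fetch over HTTP every half hour or so.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>http://example.com/</link>
    <item>
      <title>First post</title>
      <link>http://example.com/1</link>
    </item>
    <item>
      <title>Second post</title>
      <link>http://example.com/2</link>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (channel_title, [(item_title, item_link), ...])."""
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

title, items = parse_feed(SAMPLE_FEED)
print(title)
for item_title, item_link in items:
    print(item_title, item_link)
```

A real aggregator adds only plumbing around this core: fetch each subscribed URL on a timer, remember which item links it has already seen, and surface the new ones.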
Now all is dissolution. The mainstream media will remain potent for some time, centers for the creation of content, but they must now face the rise of the amateurs: a battle of hundreds versus billions. To compete, the media must atomize, delivering discrete chunks of content through every available feed. They will be forced to move from distribution to seduction: distribution has been democratized, so only the seduction of salience will carry their messages around the network. But the amateurs are already masters of this game, having grown up in an environment where salience forms the only selection pressure. This is the time of the amateur, and this is their chosen battlefield. The outcome is inevitable. Deck chairs, meet Titanic.
I. The Story So Far
Everything is changing. Everything has changed. Everything always changes, but at times that change is particularly pronounced and thus specifically noteworthy. For media – which is the topic du jour – this is so plainly obvious that any attempt to refer to the “before” time has an almost archeological feel, as though we must shovel carefully through layers of dirt to uncover how media worked just a few years ago. These transformations have been seismic, and singular. There is no going back.
But what, exactly, has happened?
The revolution we glimpsed in 1994, when the rough beast of the Web, its hour come at last, made the earth tremble, seducing and subsuming us into its ever-broadening expanse, fell back, for a brief while, into patterns more established and more familiar. We glimpsed a utopia; then a fog rose, and the vision faded. We endured half a decade of stupidity, cupidity and the slow strangulation of dreams. We longed for communion; we got DVD players delivered in under an hour. Fortunately, the network accelerates everything it embraces, and what might have taken a generation in earlier times took just five years to run its course, from Netscape to Razorfish, and the lunar crater of NASDAQ seemed to spell the final doom of all our hopes. The Web, people loudly proclaimed, was so over.
During those first five years, we learned just how different network economics could be; not just in theory, but in practice. We learned that the essence of the digital artifact is that it exists to be copied. Like a gene in the Cambrian seas of the early Web, information was copied and recopied endlessly. John Perry Barlow’s Declaration of the Independence of Cyberspace was one of the first such objects, spread via email and website until it became nearly impossible to ignore. More recently, Cory Doctorow’s lecture on DRM for Microsoft Research – in text, Pig Latin and video versions – has been passed around like a cheap two-dollar…well, you know. Each of these digital artifacts eventually reached nearly every single individual who might find them interesting, because, as they were copied and read, forwarded and linked to, each of the human nodes in this network made a decision that this information was important enough to share. In the networked era, salience is the only significant quality of information. For that reason, it was only a matter of time until the technologies of the network would reinforce this natural tendency, and accelerate it.
So even as the Web died, it was reborn. The top-down design of a hundred centralized sources of information evolved into seven hundred million peers. From each according to their ability, to each according to their need. Feeds replaced websites, and torrents replaced streams. The revolution we had fleetingly glimpsed had finally – blessedly – arrived.
But one man’s blessing is another’s curse.
The network revolution presented incredible opportunities to anyone working in the media industries. Suddenly, it became possible to reach massive audiences, unbounded by proximity. But instead of reinforcing the previous structures of media ownership and information distribution, the network has consistently undermined them. Mention Craigslist to a newspaperman, and watch as the color drains from their face. Casually drop BitTorrent into a conversation with a studio executive, and observe as they choke back their rage. The network carries within it the seeds of their destruction. And they’re absolutely, utterly, completely powerless to stop it.
This would be a sad story if professional media had not willingly cooperated in their own demise. The technologies of the digital era were simply too tempting to be ignored, too important to the bottom line. But the network has its own economics, and quickly overcomes or blithely ignores any attempt to subvert its innate qualities. Film studios make the majority of revenues from DVD distribution of their productions, but that same DVD, because of its essentially digital nature, can be copied and recopied endlessly, at no cost. If it is salient, it will be copied widely. That’s not just a horror story: that’s the law.
And if you don’t want your film copied? Well then, you have to resort to antique production techniques. Make sure it’s shot to film stock, physically edited (good luck finding an editor who prefers a Steenbeck to an Avid) and graded – with no digital intermediates – then projected in an exhibition space where every audience member has been subjected to a humiliating physical search. If you did that, you’d kill piracy. Probably. Of course, you’d also kill your exhibition revenues. But the studios (and the record companies, and the broadcasters, and the book publishers) want to have it both ways, want the benefits of digital distribution, all the while denying the essential quality of the medium – it exists to be copied.
But all of this noise about the approaching end of copyright obscures a more salient point: the barriers of distribution have utterly collapsed. Anyone can send anything to everyone, everywhere, at little or no cost. The tribulations of the largest of the professional media producers are simply the canary in the coal mine; they’re the most sensitive to the economics of a distribution system that has kept them alive and well-fed for a hundred years. Now that those economics have irrevocably changed, the entire business of professional media production is threatened.
II. Stupid is the New Black
Behold: I bring tidings of the second dot-com boom. Stupid is back!
That, at least, is the message from a hundred insta-pundits, on the business pages of newspapers, in blogs, and in countless analysts’ reports. The entire world seemed shocked by the entirely expected purchase of video-sharing site YouTube by Google for 1.65 billion dollars. It’s a bad deal, some say, doomed to fail. It isn’t worth it. It’ll bring Google crashing back to earth with endless litigation from the copyright holders who have just been waiting for someone with deep enough pockets to sue.
What almost everyone overlooked – as it happened the very same day as the Google purchase – were the licensing agreements YouTube struck with Universal, Sony BMG, and CBS. Together with its earlier deal with Warner, these give YouTube an agreement with every major music publisher in the world. YouTube must now figure out how to share the revenues generated by Google’s advertising technology with all of the copyright holders whose materials end up on the site.
Some pundits – most notably, Mark Cuban – have indicated that only a moron would buy YouTube, because it’s widely believed that YouTube has built its business entirely upon the violation of copyright. Certainly, YouTube established its reputation with a specific piece of video owned by someone else – a digital short from NBC’s Saturday Night Live, “Lazy Sunday.” That video – viewed millions of times before NBC rattled its legal saber and the content was removed – introduced most users to YouTube. In the year since “Lazy Sunday,” YouTube has become a clearing house for the funniest bits of video content produced by other companies, from segments of The Daily Show with Jon Stewart, to South Park, to Family Guy, to The Simpsons. Why has YouTube become the redistributor of these clips? Because none of the copyright holders made an effort to distribute the clips themselves. YouTube has been acting as an arbitrageur of media, equalizing an inequity in the marketplace – and getting very rich in the process. It may be copyright violation, but the power of the audience is far, far greater than the power of the copyright holder. YouTube could delete every clip uploaded in violation of copyright – to some degree they do – but if a few thousand people are uploading the same clip, how do you stay ahead of that? Even YouTube itself is subject to the power of its audience. And if they become draconian in their enforcement of copyright – a possible outcome of the Google purchase – they will simply force the audience elsewhere, to other sites. Better by far to strike a deal with the copyright holders, so that they receive recompense for their efforts. NBC has started to distribute Saturday Night Live’s digital shorts on its own website; ABC and FOX offer full streaming versions of their programs; everyone is queuing up to sell their TV shows on iTunes. Is this a willing transition? Probably not.
Minutes spent in front of the computer are minutes lost to television ratings. But if the copyright holders don’t distribute their content as widely as possible, someone else will. YouTube has proven this point beyond all argument.
Cuban believes that YouTube will die without a steady stream of content uploaded in violation of copyright. But if recent history is any guide, the studios are now falling over each other in their eagerness to do a deal, and share some of that money. The simultaneity of the Google purchase and the YouTube deals with the recording industry is not accidental; it’s indicative of a great sea-change. Big media has swallowed the bitter pill, and realized that they’ve lost control of distribution. Now they’ll try to make money off of it.
But Cuban makes another, and more damning point: he says that no one wants to watch the little hand-made videos which make up the vast majority of uploads to YouTube. This is the Big Lie of Big Media: if it isn’t professionally produced, the audience won’t watch it. No statement could be more mendacious, no assertion could be further from the truth. As a film producer and broadcaster, Cuban certainly hopes that audiences will always prefer professional content to amateur productions, but there’s no evidence to support this position – and rather a lot which counters it. The success of Red vs. Blue, Homestar Runner, Happy Tree Friends, and The Show with Zefrank – each of which commands an audience in the hundreds of thousands to millions – proves that audiences will find the content which interests them and share it with their friends, using the hyperdistribution techniques of the network to get what they want – from anyone, anywhere, at any time – with a minimum of difficulty. These productions lie completely outside the bounds of “professional” media; they are “amateur,” not in the sense of raw or poorly produced, but because they have turned their backs on the antique systems of distribution which previously separated the big boys from the wannabes.
A perfect example of this transition can be seen in a video on YouTube by the Australian band Sick Puppies. Shot by the band’s drummer, it features a well-known character, Juan Mann, who inhabits Sydney’s Pitt Street mall, bearing a sign reading “Free Hugs.” The band befriended this unlikely character, and shot hours of video of him at work, giving free hugs to passers-by. While in Los Angeles, pursuing a recording deal, the drummer cut his footage into a three-minute film, then added the band’s song “All The Same” as a temp track. Thinking to share his work around, he uploaded the video to YouTube on the 26th of September, and told his friends. Who told their friends. Who told their friends. YouTube is particularly good at “viral” distribution of media – it’s the one thing they’ve gotten absolutely right – so, within three weeks’ time, that little hand-made video had been viewed well over three million times. Sick Puppies are now on the map; their music video has given them a worldwide fan base. A debut album on a major label – expected early next year – will complete their transformation from amateurs to professionals.
Salience determines whether an audience will gather around and share media, not production values. In the time before hyperdistribution, audiences had a severely limited pool of choices, all of them professionally produced; now the gates have come down, and audiences are free to make their own choices. When placed head-to-head, can a professional production of modest salience stand up against an amateur production of great salience? Absolutely not. The audience will always select the production which speaks to them most directly. Media is a form of language, and we always favor our mother tongue.
The future for YouTube lies with the amateurs, not with the professionals. Cuban misses the point entirely, assuming that the audience will behave as it always has. But this is not that audience; this is an audience which has essentially infinite choice, and has come to understand that the sharing of media is an act of production in itself – that we are all our own broadcasters.
And you’d have to be a moron to miss that.
III. The Epidemiology of Cool
We know why YouTube has had such an incredible string of successes; the site makes it easy to share a video with your friends, and for those friends to share that video with their friends, and so on. The marketers call this “viral distribution,” but we know it by another and rather more prosaic name – friendship. As an inherently social species, we are constantly reinforcing our social connections through communication. It could be an IM, a text message, an email, a phone call, or a video – it’s all the same to the enormous section of our forebrains that we use to process the intricacies of our social relationships. We share these things to tell our friends that we’re thinking of them – and, rather more competitively, to show our friends that we’re on the tip. Each of us is a coolfinder (some of us do it professionally), and we each keep a little internal thermometer which measures our own cool against that of our peers. That innate drive to be recognized for our tastes has been accelerated to the speed of light by the network. Now, even as we coolfind, we are constantly inundated and challenged by the coolfinding of our peers. It’s produced a very healthy, if ultra-Darwinian, ecology of cool. Our peers are the selection pressure as we struggle to pass our memes on to the next generation.
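The epidemiology metaphor can be made concrete. The toy model below – every parameter invented for illustration – shows why friend-to-friend forwarding compounds: whenever each round of sharing recruits more than one new sharer per existing sharer, the audience grows exponentially, which is exactly the dynamic behind a clip racking up millions of views in three weeks.

```python
def viral_reach(seed_viewers, fanout, share_rate, generations):
    """Expected cumulative audience for a clip passed friend to friend.

    Each viewer shows the clip to `fanout` friends; a fraction
    `share_rate` of those friends forward it on in turn. These are
    invented illustration parameters, not measurements of YouTube.
    """
    total = seed_viewers
    sharers = seed_viewers
    for _ in range(generations):
        new_viewers = sharers * fanout          # this generation's audience
        total += new_viewers
        sharers = new_viewers * share_rate      # how many of them pass it on
    return total

# With a fanout of 5 and 30% of recipients forwarding, each sharer
# creates 1.5 new sharers – above the break-even point of 1.0 – so
# ten generations of sharing turn 100 seed viewers into tens of
# thousands. Below 1.0 (say, share_rate = 0.1), the clip fizzles.
print(viral_reach(100, 5, 0.3, 10))
print(viral_reach(100, 5, 0.1, 10))
```

The whole "ecology of cool" lives in that one threshold: salience is what pushes the forwarding rate above break-even.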
Thus far, we’ve done this on our own, with very little assistance from the wealth of computing machinery which crowds our lives. We create ad-hoc solutions for media distribution: mailing lists, websites, podcasts – each of these an attempt to spread our ideas more successfully. But they’re held together tenuously, only by our constant activity, busy bees maintaining the cells of our hive. And it’s a lot of work. We’re forced to do it – forced to run the race, lest we be overrun by the memes of others – but we’ve reached the one practical limit: time. No one has enough time in the day to keep up with all of the information we should be absorbing. We can filter ruthlessly – and perhaps miss out on something we’ll regret later – or declare email bankruptcy, like Lawrence Lessig, or just withdraw to an ever-more-specialized domain of coolfinding. And we are doing each of these things, every day, under the pressure of all this information.
There’s got to be a better way.
In the early years of the 19th century, farmers in western Pennsylvania kept their wagon wheels greased with puddles of bubbling muck that studded the countryside. Although useful, the puddles were a toxic nuisance to livestock. If the farmers could have rid their lands of these puddles, they likely would have. Half a century later, western Pennsylvania boomed, on the strength of its substantial petroleum reserves. The bubbling muck had immense value – but it had to wait for the demands of the kerosene lamp and the internal combustion engine.
In the early years of the 21st century, we each generate an enormous amount of interaction data – every click on a computer, every email sent or received, every website visited, every text message, every phone call, every swipe of a credit card or loyalty card or debit card, every face-to-face interaction. None of it is recorded – or at least, it’s not recorded by any of us, for any of us (though the NSA has expressed some interest in it) – because it hasn’t been seen as valuable. It’s bubbling up through all of us, and around all of us, as we create data shadows that have grown longer and longer, resembling Jacob Marley’s lockboxes and chains, rattling throughout cyberspace.
All of that information is worth more than oil, more than gold. And all of it is sadly – almost obscenely – dropped on the floor as soon as it is created. If we’re lucky, it is deleted. If we’re unlucky, someone uses it to create a digital simulacrum, and we find our identities hijacked. But in no case is this information ever exposed to us, for our own use. We’re told it has no value to us, and – so far – we’ve been stupid enough to believe it.
But now, just now, economic forces are linking the persistence of our data shadows to our ability to filter the avalanche of information which characterizes life in the 21st century. Turns out this data guck is good for more than greasing the wheels of commerce. These data shadows glow with the evanescent echo of our real social networks – not the baby steps of MySpace and Friendster, but the real ground-truth interactions which reveal ourselves and our relations one to another. It is human metadata. And it is the most valuable thing we’ve got, now that there’s demand for it.
YouTube records every email address you use to forward a video to a friend. It uses these, at present, to do auto-completion of addresses as you type them in. It also presents a friendly list of these addresses, to make forwarding all that much easier. What they’re not doing – at least, not visibly, and very likely not at all – is keeping any record of what you sent to whom, nor when, nor why. Yet every video forwarded through YouTube is forwarded for a reason – salience. YouTube could record those moments of salience, could use them to build a model, a data shadow, which could reinforce your own ability to make decisions about who should see what. It might even, to some degree, automate that process. When you add to this the newly emerging capabilities of analytic folksonomy – comparing a user’s tag clouds against the tag clouds of others within their social network – certain other relationships and affinities emerge. Again, these relationships can be used to improve the capability of the system to help find, filter and forward relevant videos. This is how a social network really works. It’s not about having 500 first-degree friends in MySpace. It’s about listening to your naturally occurring social network to direct, improve, and accelerate information flow. When the brand-new power of the individual as broadcaster is reified by the capabilities of computing machinery to listen to and model our interactions, the result is hypercasting. This is what media distribution in the 21st century is inevitably hurtling toward, driven by the natural selection of steadily increasing informational pressure.
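To make the analytic-folksonomy idea concrete: comparing tag clouds is, at bottom, a similarity measure over tag counts. The sketch below – with hypothetical users and tags, and cosine similarity as one plausible choice of measure, not anything YouTube is known to use – shows how a hypercasting system might decide whose forwards deserve the most weight.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two tag clouds (mappings tag -> count)."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * \
           sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented tag clouds for three hypothetical users.
me       = Counter({"politics": 8, "satire": 5, "music": 1})
friend   = Counter({"politics": 6, "satire": 4, "skateboarding": 2})
stranger = Counter({"cats": 9, "skateboarding": 7})

# The friend's cloud overlaps mine heavily; the stranger's barely at
# all. A hypercasting system would therefore weight the friend's
# forwards much higher when deciding what to surface for me.
print(cosine(me, friend))    # high overlap
print(cosine(me, stranger))  # low overlap
```

The real system the text imagines would feed these affinity scores back into the find-filter-forward loop, so that recommendations flow fastest along the strongest edges of the naturally occurring social network.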
Hypercasting solves some lingering questions confronting us. The first and most important of these is: How will we figure out what to watch now that we’ve got a near-to-infinite set of choices? We’ll rely on the recommendation of our friends, as we always have, but now these recommendations will be backed up by a hypercasting system which will invisibly and pervasively keep track of our interests, the points of interest we hold in common with our friends, our communities, our families, and our co-workers. It will not be automatic – no one really wants to see some out-of-control hypercasting system deluge us with video spam – but it will be so tightly integrated into our interactive experiences that it will barely register on our perceptions. We’ll simply come to expect that our iPods, our Media Centers, our PSPs and our mobiles are loaded up and ready for us, with things we’re sure to find compelling. Addiction to television will soar to new highs, a new crop of amateurs – millions of them – will find successful and lucrative careers in media production, and advertisers, as always, will find a way to spread their messages. On the surface, things will look much as they do now, but everything will move at a more rapid clip. Videos will fly across the world in seconds, not days, and a global audience of a million will gather in moments. Almost accidentally, this will change news reporting forever, as citizen journalism becomes a real threat to established media companies, and their utter undoing. Shouldn’t the New York Times be subject to the same pressures as NEWS Corporation?
Is YouTube the harbinger of the transition to hypercasting? The lead is theirs to lose. GooTube delivers over half of all videos seen on the Internet. They have the cash and the brainpower to transform broadcasting into hypercasting. And they have to worry about the next set of 20-somethings, in a garage, working on the Next Big Thing. Those kids, nurtured by YouTube, know just what’s wrong with it, and how to make it better. YouTube faces its own selection pressures, which will only increase as it grows exponentially and cuts content deals and just tries to keep the whole centralized mess up and running.
Yet it doesn’t matter. We have seen birth and death, and thought they were different. But the death of the Web brought a new kind of life, a vitality and surefootedness suppressed during the years of MBAs and crazy business plans and IPOs. Perhaps history is repeating itself, as everyone goes wild with another case of gold fever, and we’ll lose the plot again. In that case, we should be glad of another death.
Hypercasting might need to wait a few years, for a platform very much like a fully mature Democracy DTV – or something we haven’t even dreamt up. It may be that YouTube will disappoint. But that doesn’t mean anything at all. YouTube isn’t driving the evolution toward hypercasting. The audience is. And the audience – in its teeming, active, probing billions – always gets whatever it wants. That’s the first rule of show business.
Although Apple introduced its Video iPod at the end of 2005, this is the year when video begins to take off. Everywhere. The sheer profusion of devices which can play video – from iPods to desktop and laptop computers to Sony’s Playstation Portable, the Nintendo DS, and nearly all current-generation mobile phones – means that people will be watching more video, in more places, than ever before. You may not want to watch that episode of “Desperate Housewives” on your iPod – unless you happened to be tied up last Monday evening, and forgot to program your VCR. Then you’ll be glad you can. Sure, the picture is small and grainy, the sound’s a bit tinny, and your arms will get tired holding that screen in front of your face for an hour, but these drawbacks mean nothing to a true fan. And the true fans will lead this revolution.
We’re growing comfortable with the idea that screens are everywhere, that we can – in the time it takes to ride the train to work – get caught up on our favorite stories, the last World Cup match, and the news of the world. A generation ago it seemed odd to see someone in public wearing earphones; today it’s a matter of course. This afternoon it might seem odd to see someone staring into their mobile phone; tomorrow it will seem perfectly normal.
Now that video is everywhere, it won’t be long until the business of television moves online. Already, Apple has sold close to ten million episodes of television series like “Lost” and “The Office”. Google wants to sell you episodes of the original “Star Trek”, “The Brady Bunch” and “CSI”. For television producers it’s a win-win; they’ve already sold the episodes to broadcast networks – generally for a bit less than they cost to make – so the online sales are extra and vital dollars to cover the gap between loss and profit.
Today only a few of the hundreds of series shown in the US, UK and Australia are available for sale online. By the end of this year, most of them will be. Will the broadcast networks like this? Yes and no. It deprives them of some of the power they hold over the audience – to gather them at one place and time, eyeballs for advertisers – but it also creates new audiences: people see an episode online, and decide to tune in for the next one. That’s something we’ve already seen – “The Office”, for example, spiked upward in broadcast ratings after it was offered online. This year, there’s likely to be another breakout television hit – a new “Lost” – which starts its life online.
Once video is everywhere, once all our favorite television shows are available online for download, we’ll learn something else: there’s a lot more out there than just those shows produced for broadcast. On sites like Google Video and YouTube, you can already download tens of thousands of short- and full-length television programs. Some of them are woefully amateur productions, the kind that make you cringe in horror, but others – and there are more and more of these – are as funny and dramatic as anything you might see on broadcast television. Think TropFest – but a thousand times bigger.
Once we get used to the idea that television is something we can download, we’ll find ourselves drawn to these other, more unusual offerings. Most of this fare isn’t ready for prime-time. Much of it is only meant for a tight circle of friends and aficionados. But some of it will break through, and get audiences in the millions. It’s already happened a few times in the last year; this year it will become so common that, by the end of 2006, we’ll think nothing of it at all. This thought scares both the broadcast networks and the commercial TV producers. After all, if we’re spending our time watching something created by four kids in Goulburn, that’s time we’re not watching commercially-produced entertainment. And how do the networks compete with that?
This fundamental transformation in how we find and watch entertainment isn’t confined to video. It’s happening to all other media, simultaneously. More people listen to the podcasts of Radio National than listen to the live broadcast; more people read the Sydney Morning Herald online than read the print edition. And these are just the professional offerings. As with television, each of these media is facing a rising sea of competition – from amateurs. Apple offers tens of thousands of podcasts through its iTunes Music Store – including Radio National – on just about any topic under the sun, from the mundane to the truly bizarre. You can get “feeds” of news from Fairfax – headlines and links to online versions of the stories – but you can also get the same from any of several thousand news-oriented blogs. Click a few buttons and the news is automatically downloaded to your computer, every half hour.
As it gets easier and easier for us to choose exactly what we want to watch, hear and read, the commercial and national broadcasters find themselves facing the “death of a thousand cuts.” Every pair of ears listening to a podcast is an audience member who won’t show up in the ratings. Every subscriber to an “amateur” news feed is a subscriber lost to a newspaper. And this trend is just beginning. In another decade, we’ll wonder how we lived without all this choice.
Choice is a beautiful thing. We define ourselves by the choices we make: what we do, who we know, what we fill our leisure time with. Now that our media is everywhere, available from everyone, any hour of the day or night, we’re going to find ourselves confronted by an unexpected problem: rather than trying to decide what to watch on five terrestrial broadcast channels – or fifty cable channels – we’ll have to pick from an ocean of a million different programs; even if most of them aren’t all that appealing, at least a few thousand will be, at any point in time.
That kind of choice will make us all a little bit crazy, because we’ll always be wondering if, just now, something better isn’t out there, waiting for us to download it. Like the channel surfer who sits, remote in hand, flipping through the channels, hoping for something to catch his eye, we’re going to be flipping through hundreds of thousands and then millions of choices of things to watch, hear and read. We’re going to be drowning in possibilities. And the pressure – to keep up, to be informed, to be on the tip – is about to create the most savvy generation of media consumers the world has ever seen.
We’re drowning in choice, but, because of that, we’ll figure out how to share what we know about what’s good. We already receive lots of email from friends and family with links to the best things they’ve found online. That’s going to continue, and accelerate; our circles of friends are becoming our TV programmers, our radio DJs, our newspaper editors, and we’ll return the favor. The media of the 21st century are created by us, edited by us, and broadcast by us. That’s a deep change, and a permanent one.
Consider the lowly VCR. Once the king of the consumer electronics roost, it is on its way out: the Japanese giant Matsushita has stopped manufacturing VCRs in favor of DVD players, and most people have stopped buying them unless they’re combined with a DVD player. I haven’t bought one in Australia, despite the fact that I need one for work – I am regularly given video briefs for review, inventions to be presented on THE NEW INVENTORS. But somehow I can’t bring myself to spend the $100 on a VCR. Is that because I’m cheap? Hardly. It’s because I think VCRs suck – and I’m sure most of you would agree. They’re low-resolution, finicky, and nearly impossible to program. Yet, despite all these obvious drawbacks, VCRs changed the world.
In the time before the VCR, the television set was nothing more than a radio-wave tuner connected to an analog monitor. The television could only show programs as they were broadcast. Nothing else. Suddenly, the VCR enabled people to record broadcasts for later playback, or play pre-recorded cassettes. The VCR introduced the concept of “time-shifting” (a term that dates from the Betamax litigation), and freed the audience from the hegemony of the broadcaster. This was such a catastrophic change that court battles were fought over it: the United States Supreme Court, ruling in the Sony “Betamax” decision, held that the VCR could be sold legally, finding that time-shifting a television program for private viewing was a fair use – though time-shifting remains a violation of copyright here, in Australia.
While time-shifting moved power away from the broadcasters and into the audience, it also created a huge market for pre-recorded entertainment. Theatrical release provided one hundred percent of studio revenues in 1954. By 2004, that figure was down to 15%. It seems that audience choice is good economics; when you empower audiences to watch on their own terms, you dramatically increase the overall market.
By the late-1980s, as the studios saw incredible revenues flow in from pre-recorded videocassettes, they got together to promote a format which would have all of the advantages of the VCR, with none of its disadvantages. This format would provide a near-cinema-quality experience, but would be a read-only format. Consumers would be given greater choice, but only from a pre-produced collection of offerings. DVD, like the VCR before it, has become one of the biggest success stories in consumer electronics. At least 75% of all households in Australia have at least one DVD player, and they’re now standard equipment on nearly all personal computers. The studios earn more – often far more – from DVD sales than from the theatrical release of their motion pictures. The DVD has driven the VCR out of the living room, just as the CD player obsolesced the turntable, fifteen years ago.
Nothing comes for free. The qualities that made the VCR, and the vinyl album before it, so annoying (noise, scratches, and just entropy in general) are the same qualities which made them “safe” media, so far as copyright protection was concerned. When the music industry transitioned from waves to bits, they unknowingly unleashed the engine of their own destruction. Waves are difficult to copy faithfully; every copy introduces noise and distortion. Bits can be copied perfectly every single time. Bits can be compressed and distributed at the speed of light. When digital music met the Web back in 1993 – at the Internet Underground Music Archive, a small site running out of the University of California, Santa Cruz – everything changed. Suddenly, anyone could publish music to anyone, or download music from anywhere. The combination of digital music plus the World Wide Web produced a resonance of sorts, a “sweet spot” which initiated a transformation that continues to this day, with over 42 million iPods and countless other digital music devices. Within this transformation there are countless secondary sweet spots – such as the iPod itself, and Apple’s iTunes Music Store – moments where technology and design meet in glorious union, producing prodigious amounts of heat and light. Like a spark to petrol, when design meets capability, the results can be explosive.
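The waves-versus-bits point can be made concrete. In this illustrative sketch (not any real audio pipeline), a digital copy is a byte-for-byte duplicate – verifiable by comparing hashes – while each simulated “analog” generation adds a little random noise to the signal:

```python
import hashlib
import random

def digital_copy(data: bytes) -> bytes:
    # Bits copy perfectly: the duplicate is indistinguishable from the original.
    return bytes(data)

def analog_copy(samples, noise=0.01):
    # Waves don't: every generation of dubbing adds a little distortion.
    return [s + random.uniform(-noise, noise) for s in samples]

master = b"a digitally encoded song"
copy = digital_copy(master)
# The hashes match exactly: a perfect copy, every single time.
assert hashlib.sha256(copy).digest() == hashlib.sha256(master).digest()

signal = [0.0, 0.5, 1.0, 0.5]
generation = signal
for _ in range(10):          # ten generations of analog dubbing
    generation = analog_copy(generation)
drift = max(abs(a - b) for a, b in zip(signal, generation))
print("digital copies identical; analog drift after 10 generations:", drift)
```

Ten generations of digital copying leave the file untouched; ten generations of analog dubbing leave you with something measurably different from the master.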
Like the music industry before them, the studios are confronting the cost of their transition from waves to bits. A DVD provides four times the picture quality of a VHS recording, together with 5.1 surround sound. It performs this magic by encoding a very high-bandwidth video signal into a relatively low-bandwidth data stream. This was high magic back in the early 1990s, when the MPEG-2 standard was developed. Now it’s old tech. You can now squeeze a two-hour movie into one-tenth the space, with no perceptible loss in quality. And that has changed everything about how we use video.
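The arithmetic behind that claim is straightforward. Using representative (not exact) figures – roughly 5 megabits per second for a DVD-style MPEG-2 stream, and one-tenth of that for a movie re-encoded with a modern codec:

```python
def movie_size_gb(bitrate_mbps, hours):
    """Size of a constant-bitrate stream, in gigabytes (1 GB = 10**9 bytes)."""
    seconds = hours * 3600
    bits = bitrate_mbps * 1_000_000 * seconds
    return bits / 8 / 1_000_000_000

# Representative bitrates, for illustration only.
mpeg2 = movie_size_gb(5.0, 2)   # a two-hour DVD-style MPEG-2 stream: 4.5 GB
modern = movie_size_gb(0.5, 2)  # the same movie at one-tenth the bitrate: 0.45 GB
print(f"MPEG-2: {mpeg2:.2f} GB, modern codec: {modern:.2f} GB")
```

At one-tenth the bitrate, a movie that once filled a DVD fits comfortably in a few hundred megabytes – small enough to pass around the Internet.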
The first folks to realize this were a group of engineers who’d broken away from Silicon Graphics after working on Time-Warner’s Full Service Network, better known as “The Orlando Project.” This test bed (in Orlando, Florida) wired 1500 homes to very high-speed cable modems, and each home connected to the service through their own $60,000 Silicon Graphics workstation. The goal of The Orlando Project was to develop the future of video delivery – in other words, the system which would replace the analog cable systems which had by then fully penetrated the US market. Years ahead in interface design, The Orlando Project fully employed the 3D capabilities of the SGI workstation to create something known as “The Carousel,” which allowed home users to select from about 500 different offerings. (At the time, this was an order of magnitude more than any competitive offering.) The design of The Carousel – spearheaded by Dale Herigstad, who would go on to design the interface for Microsoft’s Media Center, and its Xbox 360 – attempted to guide the user through a bewildering set of video selections in a straightforward manner. While consumers liked The Carousel, Time-Warner cancelled the project to focus on other, less costly digital cable ventures. The engineers at Silicon Graphics, intrigued by what they’d started, soon left to form their own company.
In 1999 the Full Service Network bore unexpected fruit. TiVO, the company founded by those refugees from SGI, introduced its first “personal video recorder.” Neither component was new: video could already be recorded to a hard drive for later playback, and electronic program guides had been used by cable companies for years. Yet, when these two technologies were integrated around an exceptionally well-designed user interface, another resonance struck, and a sweet spot appeared, one which is utterly transforming the way we think of video. People who could never hope to program a VCR have bought TiVOs in droves, recording all their favorite programs, and watching, on average, 60% more television than individuals who don’t have TiVOs. However, TiVO makes it exceptionally easy to fast-forward through commercial breaks, which is a plus for the audience, but a big concern to the broadcaster. By 2009, there’ll be at least a 30% drop-off in eyeballs watching TV advertisements, all because of TiVO and its many imitators. But the “TiVO effect” is far more profound. TiVO has severed the relationship between the network and the audience. The audience is watching a personalized stream of programming, one which bears no fundamental relationship to its source.
I discovered this TiVO effect when one of my friends – who has owned a TiVO for five years – recommended that I watch Making the Band: INXS. I asked him what network it was on. He thought for a long moment, and then said, “I have no idea.” After such a long period of time with TiVO, the ideas of broadcaster and programming have dissociated; it’s all just programs, on his TiVO. TiVO has become the broadcaster.
This transformation in audience behavior wrought by TiVO points up an essential relationship between design and technology: where they meet in harmony, they produce a new medium. TiVO is the medium, and “the medium is the message.” TiVO has fundamentally changed the relationship between audience and programming; now that TiVOs are broadband-connected, they don’t even need television receivers. TiVOs could download programming directly from the Internet, or take recorded programs, and transmit them to anywhere on the Internet. The latest of TiVO’s competitors, the Slingbox, does this perfectly. I can connect a Slingbox at home in Surry Hills and watch any programming it has recorded, anywhere in the world. Not only have I disconnected the programming from the broadcaster, I’ve cut the cord to my television set. Now my television is anywhere I might be.
Still, TiVO and Slingbox have clung to the idea that there is a content source – that is, the television broadcaster – and an audience hungry for that content. That’s no longer true. With the recent advent of the Video iPod, the iTunes Video Store, Google Video, YouTube, and the ever growing influence of peer-to-peer file-sharing networks, the balance of content is shifting away from the broadcasters to the “peer-productions” of the audience.
This is the revolution that’s waiting to happen. Right now there is no easy way for your average television viewer to find and view the enormous range of content that’s out on the Internet. File-sharing networks are either illegal, dangerous or too difficult for the average audience member to master. Google Video and YouTube must be viewed on a computer. None of the pieces fit together. Yet. And although the Video iPod can be plugged into a television set, very few people do it. It’s still too clumsy.
There is a resonance here, something that’s just on the cusp of happening. Someone (and it could well be Apple) will find a way to tie the television into the Internet meaningfully, formally breaking the bond between the television-as-radio-receiver and television-as-output-device. When that happens, the meaning of television channels and broadcasters will begin to fade into insignificance. We’ll still watch broadcasts of live events – such as news or sport – but otherwise our televisions will be portals into the ever-increasing supply of peer-produced programming. All we need to do is locate the sweet spot, the harmonious meeting point between design and technology.
It’s widely believed that technology is not informed by design disciplines. Nothing could be further from the truth. Without design, technology remains locked into a culture of expertise. Design-led technologies – such as TiVO and the iPod – transform our expectations and our behavior. Technology alone cannot do that. It hasn’t the capability. We need to adjust our thinking. Design is not the handmaiden of technology. It’s the other way around. Design must be in the driver’s seat. Without the resonance which brings mind and hand together meaningfully, all we’ll ever have is unrealized potential. When design drives technology, when we assert that human needs trump raw capability, we create the artifacts which change the world.