I: No Fate
I started using the World Wide Web in October of 1993. To say that the Web was primitive and ugly at that early date is to miss the point completely; it is like making fun of a baby just emerged from the womb. To those who could see past what it was and look toward what it might become, it was as beautiful and full of potential as a newborn child. I’d been an apostle of hypertext for well over a decade before the Web came around, so I was ready. I knew what it portended. Even so, the past seventeen years have surpassed my wildest expectations.
I am forty-seven years old; in seventeen years I will be sixty-four. It is as difficult to predict the Web of 2027 from 2010 as it was to predict the Web of 2010 from 1993. Too much relies upon the ‘sensitive dependence on initial conditions’. A teenager programming in a bedroom in Melbourne or Chongqing or Moscow could do something that changes everything. For example, back in 1993 my friend Kevin Hughes decided to see if he could put an HTML anchor tag – which creates a hyperlink – around an image tag – which displays an image within the web page. Voilà – the Web button was born! Most of the links we click on today are buttons, and most of us have no idea that the Web button isn’t even implied in the HTML specification. Someone had to be inventive, to try an experiment, and – once it succeeded – share the results with the world. That invention sent the Web in certain directions which led to the Web we have today. If Kevin hadn’t developed the button, the Web might have remained text-based for much longer, which would have altered our experience and expectations.
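Kevin’s experiment amounts to a single gesture in markup: wrap an anchor tag around an image tag, and the image itself becomes the clickable link. A minimal sketch (the URL and image filename here are placeholders):

```html
<!-- An anchor (hyperlink) wrapped around an image: the Web button -->
<a href="https://example.com/next-page.html">
  <img src="button.gif" alt="Go to the next page">
</a>
```

Clicking anywhere on the image follows the link, exactly as a plain text anchor would.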
Even more interesting are the technologies that get invented, then lie dormant for years before suddenly springing into life. The wiki – essentially a web page which can be edited in place – was invented by Ward Cunningham in 1995, its name borrowed from the Hawaiian word for ‘quick’. It found a few modest uses, but not until the early years of this decade, when Wikipedia burst upon the scene, did people comprehend the power of a Web with an ‘edit’ button attached to it. Today a web page is considered somewhat dysfunctional unless it can be edited by its users.
This happened with my own work. In 1994 Tony Parisi and I blended my earlier work in virtual reality with the very first Web technologies to create the Virtual Reality Modeling Language (VRML), a 3D companion to HTML. We had some ideas about what it could be used for, and when we offered it up to the world as an early open-source software project, others came along with their own amazing ideas: 3D encyclopedias whose representation reflected the tree of knowledge; animated tools for teaching American Sign Language; a visualization for the New York Stock Exchange which enabled a broker to absorb five thousand times as much information as was possible from a simple text display. The future seemed rosy; Newsweek magazine dedicated a colourful two-page spread to the wonders of VRML.
But the future rarely arrives when planned, or in the form we expect. Most PCs of the early Internet era didn’t have the speed required to display 3D computer graphics; they could barely keep up with a mixture of images and text on a web page. And in the days before broadband, downloading a 3D model could take minutes – far longer than anyone cared to wait. For users who had barely gotten their minds around the 2D web of HTML, the 3D worlds of VRML were a bridge too far. Until Toy Story and the PlayStation came out in the mid-1990s, most people had no exposure to 3D computer graphics; today they’re commonplace, both in the cinema and in our living rooms. We know how to use 3D to entertain us. But 3D is still not a common part of our Web experience. That is now changing with the advent of WebGL, a new technology which makes it easy to create 3D computer graphics within the Web browser. It took sixteen years, but finally we’re seeing some great 3D on the Web.
The seeds of the future are with us in the present; it is up to us to water them, tend them, and watch them grow. One of those seeds, with us in just this moment, is the ability to slap a GPS tracker on anything – a package or a person or a truck – hook it up to some sort of mobile, and instantly be aware of its every movement. Parents give their children mobiles which relay a constant stream of location data to a website that those parents can use to monitor the child’s current whereabouts. If this seems a trifle Orwellian, consider the tale told by Intel anthropologist Dr. Genevieve Bell, who interviewed a classroom of South Korean children, all of whom had these special tracking mobiles. Did these devices make them feel too closely watched, too hemmed in? Surprisingly, the children pointed to another member of the class, saying, “See that poor kid? Her parents don’t love her enough to get her a tracking mobile.” These kids love Big Brother.
Conversely, every attempt to place GPS trackers on the buses of Sydney – so that the public could have a good idea when the next bus will actually arrive at the stop – has been subject to furious rejection by the bus drivers’ union. This, they claim, will allow the bosses to monitor their every movement. It is Orwellian. A bridge too far. As a pedestrian in Sydney, constantly at the whim of public transport, I have to suffer because Sydney’s bus drivers believe their right to privacy on-the-job extends to the tool of their trade, that is, the bus itself. I could argue that as a fare-payer, I have every right to know just where that bus is, and how long it will take to arrive. There is the dilemma: How do we protect people, and make them feel secure in a world where ever more is being tracked? In Melbourne all the trams are GPS tagged, and anyone can look up the precise location of any tram at any time. It’s the future, the coming thing, and Melbourne has simply gotten there first.
In 2010 it is relatively expensive and somewhat bulky and power-hungry to track something, but in seventeen years it will be easy and tiny and cheap and probably solar-powered. We will track everything of importance to us, all the time, and that leaves us with some big questions: How will we deal with all of this locative data pouring in upon us constantly? Who needs access to this data? How can I provide access without compromising myself and my privacy? As more things are tracked more comprehensively, it becomes possible to track something by absence as well as by presence. The missing item sticks out. That’s the kind of tool that governments, police and gangsters will all find useful. Which means we’ll learn the art of hiding in plain sight, of disguising our comings and goings as something else altogether, a kind of magician’s redirection of the audience’s gaze.
There’s no escaping a future which is continuously recorded, tracked, and monitored. This is the ‘Database Nation’ presented so chillingly in its more paranoid renderings. But it’s only frightening if we deny ourselves agency, if we act as though we are merely prey to forces of capital and power far outside of our own control, if we simply surrender ourselves to the death of a thousand transactions. And if we stumble into this future unconsciously, that’s pretty much what we’ll get. Others will make the decisions we refused to make for ourselves. But that’s not the way adults behave. Adults are characterized by agency: they shape events where possible, and where that isn’t possible, they do their best to maintain some awareness of the events shaping them.
It is possible to resist, to push back against the forces which seek to measure us, monitor us, and ask us to comply. Sydney’s bus drivers have done this, and as much as I might rue the shadow their decision casts over my own ability to plan my travel, I do not fault their reasons. We should all consider how we need to push back against forces which seek to intrude and thereby control us. We could benefit from a bit more disorder, disobedience, and resistance. The future is unwritten. Its seeds need not grow. We can ignore them, or even cut them down when they spring up. And we can plant other seeds, which enhance our agency, making us more powerful.
II: Close To You
It seems as though humans are solitary, almost lonely creatures. If lucky, we manage to find a partner to pass through life with, to share and shoulder the burden of being. We may have a few children, whom we care for until they enter university or head off to work, and whom we see sporadically thereafter, just as we only occasionally visit our own parents. This life pattern, known as the ‘nuclear family’, first identified in the middle of the 20th century, feels as though it’s the way things have always been – because it’s the way things have always been for us. There may be subtle differences here and there – grandparents caring for grandchildren while parents work, or children who refuse to leave the nest even into their 30s – but these exceptions prove the existence of a rule which binds us all to certain socially acceptable behaviors, and sets our range of expectations.
This is not the way things have always been. This is not the way things were at any point before the beginning of the 20th century. Prior to that, entire families lived together, grandparents and grandchildren, aunts and uncles and cousins, everyone under one roof, pooling resources and pulling together to keep the family alive. That life pattern goes well back into history – at least a few thousand years, probably all the way to the beginnings of agriculture. Before that, we lived within the close bonds of the tribal unit, foraging and hunting and moving continuously through the landscape. That life pattern goes back millions of years, well before the emergence of a recognizable human species.
The tribe has always been large, much larger than the family unit, so it’s not surprising that the Nuclear Era leaves us feeling somewhat lonely, with the suspicion that something’s gone missing, that we’re not quite fulfilled. We evolved in the close presence of others – and not just a few others. We need that community in order to know who we are. We were divorced from that complementary part of ourselves in the race into modernity. We got lots of kit, but we lost a part of our soul.
This goes a long way to explain why the essential social technology – the mobile – has become such a roaring, overwhelming success. The mobile reconnects us to the community our ancestors knew intimately and constantly. Our family, friends and coworkers are no more than a few seconds away, always at hand. We can look at our call logs and SMS message trails and get a good sense of who we really feel connected to. If you want to know where someone’s heart is, follow their messages. That sinking feeling you get when you realize you’ve left your mobile at home – or, heaven forbid, misplaced it – is a sensation of amputation. You feel cut off from the community that could help when you need it, or simply be there to listen.
Just a few years after we all acquired our mobiles, this social technology gained a double in the online world with the emergence of social networks such as Friendster and MySpace. These social networks provide a digital scaffolding for the relationships we once enjoyed in our tribes. They are a technology of retribalization, a chance to recover something lost. We seem to instinctively recognize this, else why would Facebook have grown from 20 to over 500 million members in just three years? This is what it looks like when people suddenly find themselves with the ability to fulfill a long-term need. This is not a new thing. This is a very old thing, a core part of humanity coming to the fore.
At first all this connecting seems innocuous, little more than old friends becoming reacquainted after a long separation. Nothing could be further from the truth, because these connections are established for a reason. Some connections are drawn from the bonds of blood, others from friendship, others from financial interest (your co-workers), still others because you share some common passion, or goal, or vision. It’s these last few which most interest me, because these are unpredicted; these aren’t simply the recovery of a prehistoric community, a recapitulation of things we already know. These are connections with a purpose.
But what purpose?
Just in the past two or three years, researchers have been examining social networks – the real ones, not the online version – to understand what role they play in our lives. The answers have been stunning. It’s now been demonstrated that obesity and slimming spread through social networks: if you’re overweight it’s more likely your friends are, and if they go on a diet, you’re more likely to do so. The same thing holds true for smoking and quitting. Most recently it was shown that divorce spreads through social networks: a married couple with friends who are divorcing stands a greater chance of divorcing themselves.
Both obesity and smoking are public health issues; divorce is both a moral issue and a cultural hot potato. We all know the divorce rate is high, but we haven’t had any good suggestions for how to bring that rate down – or consensus on whether we should. But these studies seem to indicate that a tactic of strategic isolation of the divorcing from the married might go some distance to lowering the divorce rate. In that sense, divorce itself becomes another ‘social disease’, and epidemiologists might be expected to track known cases through the community.
It all sounds a bit weird, doesn’t it? Yet if someone were to suggest education and incentives to get these same networks to spread anti-smoking behaviors, we’d have the full weight of the state, the health system, and the community behind it. Someone will suggest just that sometime in the next few years, with a growing awareness of the power of our communities to shape our behavior as individuals.
Yet these are just the obvious features of social networks. Their power to define your identity and behavior goes far beyond this. Consider: someone can walk into a bank today and steal your identity, taking out a loan in your name, if they present the proper documentation. How is this possible? It’s because the points system we use here in Australia – and equivalent algorithms used throughout the world to establish identity – can be fooled. Stuff the right documents in, and out pop the appropriate approvals. But shouldn’t I be required to provide proof from others? Isn’t my identity contingent upon others willing to attest to it? This isn’t the way we think of identity, and it certainly doesn’t fit into any neat legal category, but it is how identity works in practice. This is how identity has always worked. People ‘run away from home’ to establish a new identity precisely because their identities are defined and constrained by those they are connected to, often in opposition to their own desires.
When you walk into the bank to apply for that loan, you need to provide identification; really, you should hand the bank your ‘social graph’ – the enumerated set of your connections – and let the bank judge your identity from that graph. ASIO and MI5 and the CIA can already analyze your social graph to learn if you’re a terrorist, or a terrorist sympathizer; surely your bank can write a little bit of software which can confirm your identity? It would help if the bank understood the strength of each of your connections, by analyzing the number of messages that have hopped the gap between you and those connected to you. From this the bank would know who they should be asking to vouch for you.
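As a sketch of the idea, the bank could estimate tie strength by simply counting the messages exchanged with each contact, then ask the strongest ties to vouch for you. Everything here – the names, the message log, the helper functions – is hypothetical:

```python
from collections import Counter

def tie_strengths(me, message_log):
    """Count messages exchanged between `me` and each contact,
    regardless of who sent them."""
    counts = Counter()
    for sender, recipient in message_log:
        if sender == me:
            counts[recipient] += 1
        elif recipient == me:
            counts[sender] += 1
    return counts

def strongest_ties(me, message_log, n=3):
    """Return the n contacts best placed to vouch for `me`."""
    return [contact for contact, _ in tie_strengths(me, message_log).most_common(n)]

# A toy message log of (sender, recipient) pairs
log = [("alice", "bob"), ("bob", "alice"), ("alice", "carol"),
       ("carol", "alice"), ("alice", "bob"), ("dave", "alice")]
print(strongest_ties("alice", log, n=2))  # → ['bob', 'carol']
```

A real system would of course weight recency, channel, and reciprocity, not just raw counts – but even this crude measure separates close ties from incidental ones.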
All of this sounds complicated, and will probably be more involved than our simple but spoofable systems in use today. The end result will be a system with much greater resilience, and much harder to fool, because we’re capitalizing on the fact that identity is a function of our community. And not just identity: talent is also something that is both a function of and a recognized value within a community. LinkedIn provides a mechanism for individuals to present their social graphs to potential employers; that social graph tells a recruiter more about the individual than any c.v.
The social graph is the foundation for identity; it always has been, but during the last hundred years we fragmented and atomized, and our social graphs began to atrophy. We have now retrieved them, and because of that we can demonstrate that our value is derived from what others think of us. This has always been true, but now those others are no Stone Age tribe, but rather represent communities of expertise which may be global, highly specific, and fiercely competitive. These new communities have so much collected, connected capability they can ignore all of the neat boundaries of an organization, can play outside the silos of the business and government, and do as they please. A group of well-connected, highly empowered individuals is a force to be reckoned with. It always has been.
III: Senior Concessions
Last month the “Health Lies in Wealth” report surprised almost no one when it announced that the wealthiest among us live, on average, three years longer than the poorest. The report identified many cofactors to life expectancy, such as graduating from school, owning your own home, and – most surprisingly – the presence of a strong social network. People who live alone do not thrive. We know that in our bones. We understand that ‘no man is an island’, that we actually do need one another to survive, just as we always have. Only in close connection with others can we receive the support we need to live out the full span of our lives. This support might help us to maintain our weight, or quit smoking, or stay faithful, or simply remind us to take care of ourselves. Whatever form it comes in, it has become clear that it is essential.
If it is essential, should we leave it to the ad hoc, ‘natural’ social networks we’ve all been blessed with (and which fail, for some)? Shouldn’t we apply what we’ve learned about digital social networks directly to our well-being? This is something that a fourteen-year-old wouldn’t think about as they sign up for a Facebook account, but when I’m sixty-four, it will be foremost in my mind. How can my network keep me healthy? How can my network assist me in wellness?
At one level this is completely obvious: the tight circle of family and friends, better connected than it has ever been, with better tools both for messaging and monitoring, allows us to ‘look in’ on one another in a way we never could before. A case in point: my morning medication to control my blood pressure – did I take it? Sometimes even I can’t remember until I’ve looked at the packaging. When I get a little older, and a bit more absent-minded, this will become a constant concern. We’ve already seen the first medicine cabinets which record their openings and closings, the first pill bottles which note when they’ve been used, all information that is monitored, collated, and which can then be distributed through ad-hoc familial or more formal digital social networks.
When the next version of Apple’s iPad comes out early next year, it will have a built-in camera to enable video conferencing. One of my good friends – who lives on the other side of the continent from his elderly parents – will buy them an iPad, and Velcro it to the wall of their kitchen, so that he can always ‘beam in’ and see what’s going on, and so that, for them, he’s no more than a tap away. That’s the first and somewhat clumsy version of systems which will continuously monitor the elderly, the frail, and the troubled. Who’s going to be on the other side of all of those cameras? Loved ones, mostly, though some of that will be automated, as seems prudent. This is the inverse of the ‘surveillance culture’ of pervasive CCTV cameras observed by police and counter-terrorism officials; in this world of ‘sousveillance’, everyone is watching everyone else, all the time, and all to the good. And, unlike Sydney’s bus drivers, we’ll recognize the value of this close monitoring, because it won’t represent an adversarial relationship. No one will be using this data to wreck our careers or disturb our lives. We’ll be using it to help one another live longer, and healthier.
What about connections that are slightly less obvious? We’ve already seen the emergence of ‘Wikimedicine’, where patients band together and share information in an attempt to go beyond what specialists are willing to do in treating a particular condition. These communities are, quite naturally, full of hoaxes and quacks and misinformation of every conceivable type: individuals fighting for their lives or waging war against chronic illnesses are susceptible to all sorts of tricks and flim-flammery and honest and earnest failures in understanding. This has happened because individuals enter these networks of hope without their sensible networks of trust. We have no way to present our social graphs to one another in these environments, to show our bona fides. If we could (and I’m sure we will soon be able to do so) we could quickly establish who brings real value, insight and wisdom into a conversation. We would also be able to identify those who seek to confuse, or who are confused, and those who are self-seeking. That would be clear from their social graphs. This is a trick eBay learned long ago: if you can see how a buyer or seller has been rated, you have some sense of whether they’ll be reputable. Such systems are never perfect, but we can expect a continuous improvement in our own ability to detect fraudulent social graphs over the next several years.
While these Wikimedicine networks are interesting and will grow in number, they tend to exclude the medical community, turning their backs upon it in the search for more effective treatments. That creates a gap which must be filled. As doctors and nurse practitioners grow more comfortable with a close connectivity with their patients, we’ll see the emergence of a new kind of medical network, one which places the patient at the center, and which radiates out in a few directions: to the patient’s family and social graph; to the patient’s medical team and their professional social graph; to the patient’s community of the co-afflicted. Each of these communities, effectively isolated from one another at present, will grow closer together in order to improve the welfare of the patient.
As these communities grow closer together, knowledge will pass from one community to another. The doctor will remain the locus of knowledge and experience, but already some of that power is passing to the nurse practitioner, who acts as a mid-way point between the doctor and the broader, connected community. The nurse practitioner will need to act as the ‘filter’, ensuring that the various requests and inquiries that come from the community are addressed, but in such a way that the doctor still has time to work. That’s not a secretarial role, but rather, a partnership of professionals. Deep knowledge is required to stand between the doctor and the community; as time goes on, as knowledge is transferred to the community, the community empowers itself and assumes some of the functions of both doctor and nurse practitioner.
When I’ve expressed such thoughts to medical professionals, they reject them out of hand. They contend that there is too much experience, too much knowledge resident in the body of the doctor for that capacity to spread safely. I doubt this is as true as they might wish it to be. Yes, medicine is rich and detailed and draws from the physician’s extensive body of experience, but we are building systems which can provide much of that, on demand, to almost anyone. We won’t be getting rid of the physician – far from it – but the boundaries between the physician and the community will become fuzzier, with the physician remaining the local expert, but not the only one.
This is a new kind of medicine, a new kind of wellness, a system we will not see fully in place until well after I turn sixty-four. Perhaps by the time I’m eighty-four – in 2046 – medicine will have ‘melted’ into a more communal form. Until then, from a policy point of view – since you are the people who make policy – I’d advise that you tend toward flexibility. Rigidity is a poor fit for a highly-connected world. People will tend to ignore rigid structures, creating their own ad-hoc organizations which will compete with and eventually displace yours, if they serve the needs of the patient more effectively.
At the same time, the dilemmas of a highly-monitored world will become more and more prevalent. We treasure medical privacy, but what we really mean by this is that we want medical data to be freely available to everyone who needs it, while securely protected from anyone who does not. This is a problem that can only be resolved if the patient has some agency in authorizing access to medical records, and tools that can track that access. Without those tools, the patient will lose track of who knows what, and it becomes easier for someone who shouldn’t to have a look in. As our medical records spread through our networks of expertise – the better to treat us – we may lose our fear and feel more willing to surrender our privacy. We’re a long way away from that world, but we can see how it may eventuate.
As I said at the beginning, it’s difficult to know the shape of the future. So much depends on the actions we take today, the seeds we choose to water. I have shown you a few of these seeds: a world where everything is monitored; a human universe grown close with ever-present social networks; a medicine more diffuse and more effective than the one we practice today. All of these seeds are present in this moment, all of them will affect you in your work, all will drive your decisions, and – should you ignore them – all will force you into sudden policy responses. The future is connected in a way we could not conceive of a generation ago, in a way that our great-great-grandparents would consider unremarkable. We’re returning to an old place, but with new tools, and that combination will change everything, whether or not we see it coming, whether or not we want it to come.