Flexible Futures

I: A Brief Tour of the Future

During my first visit to Sydney, in 1997, I made arrangements to catch up with some friends living in Drummoyne.  I was staying at the Novotel Darling Harbour, so we agreed to meet in front of the IMAX theatre before heading off to drinks and dinner.  I arrived at the appointed time, as did a few of my friends.  We waited a bit more, but saw no sign of the missing members of our party.  What to do?  Should we wait there – for goodness knows how long – or simply go on without them?

As I debated our choices – neither particularly palatable – one of my friends took a mobile out of his pocket, dialed our missing friends, and told them to meet us at an Oxford Street pub.  Crisis resolved.

Nothing about this incident seems at all unusual today – except for my reaction to the dilemma of the missing friends.  When someone’s not where they should be, where they said they would be, we simply ring them.  It’s automatic.

In Los Angeles, where I lived at the time, mobile ownership rates had barely cracked twenty percent.  America was slow on the uptake with mobiles; by the time of my visit, Australia had already passed fifty percent.  When half of the population can be reached instantaneously and continuously, people begin to behave differently.  Our social patterns change.  My Sydneysider friends had crossed a conceptual divide into hyperconnectivity, while I was mired in an old, discrete and disconnected conception of human relationships.

We rarely recall how different things were before everyone carried a mobile.  The mobile has become such an essential part of our kit that on those rare occasions when we leave it at home or lose track of it, we feel a constant tug, like the phantom pain of a missing limb.  Although we are loath to admit it, we need our mobiles to bring order to our lives.

We can take comfort in the fact that all of us feel this way.  Mobile subscription rates in Australia are greater than one hundred and twenty percent – more than one mobile per person, one of the highest rates in the world.  We have voted with our feet, with our wallets and with our attention.  The default social posture in Australia – and New Zealand and the UK and the USA – is face down, absorbed in the mobile.  We stare at it, toy with it, play on it, but more than anything else, we reach through it to others, whether via voice calls, text messages, Facebook, Twitter, or any of a constantly increasing number of ways.

The mobile takes the vast, anonymous and unknowable world, and makes it pocket-sized, friendly and personal.  If we ever run into a spot of bother, we can bring resources to hand – family, friends, colleagues, even professional fixers like lawyers and doctors – with the press of ten digits.  We give mobiles to our children and parents so they can call us – and so we can track them.  The mobile is the always-on lifeline, a different kind of 000, for a different class of needs.

Because everyone is connected, we can connect to anyone we wish.  These connections needn’t follow the well-trodden paths of family, friends, neighbors and colleagues.  We can ignore protocol and reach directly into an organization, or between silos, or from bottom to top, without obeying any of the niceties described on org charts or contact sheets.  People might choose to connect in an orderly fashion – when it suits them.  Generally, they will connect to their greatest advantage, whether or not that suits your purposes, protocols, or needs.  When people need a lifeline, they will turn over heaven and earth to find it, and once they’ve found it, they will share it with others.

Connecting is an end in itself – smoothing our social interactions, clearing the barriers to commerce and community – but connection also provides a platform for new kinds of activities.  Connectivity is like mains power: once it reaches everywhere, it becomes possible to imagine a world where people own refrigerators and televisions.

When people connect, their first, immediate and natural response is to share.  People share what interests them with people they believe share those interests.  In early days that sharing can feel very unfocused.  We all know relatives or friends who have gone online, gotten overexcited, and suddenly started forwarding us every bad joke, cute kitten or chain letter that comes their way.  (Perhaps we did these things too.)  Someone eventually tells the overeager sharer to think before they share.  They learn the etiquette of sharing.  Life gets easier (and more interesting) for everyone.

As we learn who wants to know what, we integrate ourselves into a very powerful network for the dissemination of knowledge.  If it’s important to us, the things we need to know will filter their way through our connections, shared from person to person, delivered via multiple connections.  In the 21st century, news comes and finds us.  Our process of learning about the world has become multifocal; some of it comes from what we see and those we meet, some from what we read or watch, and the rest from those we connect with.

The connected world, with its dense networks, has become an incredibly efficient platform for the distribution of any bit of knowledge – honest truth, rumor, and outright lies.  Anything, however trivial, finds its way to us, if we consider it important.   Hyperconnectivity provides a platform for a breadth of ‘situational awareness’ beyond even the wildest imaginings of MI6 or ASIO.

In a practical sense, sharing means every employee, no matter their position on the org chart, can now possess a detailed awareness of your organization.  When employees train their attention on something important to them, they can see how to connect with others who share that interest.

We begin by sharing everything, but as that becomes noisy (and boring), we focus on sharing those things which interest us most.  We forge bonds with others interested in the same things.  These networks of sharing provide an opportunity for everyone to involve themselves fully within any domain deemed important – or at least interesting.  Each sharing network becomes a classroom of sorts, where anyone expert in any area, however peculiar, becomes recognized, promoted, and well-connected.  If you know something that others want to know, they will find you.

In addition to everything else, we are each a unique set of knowledge, experience and capabilities which, in the right situation, proves uniquely valuable.  Yet because this expertise is mostly hidden from view, it is impossible for us to look at one another and see the depth that each of us carries within.  By sharing what we know, we advertise our expertise – and it follows us wherever we go.

Every time we share, we reveal the secret expert within ourselves.  Because we constantly share ourselves with our friends, family and co-workers, they come to rely on what we know.  But what of our colleagues?  We work in organizations with little sense of the expertise that surrounds us.

Before hyperconnectivity, it was difficult to share expertise.  You could reach a few people – those closest to you – but unless your skills were particularly renowned or valuable, that’s where it stopped.  For good or ill, our experience and knowledge now extend far beyond the circle of those familiar to us, throughout the entire organization.  With the right tools in place, everyone can have some awareness of the talents that pulse through the organization.


II: Mobility & Flexibility

Everyone now goes everywhere with a mobile in hand.  This means everyone is continually connected to the organization.  That has given us an office that has no walls, one which has expanded to fill every moment of our lives.  We need to manage that relationship and the tension between connectivity and capability.  People cannot always be available; people cannot always be ‘on’.  Instead, we must be able to establish boundaries, rules, conventions and practices which allow us to separate work from the rest of our lives, because we can no longer do so based on time or location.

We also need some way to be able to track the times and places we do work.  We’re long past the days of punching a timeclock.  In a sense, the mobile has become the work-whistle, timeclock and overseer, because it is the monitor.  This creates another tension, because people will not be comfortable if they believe their own devices are spying on them. Can organizations walk a middle path, which allows the mobile to enable more employee choice and greater freedom, without eternally tethering the employee to the organization?

This is a policy matter, not a technology matter, but technology is forcing the hand of policy.  How can technology come to the aid of that policy?  How can I know when it might be appropriate to contact an employee within my organization, and when it would be right out?  This requires more than a quick glance at an employee schedule.  The employee, mobile in hand, has the capacity to ‘check in’ and ‘check out’ of availability, and will do so if it’s relatively effortless.  Employees can manage their own time more effectively than any manager, given the opportunity.
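As a thought experiment, the ‘check in’ and ‘check out’ idea can be sketched in a few lines of code.  This is a minimal illustration, not any vendor’s product: the `Availability` class and its method names are invented for this sketch, which simply records each employee’s self-declared status so a manager can check it before ringing.

```python
from datetime import datetime

class Availability:
    """A minimal, in-memory sketch of employee-managed availability."""

    def __init__(self):
        # employee id -> (currently available?, when that status was set)
        self._status = {}

    def check_in(self, employee_id):
        """The employee marks themselves contactable."""
        self._status[employee_id] = (True, datetime.now())

    def check_out(self, employee_id):
        """The employee marks themselves off-limits."""
        self._status[employee_id] = (False, datetime.now())

    def can_contact(self, employee_id):
        """A manager's quick check before contacting someone.
        Unknown employees default to not-contactable."""
        available, _ = self._status.get(employee_id, (False, None))
        return available

board = Availability()
board.check_in("alice")
board.check_out("bob")
print(board.can_contact("alice"))  # True
print(board.can_contact("bob"))    # False
```

The design choice worth noting is the default: anyone who hasn’t checked in is treated as unavailable, which puts the employee, not the manager, in control of the boundary.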

It’s interesting to note that this kind of employee-driven ‘flextime’ has been approaching for nearly thirty years, but hasn’t yet arrived.  Flextime has proven curiously inflexible.  That’s a result of the restricted communication between employee and organization, which once happened mostly within the office and within office hours.  Now that communication is continuous and pervasive, now that the office is everywhere, flextime policies must be adjusted to accommodate the continuously-evolving needs of the organization’s employees.  The technology can support this – and we’re certainly flexible enough.  So these practices must come into line with our capabilities.

As practice catches up with technology, we need to provide employees with access to the tools which they can use to manage their own work lives.  This is the key innovation, because empowering employees in this way creates greater job satisfaction, and a sense of ownership and participation within the organization.  Just as we schedule time with our friends and for our hobbies, we should be able to manage our work lives.

Because we rely so heavily on mobiles, we lead very well-choreographed lives.  Were we to peek at a schedule, our time might look free, but our lives have a habit of forming themselves on-the-fly, sometimes only a few minutes in advance of whatever might be happening.   We hear our mobile chime, then read the latest text message telling us where we should be – picking up the kids, going to the shops, heading to a client.  Our mobiles are already in the driver’s seat.  Fourteen years ago, when I sat at Darling Harbour, waiting for my late friends, we had no sense that we could use pervasive mobile connectivity to manage our schedules and our friends’ schedules so precisely.  Now, it’s just the way things are.

Do we have back office practices which reflect this new reality?  Can an employee poke at their mobile and know where they’re expected, when, and why?  By this, I don’t mean calendaring software (which is important), but rather the rest of the equation, which allows employee and employer to come to a moment-by-moment agreement about the focus of that employment.

This is where we’re going.  The same processes at work in our private lives are grinding away relentlessly within our organizations.  Why should our businesses be fundamentally more restrictive than our family or friends, all of whom have learned how to adapt to the flexibility that the mobile has wrought?  This isn’t a big ask.  It’s not as though our organizations will tip into chaos as employees gain the technical capacity to manage their own time.  This is why policy is important.  Just because anything is possible doesn’t mean it’s a good idea.  Hand-in-hand with the release of tools must come training on how these tools should be used to strengthen an organization, and some warnings on how these same tools could undermine an organization whose employees put their own needs consistently ahead of their employer’s.

Once everyone has been freed to manage their own time, you have a schedule that looks more like Swiss cheese than the well-ordered blocks of time we once expected from the workforce.  Every day will be special, a unique arrangement of hours worked.  Very messy.  You need excellent tracking and reporting tools to tell you who did what, when, and for how long.  Those tools are the other side of the technology equation; give employees control, and you create the demand for a deeper and more comprehensive awareness of employee activities.

Managers can’t spend their days tracking employee comings and goings.  As our schedules become more flexible and more responsive to both employee and organizational needs, the amount of information a manager needs to absorb becomes prohibitive.  Managers need tools which boil down the raw data into easily digestible and immediately apprehensible summaries.

Not long ago, I did quite a bit of IT consulting for a local council, and one thing I heard from the heads of each of the council’s departments was how much the managers at the top needed a ‘dashboard’ which could give them a quick overview of the status of their departments, employee deployment, and the like.  A senior executive needs to be able to glance at something – either on their desktop computer, or with a few pokes on their mobile – and know what’s going on.
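The heart of any such dashboard is a rollup: raw activity records boiled down into a per-department summary a manager can take in at a glance.  The sketch below assumes a deliberately simple record shape – department, employee, hours worked, on-site or not – which is invented for illustration, not any council’s real schema.

```python
from collections import defaultdict

# Raw activity records, as they might arrive from employees' mobiles.
records = [
    {"dept": "Payroll", "employee": "alice", "hours": 6.5, "on_site": True},
    {"dept": "Payroll", "employee": "bob",   "hours": 4.0, "on_site": False},
    {"dept": "Parks",   "employee": "carol", "hours": 8.0, "on_site": True},
]

def summarize(records):
    """Roll raw records up into one glanceable line per department."""
    summary = defaultdict(lambda: {"headcount": 0, "hours": 0.0, "on_site": 0})
    for r in records:
        s = summary[r["dept"]]
        s["headcount"] += 1          # how many people worked today
        s["hours"] += r["hours"]     # total hours across the department
        s["on_site"] += r["on_site"] # how many were physically present
    return dict(summary)

for dept, s in summarize(records).items():
    print(dept, s)
```

The point of the exercise is the shape of the result: however messy the individual Swiss-cheese schedules become, the manager only ever sees a handful of numbers per department.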

Something necessary for senior management has utility throughout the organization.  The kind of real-time information that makes a better manager also makes a better organization, because employees who have access to real-time updates can change their own activities to meet rising demands.  The same flexibility which allows employees to schedule themselves also creates the opportunity for a thoroughly responsive and reconfigurable workforce, able to turn on a dime, because it is plugged in and well-aware of the overall status of the organization.

That’s the real win here; employees want flexibility to manage their own lives, and organizations need that flexibility to be able to respond quickly to both crises and opportunities.  The mobile empowers both employee and organization to meet these demands, provided there is sufficient institutional support to make these moment-to-moment changes effortless.

This is a key point.   Where there is friction in making a change, in updating a schedule, or in keeping others well-informed, those points of friction become the pressure points within the organization.  An organization might believe that it can respond quickly and flexibly to a crisis, only to find – in the midst of that crisis – that there is too much resistance to support the organizational demand for an instant change of focus.  An organization with too much friction divides its capabilities, becoming less effective through time, while an organization which has smoothed away those frictions multiplies its capabilities, because it can redeploy its human resources at the speed of light.

Within a generation, that’s the kind of flexibility we will all expect from every organization.  With the right tools in hand, it’s easy to imagine how we can create organizations that flow like water while remaining internally coherent.  We’re not there yet, but the pieces are now in place for a revolution which will reshape the organization.


III: Exposing Expertise

It’s all well and good to have a flexible organization, able to reconfigure itself as the situation demands, but that capability is useless unless supported by the appropriate business intelligence.  When a business pivots, it must be well-executed, lest it fly apart as all of the pieces fall into the wrong places, hundreds of square pegs trying to fill round holes.

Every employee in an organization has a specific set of talents, but these talents are not evenly distributed.  Someone knows more about sales, someone else knows more about marketing, or customer service, or accounting.  That’s why people have roles within an organization; they are the standard-bearers for the organization’s expertise.

Yet an employee’s expertise may lie across several domains.  Someone in accounting may also provide excellent customer service.  Someone in manufacturing might be gifted with sales support.  A salesman might be an accomplished manager.  People come into your organization with a wide range of skills, and even if they don’t have an opportunity to share them as part of their normal activities, those skills represent resources of immense value.

If only we knew where to find them.

You see, it isn’t always clear who knows what, who’s had experience where, or who’s been through this before.  We do not wear our employment histories on our sleeves.  Although we may enter an organization with our c.v. in hand, once hired it gets tucked away until we start scouting around for another job.  What we know and what we’ve done remains invisible.  Our professional lives look a lot like icebergs, with just a paltry bit of our true capabilities exposed to view.

One of the functions of a human resources department is to track these employee capabilities.  Historically, these capabilities have been strictly defined, with an appropriately circumscribed set of potentials.  These skills fill well-defined slots in the organization.  This model fit well when organizations treasured stability and order over flexibility and responsiveness.  But an organization that needs to pivot and reorient itself as conditions arise will ask employees to be ready to assume a range of roles as required.

How does an organization become aware of the potential hidden away within its employees?

I look out this afternoon and see an audience, about whom I know next to nothing.  There are deep reservoirs of knowledge and experience in this room, reservoirs that extend well beyond your core skills in payroll and human resources.  But I can’t see any of it.  I have no idea what we could do together, if we had the need.  We probably have enough skills here to create half a dozen world-class organizations.  But I’m flying blind.

You’re not.  Human resources is more than hiring and compliance.  It is an organizational asset, because HR is the keeper of the human inventory of skills and experiences.   As an employee interviews for a position and is hired, do you translate their c.v. into a database of expertise?  Do you sit them down for an in-depth interview which would uncover any other strengths they bring into the organization?  Or is this information simply lying dormant, like a c.v. stashed away in a drawer?

The technology to capture organizational skills is already widely deployed.  In many cases you don’t need much more than your normal HR tools.  This isn’t a question of tools, but rather, how those tools get used.  Every HR department everywhere is like a bank vault loaded up with cash and precious metals.  You could just close the vault, leaving the contents to moulder unused.  Or you can take that value and lend it out, making it work for you and your organization.
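To make the ‘bank vault’ concrete: the essential mechanism is nothing more than a searchable inventory mapping people to skills, queried whenever someone asks ‘who knows X?’.  The sketch below is an illustration only – the names and skill tags are invented, and a real HR system would draw these from interviews and c.v. data rather than a hard-coded dictionary.

```python
# A minimal expertise inventory: employee -> set of recorded skill tags.
# In practice this would be populated from c.v.s and hiring interviews.
inventory = {
    "alice": {"payroll", "sql", "customer service"},
    "bob":   {"accounting", "customer service"},
    "carol": {"manufacturing", "sales support"},
}

def who_knows(skill, inventory):
    """Return, alphabetically, everyone whose recorded expertise
    includes the given skill tag."""
    return sorted(name for name, skills in inventory.items()
                  if skill in skills)

print(who_knows("customer service", inventory))  # ['alice', 'bob']
print(who_knows("sales support", inventory))     # ['carol']
```

Even this toy version shows the payoff: the query surfaces Bob’s customer-service strength despite his accounting role – exactly the cross-domain talent the surrounding text argues goes unseen.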

That’s the power of an HR department which recognizes that business intelligence about the intelligence and expertise within your organization acts like a force multiplier.  A small organization with a strong awareness of its expertise punches far above its weight.  A large organization with no such awareness consistently misses opportunities to benefit from its unique excellence.

You hold the keys to the kingdom, able to unlock a revolution in productivity which can take your organizations to a whole new level of capability.  When anyone in the organization can quickly learn who can help them with a given problem, then reach that person immediately – which they now can, given everyone has a mobile – you have effectively swept away much of the friction which keeps organizations from reaching their full potential.

Consider the tools you already employ.  How can they be opened up to give employees an awareness of the depth of talent within your organization?  How can HR become a switchboard of capabilities, connecting those with needs to those who have proven able to meet those needs?  How can a manager gain a quick understanding of all of the human resources available throughout the organization, so that a pivot becomes an effortless moment of transition, not a chaotic chasm of confusion?

This is the challenge for the organizations of the 21st century.  We have to learn how to become flexible, fluid, responsive and mobile.  We have to move from ignorance into awareness.  We have to understand that the organization as a whole benefits from an expanded awareness of itself.  We have to do these things because newer, more nimble competitors will force these changes on us.   Organizations that do not adapt to the workforce and organizational movements toward flexibility and fluidity will be battered, dazed and confused, staggering from crisis to crisis.  Better by far to be on the front foot, walking into the future with a plan to unleash the power within our organizations.


People Power

Introduction: Magic Pudding

To effect change within governmental institutions, you need to be conscious of two important limits.  First, resources are always at a premium; you need to work within the means provided.  Second, regulatory change is difficult and takes time.  When these limitations are put together, you realize that you’ve been asked to cook up a ‘magic pudding’.  How do you work this magic?  How do you deliver more for less without sacrificing quality?

In any situation where you are being asked to economize, the first and most necessary step is to conduct an inventory of existing assets.  Once you know what you’ve got, you gain an insight into how these resources could be redeployed.  On some occasions, that inventory returns surprising results.

There’s a famous example, from thirty years ago, involving Disney.  At that time, Disney was a nearly-bankrupt family entertainment company.  Few went to see their films; the firm’s only substantial income came from its theme parks and character licensing.  In desperation, Disney’s directors brought on Michael Eisner as CEO.  Would Eisner need to sell Disney at a rock-bottom price to another entertainment company, or could it survive as an independent firm?  First things first: Eisner sent his right-hand man, Frank Wells, off to do an inventory of the company’s assets.  There’s a vault at Disney, where they keep the master prints of all of the studio’s landmark films: Snow White and the Seven Dwarfs, Pinocchio, Peter Pan, Bambi, One Hundred and One Dalmatians, The Jungle Book, and so on.  When Wells walked into the vault, he couldn’t believe his eyes.  Every few minutes he called Eisner at his desk to report, “I’ve just found another hundred million dollars.”

Disney had the best library of family films created by any studio – but kept them locked away, releasing them theatrically at multi-year intervals designed to keep them fresh for another generation of children.  That worked for forty years, but by the mid-1980s, with the VCR moving into American homes, Eisner knew more money could be made by taking these prize assets and selling them to every family in the nation – then the world.  That rediscovery of locked-away assets was the beginning of the modern Disney, today the most powerful entertainment brand on the planet.

When I began to draft this essay, I felt as constrained as Disney, pre-Eisner.  How do you bake a magic pudding?  Eventually, I realized that we actually have incredible assets at our disposal, ones which didn’t exist just a few years ago. Let’s go on a tour of this hidden vault.  What we now have available to us, once we learn how to use it, will change everything about the way we work, and the effectiveness of our work.


I: What’s Your Number?

The latest surveys put the mobile subscription rate in Australia between 110 and 115 percent.  Clearly, this figure is a bit misleading: we don’t give children mobiles until they’re around eight years old, nor do the most senior of seniors own them in great numbers.  The vast middle, from eight to eighty, do have mobiles.  Many of us have more than one mobile – or some other device, like an iPad, which uses a mobile connection for wireless data.  This all adds up.  Perhaps one adult in fifty refuses to carry a mobile most of the time, so out of a population of nearly 23 million, we have about 24 million mobile subscribers.

This all happened in an instant; mobile ownership was below 10% in 1993, but by 1997 Australia had passed 50% saturation.  We never looked back.  Today, everyone has a number – at least one number – where they can be reached, all the time.  Although Australia has had telephones for well over a hundred years, a mobile is a completely different sort of device.

A landline connects you to a place: you ring a number to reach a specific telephone in a specific location.  A mobile connects you to a person.  On those rare occasions when someone other than a mobile’s owner answers it, we experience a moment of great confusion.  Something is deeply disturbing about this, a bit like body-snatching.  The mobile is the person; the person is the mobile.  When we leave the mobile at home – rushed or tired – or temporarily misplace it, we feel considerably more vulnerable.

The mobile is the lifeline which connects us into our community: our family, our friends, our co-workers.  This lifeline is pervasive and continuous.  All of us are ‘on call’ these days, although nearly all of the time this feels more like a relief than a burden.  When the phone rings at odd hours, it’s not the boss, but a friend or family member who needs some help.  Because we’re continuously connected, that help is always there, just ten digits away. We’ve become very attached to our mobiles, not in themselves, but because they represent assistance in its purest form.

As a consequence, we are away from our mobiles less and less; they spend the night charging on our bedstands, and the days in our pockets or purses.

Last year, a young woman approached me after a talk, and said that she couldn’t wait until she could have her mobile implanted beneath her skin, becoming a part of her.  I asked her how that would be any different than the world we live in today.

This is life in modern Australia, and we’re not given to think about it much, except when we ponder whether we should be texting while we drive, or feel guilty about checking emails when we should really be listening to our partner.  This constant connectivity forms a huge feature of the landscape, a gravitational body which gently lures us toward it.

This connectivity creates a platform – just like a computer’s operating system – for running applications.  These applications aren’t software, they’re ‘peopleware’.  For example, fishermen off India’s Kerala coast call around before they head into port, looking for the markets most in need of their catch.  Farmers in Kenya make inquiries to their local markets, looking for the best price for their vegetables.  Barbers in Pakistan post a sign with their mobile number, buy a bicycle, and cut their clients’ hair in their homes.  The developing world has latched onto the mobile because it makes commerce fluid, efficient, and much more profitable.

If the mobile does that in India and Kenya and Pakistan, why wouldn’t it do the same thing for us, here in Australia?  It does lubricate our social interactions: no one is late anymore, just delayed.  But we haven’t used the platform to build any applications to leverage the brand-new fact of our constant connectivity.  We can give ourselves a pass, because we’ve only just gotten here.  But now that we are here, we need to think hard about how to use what we’ve got.  This is our hundred-million dollar moment.


II: Sharing is Daring

A few years ago, while I waited at the gate for a delayed flight out of San Francisco International Airport, I grew captivated with the information screens mounted above the check-in desks.  They provided a wealth of information that wasn’t available from airline personnel; as my flight changed gates and aircraft, I learned of this by watching the screen.  At one point, I took my mobile out of my pocket and snapped a photo of the screen, sharing the photo with my friends, so they could know all about my flying troubles.  After I’d shot a second photo, a woman approached me, and carefully explained that she was talking to another passenger on our delayed flight, a woman who worked for the US Government, and that this government employee thought my actions looked very suspicious.

Taking photos in an airport is cause for alarm in some quarters.

After I got over my consternation and surprise, I realized that this paranoid bureaucrat had a point.  With my mobile, I was breaching the security cordon carefully strung around America’s airports, piercing the veil of security which hid the airport from the view of all except those who had been carefully screened.  We see this same sensitivity at the Immigration and Customs facilities at any Australian airport – numerous signs inform you that you’re not allowed to use your mobile.  Communication is dangerous.  Connecting is forbidden.

We tend to forget that sharing information is a powerful act, because it’s so much a part of our essential nature as human beings.

In November, Wikileaks shared a massive store of information previously held by the US State Department; just one among a quarter million cables touched off a revolt in Tunisia, leading to uprisings in Egypt, Bahrain, Yemen, Libya, Syria and Jordan.  Sharing changes the world.  Actually, sharing is the foundation of the human world.  From the moment we are born, we learn about the world because everyone around us shares with us what they know.

Suddenly, there are no boundaries on our sharing.  All of us, everywhere – nearly six billion of us – are only a string of numbers away.  Type them in, wait for an answer, then share anything at all.  And we do this.  We call our family to tell them we’re ok, our friends to share a joke, and our co-workers to keep coordinated.  We’ve achieved a tremendously expanded awareness and flexibility that’s almost entirely independent of distance.  That’s the truth at the core of this hundred-million dollar moment.

All of your clients, all of your patients, all of your stakeholders – and all of you – are unbelievably well connected.  By the standards of just a generation ago, we are all continuously available.  Yet we still organize our departments and deliver our services as if everyone were impossibly far-flung, hardly ever in contact.

Still, the world is already busy reorganizing itself to take advantage of all this hyperconnectivity.

I’ve already mentioned the fishermen and the farmers, but as I write this, I’ve just read an article titled “US Senators call for takedown of iPhone apps that locate DUI (RBT) checkpoints.”  You can buy a smartphone app which allows you to report on a checkpoint, posting that report to a map which others can access through the app.  You could conceivably evade the long arm of the law with such an app, drink driving around every checkpoint with ease.

Banning an app like this simply won’t work.  There are too many ways to do this, from text messages to voice mail to Google Maps to smartphone apps.  There’s no way to shut them all down.  If the Senate passes a law to prevent this sort of thing – and they certainly will try – they’ll find that they’ve simply moved all of this connectivity underground, into ‘darknets’ which evade detection.

This is how potent sharing can be.  We all want to share.  We have a universal platform for sharing.  We must decide what we will share.  When people get onto email for the first time, they tend to bombard their friends and family with an endless stream of bad jokes and cute photographs of kittens and horribly dramatic chain letters.  Eventually they’ll back off a bit – either because they’ve learned some etiquette, or because a loved one has told them to buzz off.

You also witness that exuberant sharing in teenagers, who send and receive five hundred text messages a day.  When this phenomenon was first spotted in Tokyo a decade ago, many thought it was simply a feature peculiar to the Japanese.  Today, everywhere in the developed world, young people send a constant stream of messages which generally say very little at all.  For them, it’s not important what you share; what is important is that you share it.  You are the connections, you are the sharing.

That’s great for the young – some have suggested that it’s an analogue to the ‘grooming’ behavior we see in chimpanzees – but we can wish for more than a steady stream of ‘hey’ and ‘where r u?’  We can share something substantial and meaningful, something salient.

That salience could be news of the nearest RBT checkpoint, or, rather more helpfully, it might be a daily audio recording of the breathing of someone suffering from Chronic Obstructive Pulmonary Disease (COPD).  It turns out that just a few minutes spent listening to the sufferer – at home, in front of a computer or, presumably, a smartphone – will cut their hospitalizations in half, because smaller problems can be diagnosed and treated before they become life-threatening.  A trial in Tasmania demonstrated this conclusively; it’s clear that using this connection to listen to the patient can save lives, dollars, and precious time.

This is the magic pudding, the endless something from nothing.  But nothing is ever truly free.  There is a price to be paid to realize the bounty of connectivity.  Our organizations and relations are not structured to advantage themselves in this new environment, and although it costs no money and requires no changes to the law, transforming our expectations of our institutions – and of one another – will not be easy.


III:  Practice Makes Perfect

To recap: Everyone is connected, everyone has a mobile, everyone uses them to maintain continuous connections with the people in their lives.  This brand-new hyperconnectivity provides a platform for applications.

The first and most natural application of connectivity is sharing, an activity that begins with the broad and unfocused but moves to the specific and salient as we mature in our use of the medium.  This maturation is both individual and institutional, though at the present time individuals greatly outpace any institution in both their agility with and understanding of these new tools.

Our lives online are divided into two separate but unequal spheres; this is a fundamental dissonance of our era.  Teenagers send hundreds of text messages a day, aping their parents, who furiously respond to emails sent to their mobiles while posting Twitter updates.  But all of this is happening outside the institution, or, in a best-practice scenario, serves to reinforce the existing functionality of the institution.  We have not rethought the institution – how it works, how it faces its stakeholders and serves its clients – in the light of hyperconnectivity.

This seems too alien to contemplate – even though we are now the aliens.  We live in a world of continuous connection; it’s only when we enter the office that we temper this connection, constraining it to meet the needs of organizational process.

If we can develop techniques to bring hyperconnectivity into the organization, to harness it institutionally, we can bake that magic pudding.  Hyperconnectivity provides vastly greater capability at no additional cost.  It’s an answer to the problem.  It requires no deployment, no hardware, no budgeting or legislative mandates.  It only requires that we more fully utilize everything we’ve already got.

To do that, we must rethink everything we do.

Service delivery in health is notoriously difficult to scale.  You must throw more people at a service to get more results.  All the technology and process management in the world won’t get you very far.  You can make systems more efficient, but you can’t make them radically more effective.  This has become such a truism in the health care sector that technology is now almost an ironic punchline within the field.  So much was promised, and so much has consistently under-delivered, that most have become somewhat cynical.

There are no magic wands to wave around, to make your technology investments more effective.  This isn’t a technology-led revolution, although it does require some technology.  This is a revolution in relationship, a transformation from clients and customers into partners and participants. It’s a revolution in empowerment, led by highly connected people sharing information of vital importance to them.

How does this work in practice?  The COPD ‘Pathways’ project in Tasmania points the way toward one set of services, which aim at using connectivity to monitor progress and wellness.  Could this be extended to individuals with chronic asthma, diabetes, high blood pressure, or severe arthritis?  If one is connected, rather than separate, if one is in constant communication, rather than touching base for widely-spaced check-ins, then there will be a broad awareness of patient health within a community of carers.

The relationship is no longer one-way, pointing the patient only toward the health services provider.  It becomes multilateral, multifocal, and multiparticipatory.  This relationship becomes the meeting of two networks: the patient’s network of family, friends and co-afflicted, meeting the health network of doctors and nurses, generalists and specialists, clinicians and therapists.  The meeting of these two always-on networks forms another continuity, another always-on network, focused around the continuity of care.

If we tried to do something like this today, with our present organizational techniques, the health service providers would quickly collapse under the additional demands on time and connectivity that such continuity of patient care requires.  Everything currently points toward the doctor, who is already overworked and impossibly time-poor.  Amplifying the connection burden on the doctor is a recipe for disaster.

We must build upon what works, while restructuring these relationships to reflect the enhanced connectivity of all the parties within the healthcare system.  Instead of amplifying the burden, we must use the platform of connectivity to share the load, to spread it out across many shoulders.

For example, consider the hundreds of thousands of carers looking after Australians with chronic illnesses and disabilities.  These carers are the front line.  They understand the people in their care better than anyone else – better even than the clinicians who treat them.  They know when something isn’t quite right, even though they may not have the language for it.

At the moment Australia’s carers live in a world apart from the various state health care systems, and this means that an important connection between the patient and that system is lacking.  If the carer were connected to the health care system – via a service that might be called CarerConnection – there would be better systemic awareness of the patient, and a much greater chance to catch emerging problems before they require drastic interventions or hospitalizations.

These carers, like the rest of Australia, already have mobiles.  Within a few years, all those mobiles will be ‘smart’, capable of snapping a picture of a growing rash, or a video of someone’s unsteady gait, ready to upload it to anyone prepared to listen.  That’s the difficult part of this equation, because at present the health care system can’t handle inquiries from hundreds of thousands of carers, even if doing so frees up doctors’ surgeries and hospital beds.

Perhaps we can employ nurses on their way to a gradual retirement – in the years beyond age 65 – to connect with the carers, triaging, escalating or reassuring as necessary.  In this way Australia empowers its population of carers, creates a better quality of life for those they care for, and moves some of the burden for chronic care out of the health care system.

That kind of innovative thinking – which came from workshops in Bendigo and Ballarat – shows the real value of connectivity in practice.  But that’s just the beginning.  This type of innovation would apply equally effectively to substance abuse recovery programs, mesothelioma or cystic fibrosis.  And it reaches beyond health service delivery, to education and city management as well.

This is good old-fashioned ‘people power’ as practiced in every small town in Australia, where everyone knows everyone else, looks out for everyone else, and is generally aware of everyone else.  What’s new is that the small town is now everywhere, whether in Camperdown or Bendigo or Brunswick, because the close connectivity of the small town has come to us all.

The aging of the Australian population will soon force changes in service delivery.  Some will see this as a clarion call for cutbacks, a ‘shock doctrine’, rather than an opportunity to re-invent the relationships between service providers and the community.  This slowly unfolding crisis provides our generation’s best chance to transform practices to reflect the new connectivity.

It’s not necessary to go the whole distance overnight.  This is all very new, and examples of how to make connectivity work within healthcare are still thin on the ground.  Experimentation and sharing are the orders of the day.  If each regional area in Victoria started up one experiment – a project like CarerConnection – then shared the results of that experiment with the other regions, there’d soon be a virtual laboratory of different approaches, with the possibility of some big successes, and, equally, the chance of some embarrassing failures.  Yet the rewards greatly outweigh any risks.

If this is all done openly, with patients and their community fully involved and fully informed, even the embarrassments will not sting – very much.

In order to achieve more with less, we must ask more of ourselves, approaching our careers with the knowledge that our roles will be rewritten.  We must also ask more of those who come forward for care.  They grew up in the expectation of one sort of relationship with their health services providers, but they’re going to live their lives in another sort of arrangement, which blurs boundaries and which will feel very different – sometimes, more invasive.  Privacy is important, but to be cared for means to surrender, so we must come to expect that we will negotiate our need for privacy in line with the help we seek.

The magic pudding isn’t really that magic. The recipe calls for a lot of hard work, a healthy dash of risk taking, a sprinkle of experiments, and even a few mistakes.  What comes out of the oven of innovation (to stretch a metaphor beyond its breaking point) will be something that can be served up across Victoria, and perhaps across the nation.  The solution lies in people connected, transformed into people power.

The Soul of Web 2.0

Introduction: In The Beginning

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus Pagemaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful.  But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’.  Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century.  Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web.  Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real.  I was one of them.  From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu.  This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web.  The Web wasn’t as functional as Xanadu – copyright management was a solved problem in Xanadu, whereas on the Web it continues to bedevil us – and Xanadu’s links were two-way affairs: you could follow a link from its destination back to its source.  But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died.  The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released.  ‘The perfect is the enemy of the good’, and nowhere is this clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s.  When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits to 56 kilobits per second.  That’s the line for all of the traffic heading from one coast to the other.  I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, a million-fold increase.  And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand.  And I wasn’t impressed.  In July 1993 very little content existed for the Web – just a handful of sites, mostly academic.  Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense.  I walked away from the computer that July afternoon wanting more.  Hypertext systems I’d seen before.  What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy.  Instead of a handful of sites, there were now hundreds.  There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site on the list.  By Friday evening I was finished.  I had surfed the entire Web.  It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993.  Then things began to explode.

From October on I became a Web evangelist.  My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic.  That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities.  As someone familiar walked in the door at the Salon, I walked up to them and took them over to my computer.  “What’s something you’re interested in?” I’d ask.  They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica” and I’d go to the newly-created category index of the Web, known as Yahoo!, and still running out of a small lab on the Stanford University campus, type in their interest, and up would come at least a few hits.  I’d click on one, watch the page load, and let them read.  “Wow!” they’d say.  “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos.  All I did was hook people by their own interests.  What happened in January 1994 in San Francisco would happen throughout the world in January 1995 and January 1996, and it is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented.  The individual, with their needs, their passions, their opinions, their desires and their goals is always paramount.  We tend to forget this, or overlook it, or just plain ignore it.  We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate.  It’s not that we should ignore these considerations, but they are always secondary.  The Web is a ground for being.  Individuals do not present themselves as receptacles to be filled.  They are souls looking to be fulfilled.  This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web.  I’ll now break some of these lessons down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication.  I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late night missives from insomniac friends, that sort of thing.  As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared.  The sharing instinct is innate and immediate.  We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it.  We’ve always known this; it’s part of being human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another.  It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby.  Everyone carries that hundred and fifty around inside of them.  Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with.  It’s automatic, requires no thought.  We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing.  Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten.  We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us.  It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does that work with creations that may be Web-based, or similarly constrained?  We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter.  You have to do more than request sharing.  You have to think through the entire goal of sharing, from the user’s perspective.  Are they sharing this because it’s interesting?  Are they sharing this because they want company?  Are they sharing this because it’s a competition or a contest or collaborative?  Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project.  What is it about the design of your work that excites them to share it with others?  Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential?  In other words, is there space only for one, or is there room to spread the word?  Why would anyone want to share your work?  You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question.  How will your work be shared?

Your works do not exist in isolation.  They are part of a continuum of other works.  Where does your work fit into that continuum?  How do the instructor and student approach that work?  Is it a top-down mandate?  Or is it something that filters up from below as word-of-mouth spreads?  How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected.  Is it simply via email – do all the students have email addresses?  Do they know the email addresses of their friends?  Or do you want your work shared via SMS?  A QRCode, perhaps?  Or Facebook or Twitter or, well, who knows?  And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today.  It becomes painfully obvious when it’s been overlooked.  For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend.  There was simply no way to do that.  (I don’t know if this has changed recently.)  That made the iPad version of The Australian significantly less functional than the website version – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes.  The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts.  Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd.  Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this.  Everyone’s there, but no one is wholly aware of anyone else’s presence.  You might see an edit to a page, or a new post on the discussion for a particular topic, but that’s as close as people come to one another.  Most of the connecting for the Wikipedians – the folks who, behind the scenes, make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose.  These are the social networks: Facebook, MySpace, LinkedIn, and so on.  In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication.  But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless.  Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting.  There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths.  Where you can poll your friends on Facebook, on Twitter you can poll a planet.  How do I solve this problem?  Where should I eat dinner tonight?  What’s going on over there?  These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles.  It is not Facebook-versus-Twitter; it is not tight connections versus loose connections.  It’s a bit of both.  Where does your work benefit from a tight collective of connected individuals?  Is it some sort of group problem-solving?  A creative activity that really comes into its own when a whole band of people play together?  Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms?  A task that constantly brings your friends to mind is the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog, playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’ – that’s the kind of task which benefits from loose connectivity.  Not every project will need both kinds of connecting, but almost every one will benefit from one or the other.  We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed.  (We are also more likely to reinforce each others’ prejudices and preconceptions, but that’s a longstanding matter which technology cannot help but amplify.)  Life is meaningful because we, together, give it meaning.  Life is bearable because we, together, bear the load for one another.  Human life is human connection.

The Web today is all about connecting.  That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it.  So how do your projects allow your users to connect?  Does your work leave them alone, helpless, friendless, and lonely?  Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic?  Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica.  That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun.  It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute.  There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy.  (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.)  By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of what its contributors have to offer.  For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook.  This is less because of Wikipedia-as-a-resource than it is because we all share a sense of pride-in-ownership of Wikipedia.  Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles.  Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us.  This is a powerful logic, an attraction which transcends the rational.  People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine that a time will come when Wikipedia is complete.  If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia.  Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than in 2005.  With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights.  Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth.  It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought.  It will mean that we have captured the better part of human knowledge in a form accessible to all.  That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’.  It is a work-in-progress.  Google understands this and releases interminable beta versions of every product.  More than this, it means that nothing needs to offer all the answers.  I would suggest that nothing should offer all the answers.  Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you.  It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility.  There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done.  This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system.  The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up.  TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments.  RateMyProfessors.com is the holy terror of the academy in the United States.  Each of these websites has had to design systems which allow users to self-regulate peer contributions.  In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags a contribution for later moderation.  Wikipedia promulgated a directive that strongly encourages contributors to provide a footnote linking to supporting material.  TripAdvisor gives anonymous reviewers a lower ranking.  eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade.  Each of these is a social solution to a social problem.
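The ‘report this post’ mechanism just described is simple enough to sketch in code.  The following is a minimal, hypothetical version – not any site’s actual implementation, and the three-flag threshold is an invented value – in which reports from distinct users accumulate against a contribution until it is hidden pending moderator review:

```python
# A minimal, hypothetical sketch of the 'report this post' pattern --
# not any real site's code.  Flags from distinct users accumulate
# against a contribution; past a threshold it is queued for moderation.
from dataclasses import dataclass, field

FLAG_THRESHOLD = 3  # invented value; real sites tune this constantly


@dataclass
class Contribution:
    author: str
    text: str
    flags: set = field(default_factory=set)  # users who reported this item

    def report(self, reporting_user: str) -> None:
        if reporting_user != self.author:  # ignore self-reports
            self.flags.add(reporting_user)

    @property
    def needs_moderation(self) -> bool:
        return len(self.flags) >= FLAG_THRESHOLD


post = Contribution("anon", "a dubious, unsourced claim")
for user in ("alice", "bob", "carol"):
    post.report(user)
print(post.needs_moderation)  # True: three distinct users have flagged it
```

Note that the social strategy lives in the details: counting distinct users rather than raw clicks blunts a single angry reader, and ignoring self-reports closes one obvious loophole.  The same skeleton, with different thresholds and weights, underlies reviewer rankings and transaction ratings as well.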

Web 2.0 is not a technology.  It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil.  It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention.  Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a Web 2.0 concept that should guide you in your design work is iteration.  Nothing is ever complete, nor ever perfect.  The perfect is the enemy of the good, so if you wait for perfection, you will never release.  Instead, watch your users, see if they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work.  In their more uncharitable moments, do they abuse the freedoms you have given them?  If so, how can you redesign your work and ‘nudge’ them into better behavior?  It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem.  Although Moses came down from Mount Sinai with all ten commandments, you cannot and should not expect to get it right on a first pass.  Instead, release, observe, adapt, and re-release.  All releases are soft releases, everything is provisional, and nothing is quite perfect.  That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter.  Although they seem to be similar, they couldn’t be more different.  Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself.  If you want to have access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page.  Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut down your application if they don’t like it, or perceive it as somehow competitive with Facebook.  Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach.  From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users.  Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data.  Twitter provided very clear (and remarkably straightforward) instructions on how to access their data, and threw the gates open wide.

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks.  People have developed computer programs that send Tweets when the program is about to crash, created vast art projects which allow the public to participate from anywhere around the world, and even built a little belt, worn by a pregnant woman, which sends out a Tweet every time the baby kicks!  It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself.  Twitter has become a building block: when you write a program which needs to send a message, you use Twitter.  Facebook isn’t a building block.  It’s a monolith.

How do you build for openness?  Consider: another position the user might occupy is someone trying to use your work as a building block within their own project.  Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again?  Or is it opaque, seamless, and closed?  What about the data you collect, data the user has generated?  Where does that live?  Can it be exported and put to work in another application, or on another website?  Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful).  The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them.  If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
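One of the standards mentioned above, RSS, makes a concrete example of leaving the door open.  The following is a minimal sketch of exporting user-generated data as an RSS 2.0 feed that any other application can consume; the site name, URL, and items are hypothetical placeholders, not a real service:

```python
# A minimal sketch of openness: exporting user-generated data as a
# standard RSS 2.0 feed, so other applications can re-use it as a
# building block.  Site name, URL, and items are hypothetical.
import xml.etree.ElementTree as ET

def build_rss(site_title, site_url, items):
    """items: list of (title, link, description) tuples from your own data store."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = site_title
    ET.SubElement(channel, "link").text = site_url
    ET.SubElement(channel, "description").text = "Latest user contributions"
    for title, link, description in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
        ET.SubElement(item, "description").text = description
    return ET.tostring(rss, encoding="unicode")

feed = build_rss(
    "Example Community",                # hypothetical site
    "http://example.com",
    [("First post", "http://example.com/1", "A user contribution")],
)
print(feed)
```

Because the output is a well-known standard rather than a proprietary format, anyone can point a feed reader, aggregator, or their own program at it – the brick rather than the brick wall.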

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’  You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user.  These are not precisely the same Web2.0 domains others might identify.  That’s because Web2.0 has become a very ill-defined term.  It can mean whatever we want it to mean.  But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another.  In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create.  We need to make room for them.  If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

Transforming Governance

My keynote address to the South Australian State Government conference, “The Digital Media Revolution”, in Adelaide, South Australia, 26 April 2008.


Why We Wiki


When I was a young man, I was obsessed by computers. I remember perfectly the first time I sat at a keyboard – at a “line printing” terminal, which had an endless sheet of paper spooling through it – to play a game of “Star Trek”. The fascination I felt at that moment has never really ended, nor has the sense of wonder, nor the desire to dive in and learn everything about this seemingly magical machinery. My timing was excellent; within a few years the first “microcomputers”, such as the Tandy TRS-80, came onto the market at affordable prices, and I could plumb the guts of computing with my very own machine.

This was incredibly fortuitous, because I was not a good student at University; or rather, I excelled at some classes and completely failed others. I had not yet learned the discipline to apply myself to unpleasant tasks (even today, nearly thirty years later, it presents difficulties), so my grades were a perfect reflection of my obsessions. If something interested me, I got As. Otherwise, well, my transcript speaks for itself. The University noted this as well, and politely asked me to “get lost” for a few years, until I had acquired the necessary discipline to focus on my education. That marked the end of my formal education, but that doesn’t mean I stopped learning. Far from it.

From my earliest years, I have been a sponge for information; my parents bought me the World Book Encyclopedia when I was six – twenty red-and-black leather-bound volumes, full of photographs and illustrations – and by the time I was eight, I’d read the whole thing. (I hadn’t memorized it, but I had read through it.) Once I discovered computers, I devoured anything I could find on the subject, in particular the January 1975 issue of Popular Electronics, which featured the MITS Altair 8800 – the world’s first microcomputer – on its cover. I dived in, learning everything about microcomputers: how they worked, how to program them, what they could be used for, until I had one of my own. Then I learned everything about that computer (the Tandy TRS-80), its CPU (the Zilog Z-80), experimented with programming it in BASIC and assembly language, becoming completely obsessive about it.

When I found myself tossed out of University, my obsession quickly turned into a job offer programming Intel 8080 systems (very similar to the Z-80), which led to a fifteen-year career as a software engineer, for which I was well paid, and within a field where my lack of University degrees in no way hindered my professional advancement. In the 1980s, nearly everyone working within microcomputing was an autodidact; almost none of these people had completed a university degree. I had the fortune to work with a few truly brilliant programmers in my earliest professional years, who mentored me in best programming practices. I learned as they transferred their wealth of experience and helped me make it my own.

It is said that programming is more of a craft than a profession, in that it takes years of apprenticeship, working under masters of the craft, to reach proficiency. This is equally true of most professions: medicine, the law, even (or perhaps, especially) such arcana as synthetic chemistry. At its best, post-graduate education is a mentorship process which wires the obsessions of the apprentice to the wisdom of the master. The apprentice proposes, the master disposes, until the apprentice surpasses the master. The back-and-forth informs both apprentice and master; both are learning, though each learns different things.


Everyone is an expert. From a toddler, expert in the precarious balance of towering wooden blocks, to a nanotechnologist, expert in the precarious juxtaposition of atom against atom, everyone has some field of study wherein they excel – however esoteric or profane it might seem to the rest of us. The hows and whys of this are essential to human nature; we’re an obsessive species, and our obsessions can form around almost any object which engages our attention. Most of these obsessions seem completely natural, in context: a Pitjandjara child learns an enormous amount about the flora and fauna of the central Australian desert, knows where to find water and shade, can recite the dreamings which place her within the greater cosmos. In the age before agriculture, all of us grew up with similar skills, each of us entirely obsessed with the world around us, because within that obsession lay the best opportunity for survival. Those of our ancestors who were most apt with obsession (up to a point) would thrive even in the worst of times, passing those behaviors (some genetic, some cultural) down through time to ourselves. But obsession is not a vestigial behavior; the entire bedrock of civilization is built upon it: specialization, that peculiar feature of civilization, where each assumes a particular set of duties for the whole, is simply obsession by another name.

A century ago, Jean Piaget realized that small children are obsessed with the physics of the world. Piaget watched as his own children struggled, inchoate, with elaborate hypotheses of causality, volume, and difference, constantly testing their own theories of how the world works, an operation as intent as any performed in the laboratory.

Language acquisition is arguably the most marvelous of all childish obsessions; in the space of just a few years – coincident with developments in the nervous system – the child moves from sonorous babbling into rich, flexible, meaningful speech – a process which occurs whether or not explicit instruction is given to the child. In fact, the only way to keep a child from learning language is to separate them from the community of other human beings. Even the banter of adults is enough for a child to grow into language.

Somewhere between early childhood and early adulthood the thick rope of obsession unwinds to a few mere threads. Most of us are not that obsessive, most of the time, or rather, our obsessions have shifted from the material to the immaterial. Adolescent girls become obsessive people-watchers, huddling together in cliques whose hierarchies and connections are so rich and so obscure as to be worthy of any hermetic cult. This process occurs precisely at the time their highest brain functions are realized, when they become acutely aware of the social networks within which they operate. Physics pales into insignificance when weighed against the esteem (or contempt) of one’s peers. This, too, is a survival mechanism: women, as the principal caregivers, need strong social networks to ensure that their offspring are well cared for. Women who obsessively establish and maintain strong social networks deliver their children a decisive advantage in life, and so pass this behavior along to their children. Or so the thinking goes.


Mentoring is an embodied relationship, and does not scale beyond individuals. The sharing of expertise, on the other hand, has grown hand in hand with the printing press, the broadcast media, and the Web. Publishing and broadcasting both act as unintentional gatekeepers on the sharing of expertise; the costs of publishing a book (or magazine, or pamphlet), and the costs of broadcast spectrum set a lower limit on what specific examples of individual expertise make the transition into the public mind. For every Julia Child or Nigella Lawson, there are a thousand cooks who produce wonders from their kitchens; for every Simon Schama or David Halberstam, there are a thousand historians (most of whom are not white English-speakers) spinning tales of antiquity. These voices were lost to us, because they could not negotiate the transition into popularity. This should not be read as a flat assessment of quality, but as a critique of the function of the market. Mass markets thrive on mass tastes; the specific is sacrificed for the broad. Yet the specific is often far more significant to the individual, containing within itself the quality of salience. Salience – that which is significant to us – is driven by our obsessions; things are salient because we are obsessed by them. The “salience gap” between the expertise delivered by the marketplace, and the burning thirst for knowledge of obsessed individuals has finally collapsed with the introduction of the Wiki.

At its most essential, a Wiki is simply a web page that is editable within a Web browser. While significant, that is not enough to explain why Wikis have unlocked humanity’s hidden and vast resources of expertise. That you can edit a web page in situ is less important than the goal of the editor. It took several years before it occurred to anyone that the editor could use a Wiki to share expertise. However, once that innovation occurred, it was rapidly replicated throughout the Internet on countless other Wikis.
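The essential mechanism – a page anyone can rewrite, with every revision retained so mistakes are cheap to undo – can be sketched in a handful of lines. This is an illustrative toy, not any real wiki engine; the class name, page, and editors are invented:

```python
# A minimal sketch of the wiki idea: any reader can edit a page in
# place, and every revision is kept so bad edits can be reverted.
# The class, page, and editor names are hypothetical placeholders.

class MiniWiki:
    def __init__(self):
        self.history = {}  # page title -> list of (author, text) revisions

    def edit(self, title, author, text):
        # Editing is open to anyone; each save appends a new revision.
        self.history.setdefault(title, []).append((author, text))

    def read(self, title):
        # Readers always see the latest revision.
        revisions = self.history.get(title)
        return revisions[-1][1] if revisions else None

    def revert(self, title):
        # Self-regulation: because history is kept, vandalism is
        # cheaper to undo than it was to commit.
        revisions = self.history.get(title, [])
        if len(revisions) > 1:
            revisions.pop()

wiki = MiniWiki()
wiki.edit("Sydney", "alice", "Sydney is the capital of New South Wales.")
wiki.edit("Sydney", "mallory", "Sydney is on the Moon.")   # vandalism
wiki.revert("Sydney")                                      # community undoes it
print(wiki.read("Sydney"))  # Sydney is the capital of New South Wales.
```

Everything else a real wiki adds – markup, discussion pages, watchlists – is elaboration on this core loop of open editing plus retained history.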

Early in this process, Wikipedia launched and began its completely unexpected rise into utility. In some ways, Wikipedia has an easy job: as an encyclopedia it must provide a basic summary of facts, not a detailed exploration of a topic, and it is generally possible for someone with a basic background in a topic to provide this much information. Yet this critique overlooks the immense breadth of Wikipedia (as of this writing, nearly 2.3 million articles in its English-language version). By casting its net wide – inviting all experts, everywhere, to contribute their specific knowledge – not only has Wikipedia covered the basics, it’s also covering everything else. No other encyclopedia could hope to be as comprehensive as Wikipedia, because no group of individuals – short of the billion internet-connected individuals who have access to Wikipedia – could be so comprehensively knowledgeable on such a wide range of subjects.

Wikipedia will always remain a summary of human knowledge; that is its intent, and there are signs that the Wikipedians are becoming increasingly zealous in their enforcement of this goal. Summaries are significant and important (particularly for the mass of us who are casually interested in a particular topic), but summaries do not satisfy our obsessive natures. Although Wikipedia provides an outlet for expertise, it does not cross the salience gap. This left an opening for a new generation of Wikis designed to provide deep immersion in a particular obsession. (Jimmy Wales, the founder of Wikipedia, realized this, and created Wikia.com as a resource where these individuals can create highly detailed Wikis.)

While one individual may have an obsession, it takes a community of individuals, sharing their obsession, to create a successful Wiki. No one’s knowledge is complete, or completely accurate. To create a resource useful to a broader community – who may not be as deeply obsessed – this “start-up” community must pool both their expertise and their criticism. Beginnings are delicate times, and more so for a Wiki, because obsessive individuals too often tie their identity to their expertise; questioning their expertise is taken as a personal affront. If the start-up community cannot get through this first crisis, the Wiki will fail.

Furthermore, it takes weeks to months to get a sufficient quantity of expertise into a Wiki. A Wiki must reach “critical mass” before it has enough “gravitational attraction” to lure other obsessive individuals to the Wiki, where it is hoped they will make their own edits and additions to it. Thus, the start-up phase isn’t merely contentious, it’s also thankless – there are few visible results for all of the hard work. If the start-up community lacks discipline in equal measure to their forbearance, the Wiki will fail.

Given these natural barriers, it’s a wonder that Wikis ever succeed. The vast majority of Wikis are stillborn, but those which do succeed in attracting the attentions of the broader community of obsessive individuals cross the salience gap, and, in that lucky moment, the Wiki begins to grow on its own, drawing in expertise from a broad but strongly-connected social network, because individuals obsessed with something will tend to have strong connections to other similar individuals. Very quickly the knowledge within the community is immensely amplified, as knowledge and expertise pours out of individual heads and into the Wiki.

This phenomenon – which I have termed “hyperintelligence” – creates a situation where the community is smarter as a whole (and as individuals) because of their interactions with the Wiki. In short, the community will be more effective in the pursuit of its obsession because of the Wiki, and this increase in effectiveness will make them more closely bound to the Wiki. This process feeds back on itself until the idea of the community without the Wiki becomes quite literally unthinkable. The Wiki is the “common mind” of the community; for this reason it will be contentious, but, more significantly, it will be vital, an electronic representation of the power of obsession, an embodied form of the community’s depth of expertise.

What this community does with its newfound effectiveness is the open question.