Author: Tom Chatfield
This is where games come in. Games companies, after all, spend literally billions of dollars on ensuring that their products are easy to use and accessible to a degree inconceivable in most corporate products. No one is being paid to play a game, and so the designers must ensure that at every stage users are drawn in and trained to understand the game systems without even noticing. There are no double clicks in game worlds, and no incomprehensible menus; at least, not in successful titles, whose creators tend to work on the philosophy that anyone who’s interested ought to be guided by the gradual structure of a game into knowing exactly how to use an often highly complicated set of skills.
As Wortley puts it, ‘emerging interface technologies are largely being driven by the games industry; and it’s this that will ultimately change the way that people interact with technology’. And ‘emerging interface technologies’ include, in the case of products like the Nintendo Wii or the Guitar Hero games (which use a plastic guitar as their principal control method), highly innovative ways of interacting with computers that lie well outside the traditional bounds of a keyboard, mouse and monitor. Wortley himself confesses, ‘When I play games, it’s mainly Guitar Hero for me. Prior to that, I always found the console interface unfriendly to someone of my generation. But I saw this game in PC World, thought it was worth trying and got completely hooked. This combination of the way it uses the interface and has a great design and is incredibly intuitive just to pick up and play is a fantastic example of best practice for anyone: a balance of accessibility, challenge, reward, making people want to progress and develop their skills, to learn and train.’
Games have long been one of the world’s most important engines for computing innovation – along with, more recently, the mobile phone. It’s largely thanks to the ever-evolving ambitions of game designers that modern computers have a DVD drive, a graphics card, decent sound capability, a staggering amount of RAM, a large colour monitor, and so on. None of this, technically, is required for word processing or even for producing presentations; the multimedia PC is very much a child of gaming, and has been since its earliest days. Now, though, with the power and speed of even inexpensive modern computers at an unprecedented level in historical terms, game designers have begun to turn to perfecting the field of access and interface design – to help as many people as possible to perform complex tasks on a machine in a manner that is engaging and intuitive.
Consider the standard physical interface between a user and a computer. In an age where the phone in most pockets is smarter than the computer that put men on the moon, the keyboards we type on are essentially identical not only to those used on the very first home computers, but to nineteenth-century typewriters. It’s a bit like using reins to drive an F1 car. Even the mouse has hardly changed in more than twenty-five years. Keyboards and mice are still with us because they work very well, if you know how to use them, and because of a momentum within the computing industry itself: like the arrangement of letters on a keyboard, they have become too familiar and ubiquitous simply to sweep away – attempts to transform or replace them in the past have invariably foundered. Yet, in recent years, the serious possibility of an interface revolution has begun to arise, thanks almost entirely to advances made in the gaming sector (and thanks to its need to woo its audience with pleasure rather than bludgeon it with obligations).
Motion sensitivity, above all in the form of Nintendo’s controllers for its Wii console, is the most successful mass market innovation to have come from this field, but much more radical devices are not far behind. One especially impressive example is a device known as a NeuroSky. Worn like an elaborate pair of headphones, it allows the user to control an electronic device with the power of their mind – purely by concentrating, and without moving physically in any way. It may sound like science fiction, yet it is already available to purchase for around $200, and can be used off the shelf: users simply attach the headset, start concentrating, and let the ‘dry neural sensor technology’ read and interpret activity in the brain through the skin.
Perhaps inevitably, the first application to show off the NeuroSky was a delightfully tacky-looking game known as the Star Wars Force Trainer, in which a sphere can be moved up and down within a ‘training tower’ simply by thinking about it (as demonstrated at a recent games conference by a large man in a Darth Vader outfit). As ever in the history of computing, if a company wants to engage an audience’s imagination and show off the interactive potential of a technology to best effect, a game is an unrivalled demonstration tool.
Aside from brain waves, less than $200 could also buy a Novint Falcon controller, which replaces the standard mouse with a ping-pong-ball-sized device, attached by three supporting arms to a series of sensors and motors housed in a weighted base. It looks like a slightly dangerous toy robot, but the Novint Falcon is in fact one of an increasing number of ‘haptic devices’ on the market: interfaces that provide direct physical feedback from what’s onscreen, such as giving a sense of recoil to in-game guns or allowing users to experience the weight of virtual objects. The Novint Falcon was developed for the mass market in the gaming sector, which remains its main source of revenues, but it actually began its existence as a tool to help doctors perform examinations remotely. In combination with other forms of motion sensitivity, the potential for serious as well as entertainment uses for such innovations is vast. This is true, too, for those less able to use the traditional keyboard and mouse, whether through inexperience, age, infirmity or disability; already, it’s arguable that Nintendo’s Wii has done more to bring the power of computing to new kinds of users than anything since the birth of the internet itself.
In addition to all these innovations, there’s the equally enticing possibility of revolutionising not just how we interact with what’s on a screen, but how the contents of that screen appear to us. One enterprising graduate student at Carnegie Mellon University, Johnny Chung Lee, has already demonstrated, for instance, that an adapted Wii controller can be combined with a computer, television and two LED lights to function as an extremely effective virtual-reality display. The LED lights are worn on an adapted pair of goggles, allowing the sensor to track the motion of your head precisely within a room. The computer then adapts the image on the television depending on where you’re standing, meaning that the screen functions exactly as though you were looking through a window into another three-dimensional world. It’s an uncanny, and extremely convincing, effect, achievable at negligible cost.
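The geometry behind Lee’s trick is simple enough to sketch in a few lines: the Wii remote’s infrared camera, sitting by the television, reports the pixel positions of the two LEDs on the goggles; the apparent spacing of the dots tells you how far away the head is, and their midpoint tells you where it sits left-to-right and up-and-down. The Python sketch below is a minimal illustration of that geometry only, not Lee’s actual code; the field-of-view and LED-spacing constants are assumed values for the sake of the example.

```python
# A minimal, illustrative sketch of head-coupled perspective geometry.
# The Wii remote's IR camera reports the pixel coordinates of two LEDs
# worn on the viewer's goggles; dot spacing gives depth, and the dots'
# midpoint gives the sideways and vertical offset of the head.
import math

# Assumed constants -- illustrative, not taken from Lee's demo code.
CAMERA_FOV = math.radians(45)   # assumed horizontal field of view
CAMERA_RES_X = 1024             # Wii remote IR camera: 1024 x 768 pixels
CAMERA_RES_Y = 768
LED_SPACING = 0.15              # assumed metres between the goggle LEDs

def head_position(dot1, dot2):
    """Estimate head position (x, y, z in metres, relative to the camera)
    from the pixel coordinates of the two LED dots."""
    pixel_dist = math.hypot(dot1[0] - dot2[0], dot1[1] - dot2[1])
    # Angle subtended by the LED pair, from its apparent size in the image
    angle = pixel_dist / CAMERA_RES_X * CAMERA_FOV
    # Depth: the farther the head, the closer together the dots appear
    z = (LED_SPACING / 2.0) / math.tan(angle / 2.0)
    # Midpoint of the dots, as an angular offset from the image centre,
    # gives the horizontal and vertical position
    mid_x = (dot1[0] + dot2[0]) / 2.0 - CAMERA_RES_X / 2.0
    mid_y = (dot1[1] + dot2[1]) / 2.0 - CAMERA_RES_Y / 2.0
    x = z * math.tan(mid_x / CAMERA_RES_X * CAMERA_FOV)
    y = z * math.tan(mid_y / CAMERA_RES_X * CAMERA_FOV)
    return x, y, z
```

The renderer then feeds this position into an off-axis projection each frame, redrawing the scene so that the television behaves like a window onto the virtual space.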
Beyond physical interfaces and displays, persistent virtual worlds have potential uses that extend well beyond the mere provision of expertise and innovation for other industries. Wortley uses the example, here, of ‘intelligent shared spaces’. These entail equipping a physical space, such as an office building, with an array of sensors and monitors that allow its environment to be visualised and managed in real time within a virtual world. Effectively, you have a virtual office whose temperature, lighting, appliances and so on you can manipulate to your heart’s content on a computer screen – while all the interactive variables you see, and all the changes you choose to make, are instantly reflected in the actual place.
The technology, Wortley explains, allows designers ‘to build intelligence into physical spaces like buildings, so that when you enter somewhere, the building is capable of recognising you, knowing something about you and your interests, how you use the building, what people you are connected to, what you are interested in seeing’. Because everything can be visualised and managed in real time, it allows people the kind of control over – and understanding of – complicated real environments that they have previously extended to virtual ones. If there’s one thing that games have demonstrated over the last thirty years, it’s that people have an extraordinary aptitude for managing the use of resources within real-time systems, so long as they have suitably clear data, visuals and interfaces – something the games industry has an unrivalled expertise in providing.
This kind of set-up is no pipe dream; it’s already being made a reality by people like Swiss entrepreneur Oliver Goh, who’s working in partnership with the Coventry Serious Games Institute to bring intelligent shared spaces to life. Goh’s product uses a combination of sensor technologies and three-dimensional visualisation to create a system known as ‘OpenShaspa’, which allows you, as his website explains, to ‘monitor and maintain your real life environment via mobile, web, or a 3D space’. Perhaps the simplest and most enticing application of Goh’s technology to date is something called the OpenShaspa Home Energy Kit, which displays in real time a household’s usage of water, gas and electricity, broken down by individual rooms and appliances within a virtual model of the building. At the click of a mouse, from thousands of miles away (or from the computer in your living room), you can use it to adjust and refine the state of every device in a home – and instantly see the changes to your energy costs. The OpenShaspa kit even includes a ‘Social Energy Meter’, which makes all your data publicly available online via systems like Google and Facebook; other people can then track, analyse and compare energy patterns across, potentially, local communities, states or even nations.
For many people desperately searching for practical, effective ways of helping ordinary consumers to engage with such crucial yet conceptually daunting environmental issues as energy usage, the lessons this hybrid real/virtual space might yield are invaluable. In late 2008, for instance, at the second annual conference on behaviour, energy and climate change in Sacramento, California, Professor Byron Reeves of Stanford University proposed a ‘World of Greencraft’ scenario. Why stop at the idea of serious games helping people to change their thinking, he argued, when you could go one better and turn a householder’s own domestic energy consumption into the driving force behind an MMO?
Reeves, an expert in how people process media, produced a demonstration video that showed ‘smart’ electricity meters in people’s homes providing real-time data for virtual versions of their homes online. All these online homes were located within a shared game world in which people could log on as avatars and see each other’s virtual dwellings. The game involves incentivising players to compete over having the most energy-efficient virtual home, with the only way to reduce the energy costs of a virtual home being to bring down the energy usage of the actual home. It’s a task made easier by in-game information about peak hours of usage, wasteful rooms, appliances and habits, careless oversights, and so on – and the emphasis in the demonstration is firmly on a Warcraft-like spirit of competitive cooperation, with local areas teaming up to beat their rivals in having the best statistics.
Reeves’s scheme suggests a powerful way of motivating people to take on challenges that have traditionally proved hard to conceptualise – an approach sometimes referred to as the art of ‘gaming’ a particular task or set of ideas. As in many fields, there’s little that’s fundamentally new about the essential behaviour or techniques involved. People have been ‘gaming’ life in the pursuit of fun and profit for decades, even for centuries: if you create a fun, rewarding activity within a certain context, you make that context more appealing. From collecting toys in cereal packets to gathering air miles via credit card purchases, it’s possible to give an activity ‘hooks’ – a metaphor that perfectly describes the process of snaring and reeling in users. What video games bring to this field is an unprecedented amount of both information and sophistication. It’s an area that few people know better than the American author and researcher Amy Jo Kim, a doctor of behavioural neuroscience and an expert in online community architecture whose work focuses especially on applying game design principles to the wider world.
Jo Kim’s approach combines an analysis of our ‘primal response patterns’ with, once again, the notion of ‘flow’ – the idea that the right combination of response, challenge and applied skill can induce a heightened, pleasurable state of immersion. She lists five essential mechanisms in the basic gaming process: collecting, points, feedback, exchanges and customisation. One particular case study used by Jo Kim to describe how these features can operate outside of a game setting is YouTube, currently ranked the world’s third most popular website after Google and Yahoo! (making it the most-visited web destination in the world unrelated to search). YouTube is in no sense a game, and yet it effortlessly ticks off each of the five gaming mechanisms in turn.
First, collecting: the moment you’ve created a YouTube account, it’s made as easy as possible for you to start gathering up a personal list of ‘favourites’. These are displayed to everyone else using the website as part of your public identity, and they allow you to personalise your online space within the site so that, the moment you log on from anywhere in the world, you have at your fingertips everything of value that you’ve gathered thus far.
Second, points: a simple but absolutely crucial motivator, given the staggering importance of ‘YouTube views’ as a public index of the interest and import of any video clip; plus there’s the star rating given by users to videos, which provides another incentive for posting and interacting.
Third, feedback: comments posted by other users are an essential part of the atmosphere and the ‘stickiness’ of the site and, along with the logging of every visit and recommendation, allow people to feel they are part of a dynamic community that can offer both rewards and rebukes to users.
Fourth, exchanges: these are embodied in the feature allowing users to post video responses to other videos, as well as the widespread re-editing and often wholesale reconstruction of other users’ videos, turning individual hits into memes that go on being exchanged, repeated, augmented, parodied and paid tribute to for years after they were first posted to the community.