What was the name of that movie? You know, the one with Tom Cruise?
“Minority Something”. Report, yes – “Minority Report”. He stood in front of this huge transparent screen and sort of dragged photos and video around like this (waves hand) and then he pulled the corners to make them bigger and then he…
Spielberg apparently based that iconic sequence on conversations with Microsoft, so perhaps it is no surprise that Hollywood’s one-time virtual reality is now nearly real.
Microsoft recently created “Surface”, a table-based computer with a horizontal screen that combines multiple, simultaneous touch inputs, gesture recognition, object and tag recognition and advanced graphics – and, yes, you can drag and resize objects like in the film. Surface is clearly targeted at multi-user interactivity: bars and restaurants for interactive ordering and playing; retail outlets for interactive catalogues; and the corporate world for presentations and briefings. PowerPoint presentations will never be the same again.
These applications of Surface are rich in “ooh, that’s clever” moments, impressing with design and the user experience.
Apple’s iPhone brought gesture recognition to the consumer’s pocket and, almost overnight, the ubiquitous desktop and laptop looked slightly old-fashioned. There are, however, millions of personal computer users in the world and almost every one of them uses a keyboard and mouse.
Enter Windows 7 and Apple’s Snow Leopard, which support the growing number of multi-touch devices on the market. Windows 7 brings pinch-to-zoom and tap-and-drag control to touch-screen monitors, overlays and laptops, while Snow Leopard supports similar gestures using mice and trackpads.
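Under the hood, the pinch-to-zoom gesture is simple geometry: the system tracks two touch points and scales the object by the ratio of the fingers’ current separation to their starting separation. A minimal sketch in Python (an illustration of the idea, not any vendor’s actual API):

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Scale factor implied by a two-finger pinch: the ratio of the
    current distance between the touch points to their starting distance.
    Each point is an (x, y) tuple in screen pixels."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_now, p2_now) / dist(p1_start, p2_start)

# Fingers start 100 px apart and spread to 200 px: the photo doubles in size.
print(pinch_scale((0, 0), (100, 0), (-50, 0), (150, 0)))  # 2.0
```

A scale above 1.0 zooms in, below 1.0 zooms out; real gesture engines add thresholds and smoothing on top of this ratio, but the core computation is no more complicated than this.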
But users are comfortable with their software working in a certain way – simple, point-click control of pull-down menus which have almost become standardised. It will take something very special to change that. So, while multi-touch technology is clearly suited to tablet computers and smartphones, it remains to be seen if it can find a use in homes and, especially, offices.
Once again, the latest iPhone is a trail-blazing example of “augmented reality”. Point it at a street scene and the built-in compass will overlay heading and directions on the camera’s image. Soon it, and devices like it, will overlay information about the buildings around you, recognise faces in the street and allow us to interact with our environment in ways we haven’t even thought of.
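That compass overlay rests on one standard calculation: the bearing from where you are standing to the landmark you are looking at, which the device then compares with its own heading to decide where on screen the label belongs. A hedged sketch of that bearing calculation in Python (the standard great-circle formula; the function name is my own, not Apple’s):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing, in degrees clockwise from true north,
    from the observer (lat1, lon1) to a landmark (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

# A landmark due east of the observer sits at a bearing of 90 degrees.
print(bearing_deg(0.0, 0.0, 0.0, 1.0))  # 90.0
```

If the device’s compass reports, say, a heading of 80 degrees, a landmark at bearing 90 sits 10 degrees to the right of centre, and the label is drawn there on top of the camera image.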
Nintendo’s Wii brought a form of virtual reality to the console gaming market with advanced gesture control, proving that the sheer immersive joy of realistic interaction – such as swinging a virtual tennis racket – is at least as important to the mass market as sumptuous, high-resolution graphics, a victory for function (and fun) over style.
So what is the future for user interfaces? I rather think that there isn’t just one future. Office computers will continue to develop using desk-bound mice and keyboards as hand-held devices and laptops evolve towards gesture-based control, offering innovative ways to interact with their environment.
Multi-touch screens will become commonplace in the home, in the hand and for multi-user devices such as Surface if, and only if, the design is good enough to last beyond those “ooh, that’s clever” moments and doesn’t, once the novelty has worn off, interfere with functionality.
Of course, we all hope that the future of user interfaces is much closer to science fiction. We want projected holographic images (“Help me, Obi-Wan, you’re my only hope”) and virtual-reality headsets (whatever happened to them?). We want computers to react to our eye movements or our thoughts, but all this, as well as Tom Cruise’s screen, is, for now, still science fiction.
But probably not for long.