The fourth dimension isn’t time: Reality, iOS 7 and Google Glass

Most of us using computing devices have seen three interfaces to interact with computers:

  • Command Line Instructions: often cryptic, requiring expertise, prior knowledge of the task and perhaps even some programming experience.  Extremely efficient for many tasks, providing a useful shorthand.  In certain applications, a menu of commands might have been available.
  • Mouse-driven desktop: which gave us the still-familiar desktop computer with windows, object icons, pull-down menus and buttons. This is what many people still imagine when they think of computers.  It is an attempt to create a virtual world that mimics the real office environment: a 1970s office, since the ubiquity of computers has long since relegated actual filing cabinets, files, folders, and indeed desktops to the junkyard.
  • Multi-touch: which is now ubiquitous on our phones and tablets, along with its own peculiar sensibility of swipes, pinches, apps and haptic feedback.

Some would say voice-driven commands are the fourth interface, but I would argue they are a throwback to command-line instructions and not a new, revolutionary interface.

What strikes me is that all three of these interface revolutions were first successfully developed for the consumer marketplace by one company: Apple.  The Times today has a fascinating memoir about the development of the first iPhone prototypes and the initial demo. The fourth interface revolution is upon us, and much as the first three were, its early iterations have been mocked and derided.

By now you’ve gathered why the sub-title mentions Google Glass.  Augmented reality is indeed the fourth interface revolution for consumer computing devices. The ability to add context to the real world without requiring direction from the user is almost here. The building blocks have been with us for some time.  GPS-enabled devices providing fine-grained location information, gyroscopes to provide direction and velocity, and object recognition algorithms to interpret visual context have been combined to tell a computer where you are and what you’re looking at.  Mobile data communication and access to the global internet in our pockets let us retrieve contextual information on where we are and what we’re looking for.  Lightweight, highly mobile displays and voice feedback allow information to be presented to us at the appropriate level of obtrusiveness.  Motion sensors (think Microsoft Kinect) let us interact with the machine.
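To make the first of those building blocks concrete, here is a minimal sketch of how a GPS fix plus a compass heading can tell a device what you are probably looking at. The flat-earth projection and the 50-meter gaze distance are illustrative assumptions for this sketch, not any vendor's actual method:

```python
import math

def gaze_target(lat, lon, heading_deg, distance_m=50.0):
    """Estimate the point `distance_m` ahead of the user along their heading.

    Uses a flat-earth approximation, which is fine at street scale.
    Heading is in compass degrees: 0 = north, 90 = east.
    """
    meters_per_deg_lat = 111_320.0  # roughly constant
    meters_per_deg_lon = meters_per_deg_lat * math.cos(math.radians(lat))
    dlat = distance_m * math.cos(math.radians(heading_deg)) / meters_per_deg_lat
    dlon = distance_m * math.sin(math.radians(heading_deg)) / meters_per_deg_lon
    return lat + dlat, lon + dlon

# Facing due east from a spot in Manhattan: the estimated gaze point
# could then be sent to a places database to fetch contextual data.
target = gaze_target(40.7484, -73.9857, heading_deg=90.0)
```

An actual system would fuse this with the gyroscope (for head tilt) and object recognition (to pick the specific storefront or landmark), but the core idea is just this projection followed by a database lookup.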

Google Glass is a very early iteration of augmented reality. Google is probably best positioned to deliver high-quality contextual data to an augmented reality system. It’s right in line with their mission to organize the world’s data.  But they aren’t the best at designing interfaces, and that’s what will make or break augmented reality. They’ve made a product that excites early adopters and leaves the average person cold. Google has built the Newton, not the Palm.

You’re probably all wondering what iOS 7 has to do with this discussion. Well, there is a chance that Apple has one more trick up its sleeve.  Apple has learned something about mapping and contextualizing the physical world during the development of Apple Maps, even if the product is less than perfect. Purely as conjecture, let’s say you were designing an augmented reality system. Perhaps it takes the form of a visual display as in Robocop, or perhaps it’s a transparent tablet like the ones carried around by the scientists on Avatar.  You would want all the augmented information and objects to stand out from reality; in fact, you would want the interface to look as machine-like as possible so as not to be confused with real objects.  This would require backing away from any skeuomorphism and adopting a flat feel with unnatural colors.  Icons and controls would have to stand out when the background is a real scene, for instance using a heads-up display.  iOS 7 has this translucent feel, much as you would imagine an augmented reality interface to have. If Apple does have an augmented reality display in the works, perhaps one that relies on the iPhone to drive it, iOS 7’s design would be a good way to introduce users to the new feel this interface will have.

Whoever puts together a successful augmented reality product will open up an enormous market.  The industrial and military applications are obvious; what will be more valuable in the long run is the ability to give users information on places, people, products and services as they engage with them in the real world. Vendors can expect price-comparing shoppers to become far more common. Restaurateurs can expect consumer reviews to pop up on heads-up displays as pedestrians walk past. Some things, like pulling up someone’s Facebook page inconspicuously on your heads-up display when you first meet them in a bar, will be creepy at first, and then completely commonplace. The internet in your pocket is about to enter your peripheral vision, and stay there.