Interacting with our handheld gadgets has always been an immersive experience. Devices like the Sony Walkman liberated us from our home stereo systems but, when combined with headphones, drew us into our own little mobile worlds. For a long time afterward very little changed in input and output modalities. Buttons, knobs and dials worked, so why mess with them? They functioned, for the most part, equally well for the sighted and sightless.
These days seeing may still be believing, but touch, in the mobile computing world at least, is everything else. There's an industrial design arms race going on, with the goal of giving users fewer buttons to push; even the single button on the face of a flagship smartphone is now one too many.
The ongoing rush to nothing-but-touch ignores the vision-impaired, who are surely suffering unintended consequences of this relatively new interaction paradigm. The physical keypads of the previous generation of cell phones typically included homing indicators, such as the raised dot on the 5 key, that helped these users navigate by feel. Audio cues augment this further, but on their own they can only help so much. Is the featureless glass slab a disaster for users who depend on physical feedback?
Out on the hopefully not-too-distant horizon are practical haptic surfaces. Current touchscreen feedback mechanisms are limited to lightly buzzing a user's fingertips. Nokia is working on implementations that can respond to applied pressure as well. And further ahead will be surfaces that can warp into the third dimension, creating dynamic protrusions and opening up a whole new world for the vision-impaired. At some point our engagement peripherals will move beyond vibrating game controllers to mice with buttons that emerge on demand from smooth surfaces. Texture and temperature, too, are useful feedback modes, and I'm sure we can expect to see their use grow in novel ways as well.
We don't necessarily need to build enhanced I/O functionality into the devices themselves. In the early 2000s we first saw the potential of turning just about any flat surface into an input controller. Projected virtual keyboards kept the traditional layout but promised more flexibility by "painting" ghostly buttons on our desks. For whatever reason, though, they never really caught on for mainstream use.
Microsoft and Carnegie Mellon University seem to think the desktop may have been the limiting factor. Their research is taking the idea mobile:
Soon you, too, will be able to talk to the hand. A new interface created jointly by Microsoft and the Carnegie Mellon Human Computer Interaction Institute allows for interfaces to be displayed on any surface, including notebooks, body parts, and tables. The UI is completely multitouch and the “shoulder-worn” system will locate the surface you’re working on in 3D space, ensuring the UI is always accessible. It uses a picoprojector and a 3D scanner similar to the Kinect.
As long as the projections are consistent, they could even be utilized by sight-impaired users relying on their own bodies or belongings to interact with the world around them. Couple such a technology with a GPS-enabled camera system capable of recognizing everyday objects, along with the usual audio signals, and even the totally blind could navigate unfamiliar territory with relative ease.
With full-fledged computers shrinking to ridiculously small sizes and prices, we're looking at an impending revolution in environmental information processing, one that will liberate and engage more people in the near future. I can't wait to see, and feel, what's coming.