Gestural Computing Breakthrough Turns LCD Into a Big Sensor
Some smart students at MIT have figured out how to turn a typical LCD into a low-cost, 3-D gestural computing system.
Users can touch the screen to activate controls on the display, but as soon as they lift their fingers off the screen, the system can interpret their gestures in the third dimension, too. In effect, it turns the whole display into a giant sensor capable of telling where your hands are and how far away from the screen they are.
“The goal with this is to be able to incorporate the gestural display into a thin LCD device like a cell phone and to be able to do it without wearing gloves or anything like that,” says Matthew Hirsch, a doctoral candidate at the Media Lab who helped develop the system. MIT will present the idea at the Siggraph conference on Dec. 19.
The latest gestural interface system is interesting because it has the potential to be produced commercially, says Daniel Wigdor, a user experience architect for Microsoft.
“Research systems in the past put thousands of dollars worth of camera equipment around the room to detect gestures and show it to users,” he says. “What’s exciting about MIT’s latest system is that it is starting to move towards a form factor where you can actually imagine a deployment.”
Gesture recognition is the area of user interface research that tries to translate movement of the hand into on-screen commands. The idea is to simplify the way we interact with computers and make the process more natural. That means you could wave your hand to scroll pages, or just point a finger at the screen to drag windows around.
MIT has become a hotbed for researchers working in the area of gestural computing. Last year, an MIT researcher showed a wearable gesture interface called the ‘SixthSense’ that recognizes basic hand movements.
But most existing systems involve expensive cameras or require you to wear different-colored tracking tags on your fingers. Some systems use small cameras that can be embedded into the display to capture gestural information. But even with embedded cameras, the drawback is that the cameras are offset from the center of the screen and won’t work well at short distances. They also can’t switch effortlessly between gestural commands (waving your hands in the air) and touchscreen commands (actually touching the screen).
The latest MIT system uses an array of optical sensors arranged right behind a grid of liquid crystals, similar to those used in ordinary LCDs. The sensors can capture the image of a finger when it is pressed against the screen. But as the finger moves away, the image gets blurred.
By displacing the layer of optical sensors slightly relative to the liquid-crystal array, the researchers can modulate the light reaching the sensors and use it to capture depth information, among other things.
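To get a rough feel for why that displacement yields depth, consider the simplified pinhole-array model the researchers compare their system to: a fingertip projects through neighboring openings in the mask onto slightly shifted spots on the sensor layer, and that shift shrinks as the fingertip moves farther from the screen. Here is a minimal sketch of that geometry; the pinhole pitch, the gap between the liquid-crystal layer and the sensors, and the measured shifts are all made-up illustration numbers, not MIT's specs:

```python
# Hypothetical pinhole-array geometry: depth from the shift ("disparity")
# between the images a fingertip casts through two neighboring openings.
# All dimensions are illustration values, not the MIT prototype's.

PINHOLE_PITCH_MM = 5.0   # spacing between openings in the mask
SENSOR_GAP_MM = 2.5      # gap between the liquid-crystal layer and the sensors

def depth_from_disparity(disparity_mm: float) -> float:
    """Distance of the fingertip in front of the screen (mm).

    Similar triangles give: disparity = pitch * gap / depth,
    so depth = pitch * gap / disparity.
    """
    if disparity_mm <= 0:
        raise ValueError("no measurable shift; fingertip too far away")
    return PINHOLE_PITCH_MM * SENSOR_GAP_MM / disparity_mm

# A fingertip whose sub-images are shifted by 0.25 mm would sit about 50 mm
# away under these numbers; a larger shift (0.5 mm) means it is closer.
print(depth_from_disparity(0.25))  # 50.0
print(depth_from_disparity(0.5))   # 25.0
```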
In this case, the liquid crystals serve as a lens and help generate a black-and-white pattern that lets light through to the sensors. That pattern alternates rapidly with whatever image the LCD is displaying, so the viewer doesn’t notice the pattern.
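In other words, the screen time-multiplexes: most frames show the normal picture, and every other frame briefly shows the capture pattern while the sensors read out. A rough sketch of that scheduling loop follows; the refresh rate and the `display`/`sensors` objects are placeholders for illustration, not the actual MIT prototype's interfaces:

```python
import time

REFRESH_HZ = 120            # hypothetical panel refresh rate
FRAME_TIME = 1.0 / REFRESH_HZ

def run_frames(display, sensors, content_frames, mask_pattern):
    """Alternate ordinary content with the black-and-white capture mask.

    At a high enough refresh rate the mask frames are effectively
    invisible to the viewer, yet the sensors still get regular looks
    at the light passing through the pattern.
    """
    for frame in content_frames:
        display.show(frame)          # ordinary image for the viewer
        time.sleep(FRAME_TIME)
        display.show(mask_pattern)   # brief capture pattern
        sensors.capture()            # record light passing through the mask
        time.sleep(FRAME_TIME)
```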
The pattern also allows the system to decode the images better, capturing the same depth information that a pinhole array would, but doing it much more quickly, say the MIT researchers.
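For intuition on what that decoding means under the pinhole-array comparison: the block of sensor pixels behind each opening records light arriving from a spread of directions, so regrouping the raw readout by direction yields a stack of low-resolution views of the hand from slightly different angles, and the shift between those views encodes depth (as in the geometry sketch above). Below is a toy version of that regrouping, assuming an idealized pinhole grid; the real MIT system uses a more elaborate pattern precisely so it can gather more light and work faster than pinholes would:

```python
import numpy as np

def pinhole_views(raw, block):
    """Rearrange a raw sensor readout captured behind a pinhole grid.

    `raw` is the full sensor image; `block` is the number of sensor
    pixels behind each pinhole (per axis). Returns an array of shape
    (block, block, H//block, W//block): one low-resolution view of
    the scene per viewing direction.
    """
    h, w = raw.shape
    nh, nw = h // block, w // block
    # Group pixels by their position under each pinhole (= direction).
    tiles = raw[:nh * block, :nw * block].reshape(nh, block, nw, block)
    return tiles.transpose(1, 3, 0, 2)   # (dir_y, dir_x, view_y, view_x)

# Toy example: a 6x6 sensor with 3x3 pixels behind each of 2x2 pinholes.
raw = np.arange(36).reshape(6, 6)
views = pinhole_views(raw, block=3)
print(views.shape)   # (3, 3, 2, 2)
print(views[0, 0])   # the view built from the top-left pixel under each pinhole
```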
The idea is so novel that MIT researchers haven’t been able to get LCDs with built-in optical sensors to test, though they say companies such as Sharp and Planar have plans to produce them soon.
For now, Hirsch and his colleagues at MIT have mocked up a display in the lab to run their experiments. The mockup uses a camera that is placed some distance from the screen to record the images that pass through the blocks of black-and-white squares.
The bi-directional screens can be manufactured in a thin, portable package that requires few additional components compared with LCD screens already in production, says MIT. (See the video below for an explanation of how it works.)
Despite the ease of production, it could be five to ten years before such a system makes it into the hands of consumers, cautions Microsoft’s Wigdor. Even with the hardware in hand, it will take at least that long before companies like Microsoft develop software that can make use of gestures.
“The software experience for gestural interface systems is unexplored in the commercial space,” says Wigdor.
Photo/Video: MIT