Thought-control research brings mental channel changing ever closer

Pinky and the Brain don’t get nearly the respect they deserve, but then again, neither do the lab coat-wearing boffins who make great strides behind sterilized doors to bring us one step closer to mass laziness. The latest development in the everlasting brain control saga takes us to the University of Washington, where a team of researchers is carefully studying the differences between doing an action and simply imagining the action. So far, they’ve discovered that interacting with brain-computer interfaces enables patients to create “super-active populations of brain cells.” Naturally, this finding holds promise for rehabilitating patients after stroke or other neurological damage, but it also suggests that “a human brain could quickly become adept at manipulating an external device such as a computer interface or a prosthetic limb.” Or a remote control, or a Segway, or a railgun. We can’t speak for you, but we certainly dig where this is headed. Video of the findings is after the break.

Gesture Cube Responds To Waving Your Hand

Could gesture recognition become the successor to the touchscreen? And if it does, what would it be like to use it to interact with our gadgets?

A prototype design shows a cube-shaped device that can be used to access music, look up recipes and flick through photos.

The idea, called Gesture Cube, senses hand movements made close to the screen and translates them into commands for the device. It’s a user interface concept for the next generation of digital devices, says German company Ident, whose technology powers the device.

The cube has sensors that detect the approach of a hand and transmit its coordinates to the electronics. Functions such as pulling up the playlist or activating the browser can then be assigned to those coordinates. Touching a switch or button then activates the task.
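To make the idea concrete, here is a minimal sketch of how hover coordinates from a proximity sensor might be mapped to commands and confirmed with a button press. The zones, command names and data format below are invented for illustration; Ident has not published the Gesture Cube firmware.

```python
# Hypothetical sketch, in the spirit of the Gesture Cube concept:
# map a hand's hover coordinates to a command, and only act on it
# once a physical button confirms the choice. All zones, command
# names and thresholds here are made up for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class HandReading:
    x: float  # normalized horizontal position, 0.0-1.0
    y: float  # normalized vertical position, 0.0-1.0
    z: float  # distance from the surface, in centimeters

def command_for(reading: HandReading) -> Optional[str]:
    """Return the command assigned to the zone the hand hovers over."""
    if reading.z > 10.0:      # hand too far away: ignore entirely
        return None
    if reading.y > 0.66:
        return "show_playlist"
    if reading.y < 0.33:
        return "open_browser"
    return "browse_photos"

def on_button_press(reading: HandReading) -> None:
    """Only a switch or button press actually activates the task."""
    cmd = command_for(reading)
    if cmd is not None:
        print(f"activating: {cmd}")

on_button_press(HandReading(x=0.5, y=0.8, z=4.0))  # -> activating: show_playlist
```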

For now, the idea is at the concept stage. But given the growing interest in gesture recognition, it’s easy to see how it could find its way into real-world devices soon.

Check out the video to see the Gesture Cube concept at work.

Photos: Gesture Cube

[via GizmoWatch]


Gesture Cube, the magical, intuitive, theoretical 3D interface (video)

You know how it is — another day, another “magical” and “intuitive” input device — not unlike Immersion’s Cubtile, which we first saw about a year ago. This time around the culprit is Gesture Cube, the heathen spawn of Ident’s “GestIC” electric field sensing technology (for 3D spatial movement tracking) and a couple of German design studios. GestIC detects movements and distances in 3D space, enabling touch-free gesture control. If this sounds good to you, wait until you see the YouTube demonstration, complete with all sorts of “magical” and “intuitive” interface ideas! It will really make you wish you were a designer living in Germany, starring in YouTube videos for “magical” and “intuitive” design firms. We don’t know how much of a hurry we’re in to see this implemented in our fave hardware, but who knows? Maybe we’ll come around eventually — after all, Grippity did wonders for our words-per-minute. Video after the break.

Winter Olympics to Demo Thought-Controlled Lighting

Along with the figure skating, ice hockey and snowboarding, another event will compete for attention at the Winter Olympics in Canada this month.

A Canadian company has created what it calls the “largest thought-controlled computing installation.” It’s an experiment that lets visitors to the Olympics use their brainwaves to control the lights at three major landmarks in Canada, including Niagara Falls.

“When people put on the headsets and find themselves increasing the brightness of the lights by just thinking about it, you can almost see their brains explode,” says Trevor Coleman, chief operating officer for InteraXon, the company that has created this installation.

As consumers get more comfortable with going beyond the keyboard and the mouse to interact with their computers, companies are looking for alternate ways to make the experience better. Already, touch and voice recognition have become a major part of the user interface in smartphones, and harnessing brainwaves or other biological data is slowly emerging as a third option, especially in gaming. Companies such as NeuroSky offer headsets that promise to translate the gamer’s brainwaves into action on screen. A biometrics company called Innerscope is helping Wired host a geeked-out Super Bowl party. And even Microsoft is working on alternate forms of input; its Project Natal promises to add gesture recognition to Xbox 360 games later this year.

InteraXon’s installation is spread across three sites: Toronto’s CN Tower, Ottawa’s Parliament Buildings and Niagara Falls. All three locations have two chairs set up, each with its own headset. The headsets have an external probe that touches the wearer’s forehead to measure the baseline brain activity. The chairs are rigged to offer tactile feedback as users enter the desired brain state.

The headset measures the brain’s electrical output and reacts to alpha waves, associated with relaxation, and beta waves, which indicate concentration. As users relax or focus their thoughts, the computer sends a message to the site they are viewing. InteraXon’s software translates users’ thoughts to commands that will change the lighting display. For instance, by concentrating, users can make the lights at the CN Tower spin faster or change the brightness of the lights at Niagara Falls.
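For a rough picture of what that translation layer might look like in code, here is a toy sketch that maps relative alpha and beta power to lighting parameters. InteraXon hasn’t published its software, so the inputs, scaling and output format below are assumptions made for illustration.

```python
# Toy sketch of turning EEG band power into a lighting command, loosely
# in the spirit of the InteraXon installation. The inputs, scaling and
# output format are assumptions, not InteraXon's actual software.

def lighting_command(alpha_power: float, beta_power: float) -> dict:
    """Map relative alpha (relaxation) and beta (concentration) power
    to brightness and spin-speed parameters for a light display."""
    total = alpha_power + beta_power
    if total == 0:
        return {"brightness": 0.5, "spin_speed": 0.5}   # neutral state
    relaxation = alpha_power / total     # 0.0 - 1.0
    concentration = beta_power / total   # 0.0 - 1.0
    return {
        "brightness": round(relaxation, 2),    # relaxing brightens the lights
        "spin_speed": round(concentration, 2), # concentrating spins them faster
    }

print(lighting_command(alpha_power=3.0, beta_power=7.0))
# {'brightness': 0.3, 'spin_speed': 0.7}
```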

It’s easy enough once users get started, says Coleman.

“To achieve the beta state we ask users to focus on things like an object ahead and its details, while for an alpha response we ask them to take a deep breath and relax to let their mind go,” he says. “But after a minute or two of trying it, we found most users no longer require the physical cues.”

Over the two weeks that the exhibit will be open, InteraXon expects more than 2,000 visitors to try it out.

Photo: Markos Tesome


Thin Film Turns Any Surface Into a Touchscreen

Turning your monitor into a touchscreen could some day be as simple as peel … and stick.

Displax, a Portugal-based company, promises to turn any surface — flat or curved — into a touch-sensitive display. The company has created a thinner-than-paper polymer film that can be stuck on glass, plastic or wood to turn it into an interactive input device.

“It is extremely powerful, precise and versatile,” says Miguel Fonseca, chief business officer at Displax. “You can use our film on top of anything, including E Ink, OLED and LCD displays.”

Human-computer interaction that goes beyond the keyboard and mouse has become a hot area of emerging technology. Since Apple popularized swipe and pinch gestures with the iPhone, touch has become a new frontier in the way we interact with our devices.

In the past, students have shown a touchscreen where pop-up buttons and keypads can dynamically appear and disappear. That allows the user to experience the physical feel of buttons on a touchscreen. In 2008, Microsoft offered Surface, a multitouch product that allows users to manipulate information using gesture recognition.

Displax’s films range from 3 inches to 120 inches diagonally.

“If Displax can do this for larger displays, it will really be one of the first companies to do what we call massive multitouch,” says Daniel Wigdor, a user experience architect for Microsoft who focuses on multitouch and gestural computing. “If you look at existing commercial technology for large touch displays, they use infrared cameras that can sense only two to four points of contact. Displax takes us to the next step.”

Displax’s latest technology works on both opaque and transparent surfaces. The films have a 98 percent transparency — a measure of the amount of light that is transmitted through the surface. “That’s a pretty decent transmission rate,” says Wigdor.

A grid of nanowires is embedded in the polymer film, which is just about 100 microns thick. A microcontroller processes the multiple input signals it receives from the grid. A finger or two placed on the screen causes an electrical disturbance, which the microcontroller analyzes to decode the location of each input on the grid. The film comes with its own firmware, a driver that connects via USB, and a control panel for user calibration and settings.
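As a rough illustration of the kind of work that microcontroller does, here is a toy peak-finding routine that turns a frame of grid readings into touch locations. The data format, threshold and logic are our own simplifications, not Displax’s firmware.

```python
# Toy sketch of decoding touch points from a grid of sensor readings,
# analogous in spirit to what a touch controller does. The normalized
# readings, threshold and peak-finding logic are simplified assumptions.

def find_touches(grid, threshold=0.6):
    """Return (row, col) cells that are local maxima above the threshold."""
    touches = []
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            v = grid[r][c]
            if v < threshold:
                continue
            neighbors = [grid[nr][nc]
                         for nr in range(max(0, r - 1), min(rows, r + 2))
                         for nc in range(max(0, c - 1), min(cols, c + 2))
                         if (nr, nc) != (r, c)]
            if all(v >= n for n in neighbors):
                touches.append((r, c))
    return touches

# Two fingers disturbing a 4x5 grid of normalized readings:
frame = [
    [0.1, 0.2, 0.1, 0.1, 0.1],
    [0.2, 0.9, 0.3, 0.1, 0.1],
    [0.1, 0.3, 0.1, 0.7, 0.2],
    [0.1, 0.1, 0.1, 0.2, 0.1],
]
print(find_touches(frame))  # [(1, 1), (2, 3)]
```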

Currently, it can detect up to 16 fingers on a 50-inch screen. And the projected-capacitance technology that Displax uses is similar to that seen on the iPhone, so the responsiveness of the touch surface is great, says Fonseca.

And if feeling around the screen isn’t enough, Displax allows users to interact with the screen by blowing on it. Displax says the technology can also be applied to standard LCD screens.

Displax’s versatility could make it valuable for a new generation of displays that are powering devices such as e-readers. For instance, at the Consumer Electronics Show last month, Pixel Qi showed low-power displays that can switch between an active color LCD mode and an e-reader-like, low-power black-and-white mode. Pixel Qi’s displays, along with other emerging display technologies such as Qualcomm’s Mirasol and E Ink’s color screens, are keenly awaited in new products because they promise to offer a good e-reader and a netbook in a single device.

But touch is a feature that is missing in these emerging displays. Displax could help solve that problem.

It is also more versatile than Microsoft Surface, says Fonseca. “Our film is about 100 microns thick, while Surface is about 23 inches deep,” he says. “So we can slip into any hardware. Surface cannot be used with LCD screens so that can be a big limiting factor.”

The comparisons to Surface may not be entirely fair, says Wigdor. “Surface is not just another hardware solution,” he says. “It includes integrated software applications and vision technology so it can respond to just the shape of the object.”

Still, he says, Displax’s thin film offers a big breakthrough for display manufacturers, because they don’t have to change their manufacturing process to use it. Displax says the first screens featuring its multitouch technology will start shipping in July.

Photo: Displax


The iPad’s Interface and Gestures: What’s Actually New (Video)

The iPad is a gargantuan iPhone — perhaps more literally than many hoped. But, if you look closely, you can see hints of what’s truly coming next.

There are a few new scraps of gestures and interface bits, all thanks to the larger screen, which you can see sprinkled throughout the keynote video:

True multi-finger multitouch
Two-finger swipes, three-finger twirls — multitouch gestures that weren’t really possible on the iPhone’s tiny screen, unless you’re a mouse. This is what people were excited about, and we only get a taste. That said, the gesture Phil Schiller uses to drag multiple slides in Keynote, using two hands, looks a bit awkward and belabored.

Popovers
The most significant new UI element of the iPad versus the iPhone is the popover, which you see all over the place when you need to dive further into the interface or make a choice from a list (since blowing up lists to full screen size doesn’t make a whole lot of sense now). A box pops up with a list of choices or options, which might take you down through multiple levels of lists, as you see in the Numbers demo when selecting functions to calculate. Gruber has more on popovers, and why they’re significant, here.

Media Navigator
In some ways, the media navigator Phil Schiller shows off in iWork is the most interesting bit to me: That’s what Apple sees as replacing a file browser in this type of computer. It’s a popover too, technically.

Long touches and drags
Lots of touch, hold and drag — something you didn’t see much of on the iPhone. With more UI elements, and layers of them, you need a way of distinguishing which kind of action you’re trying to perform (see the sketch below for one way a system might tell them apart).

These are all pretty basic so far, building right on top of the iPhone’s established interface, but they point to the future: more fingers, more gestures, more layered UI elements and built-in browsers.
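To see why that distinction takes real work, here is a bare-bones sketch of how a touch system might tell a tap, a long press and a drag apart using only a touch’s duration and total movement. Apple’s gesture-recognizer internals aren’t public; the thresholds and function names below are invented for illustration.

```python
# Toy classifier separating tap, long press and drag from a single
# touch's duration and total movement. The thresholds are invented;
# this is not Apple's gesture-recognition code.

import math

def classify_touch(start_xy, end_xy, duration_s,
                   hold_time_s=0.5, move_tolerance_pts=10.0):
    """Classify one touch by how long it lasted and how far it moved."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    if math.hypot(dx, dy) > move_tolerance_pts:
        return "drag"
    if duration_s >= hold_time_s:
        return "long_press"
    return "tap"

print(classify_touch((100, 100), (101, 99), 0.12))   # tap
print(classify_touch((100, 100), (102, 101), 0.80))  # long_press
print(classify_touch((100, 100), (220, 140), 0.40))  # drag
```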

DisplayPort 1.2 receives final VESA blessing, grows into a real standard

VESA might’ve been a bit tardy with finalizing it, but DisplayPort v1.2 is now all official and it comes with an impressive tally of numbers to get your attention. Doubling the data throughput of v1.1a (from 10.8Gbps to 21.6Gbps), the latest version will be able to support multiple monitors via only a single output cable, allowing you to daisy-chain up to four 1920 x 1200 monitors, for example. It can also perform bi-directional data transfer, which will permit USB hubs, webcams, and touchscreen panels integrated into displays to communicate over the same cable as the video signal. Backwards compatibility with older peripherals is assured, but you’ll naturally need a v1.2-capable computer to exploit all this newfound goodness. You’ll find the full PR after the break.
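A quick back-of-the-envelope check makes that daisy-chaining claim plausible. Assuming 24-bit color, a 60Hz refresh and counting only active pixels (blanking intervals and packet overhead are ignored here), four 1920 x 1200 monitors fit comfortably inside the link’s payload capacity:

```python
# Rough sanity check of the four-monitor daisy-chain claim.
# Simplifications: 24-bit color, 60 Hz refresh, active pixels only;
# blanking intervals and packet overhead are ignored.

raw_link_gbps = 21.6                   # DisplayPort 1.2 total raw bandwidth
payload_gbps = raw_link_gbps * 8 / 10  # 8b/10b line coding leaves ~17.28 Gbps

width, height, refresh_hz, bits_per_pixel = 1920, 1200, 60, 24
per_monitor_gbps = width * height * refresh_hz * bits_per_pixel / 1e9

monitors = 4
print(f"per monitor: {per_monitor_gbps:.2f} Gbps")            # ~3.32 Gbps
print(f"four monitors: {monitors * per_monitor_gbps:.2f} "
      f"of {payload_gbps:.2f} Gbps available")                # ~13.27 of 17.28
```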

To Scroll, Take a Deep Breath and Blow

New user interfaces such as touch and voice recognition are trying to change how we interact with computers.

But how about controlling devices with just your breath? To scroll, pucker up your mouth and blow steadily. To click, blow a forceful puff like you are trying to put out a candle.

“We blow at stuff all the time — blow candles, blow bubbles, blow at dust,” says Pierre Bonnat, CEO of Zyxio, a company that is creating “breath-enabled interfaces.” Zyxio showed its idea at the Consumer Electronics Show in Las Vegas.

Wacky as the idea may be, Zyxio promises to have it in products this year.


The popularity of touchscreens has led human computer interaction beyond the traditional mouse and keyboard. Researchers are trying to find “natural” ways of interacting with computers so devices can move beyond the home and office. Voice recognition, for instance, lets users dictate commands to their devices rather than click buttons.

Zyxio’s system has a single MEMS (Micro Electro-Mechanical System) chip that senses pressure levels in the open space, at a distance of up to 7.8 inches (20 centimeters) from the mouth.

“The MEMS is small, unobtrusive and capable of recognizing a few pascals,” says Bonnat, referring to a common unit of pressure. “If you cough or shake it, it doesn’t react.”

The breath-analyzing sensor can be integrated into any hardware, including headsets, mobile phones and laptops. It detects the kinetic energy and movement caused by the expulsion of human breath and generates an electrical, optical or magnetic signal. This signal is communicated to a processing module, which — with the help of the company’s algorithm — translates it into a command that the computer can recognize.

The algorithm picks up gusts intentionally generated by the user and discards surrounding breeze.
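Zyxio hasn’t published that algorithm, but the basic filtering step is easy to picture. Here is a toy sketch that flags only pressure spikes that stay above a threshold long enough to be a deliberate puff; the sample rate, threshold and window length are invented for illustration.

```python
# Illustrative sketch of separating a deliberate puff from ambient air
# movement in a stream of pressure samples (in pascals). The sample
# rate, threshold and minimum duration are invented; this is not
# Zyxio's algorithm.

def detect_puffs(samples_pa, sample_rate_hz=100,
                 puff_threshold_pa=5.0, min_duration_s=0.15):
    """Yield (start_s, end_s) spans where pressure stays above threshold
    long enough to count as an intentional puff rather than a breeze."""
    min_samples = int(min_duration_s * sample_rate_hz)
    start = None
    for i, p in enumerate(samples_pa + [0.0]):  # sentinel closes a trailing puff
        if p >= puff_threshold_pa and start is None:
            start = i
        elif p < puff_threshold_pa and start is not None:
            if i - start >= min_samples:
                yield (start / sample_rate_hz, i / sample_rate_hz)
            start = None

# 0.3 seconds of steady blowing surrounded by low-level ambient noise:
stream = [0.5] * 20 + [8.0] * 30 + [0.4] * 20
print(list(detect_puffs(stream)))  # [(0.2, 0.5)]
```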

“70 percent of the technology is in the software,” says Bonnat. “The MEMS is just the enabler.”

Blowing puffs of air with enough precision to get the cursor on a laptop screen exactly where you want it is easier and more intuitive than you might think. But there is definitely a learning curve.

That shouldn’t hold up the idea, says Bonnat. The mind can direct the mouth to blow in the direction it wants, he says. For proof, watch a 5-year-old blow out just a few of the candles on a cake. A Zyxio video shows how the breath interface can control a laptop.

Importantly, the breath-enabled interface isn’t designed for detailed interactions, says Bonnat, who imagines that you’ll use it instead to quickly scroll pages of information at an information kiosk, or to answer a call or turn off the radio in a car without doing anything more difficult than blowing a quick puff of air.

The Zyxio MEMS system will start shipping in the second quarter of the year, says Bonnat. Among the first products to use it will be a gaming headset.

Photo: Pierre Bonnat, Zyxio CEO, controls a laptop using his breath. Photo by Priya Ganapati/Wired.com.


Boxee Box interface demo video

We’ve already gone hands-on with the Boxee Box and its sweet QWERTY RF remote, but now that we know there’s a dual-core Tegra 2 in there it’s time for a little interface demo with founder Avner Ronen. First things first: yes, it ran Hulu in the browser — but the network connection on the show floor was acting up, so we couldn’t demo it very well. Avner tells us the built-in browser IDs itself as essentially standard Mozilla, so we’ll have to see if Hulu goes out of its way to block it — it’s definitely still possible, but it’ll take some work. Apart from that minor drama, we’ve got to say we’re incredibly impressed — the interface was lightning fast, the remote’s keyboard felt great, and we’re liking the Facebook / Twitter integration, which mines your feeds for videos posted by your friends and displays them on the home page. Avner tells us he thinks D-Link will be “aggressive” with that under-$200 price point when the Box launches in Q2, and there’ll be tons of content partners at launch. Video after the break!

Gestural Computing Breakthrough Turns LCD Into a Big Sensor

Some smart students at MIT have figured out how to turn a typical LCD into a low-cost, 3-D gestural computing system.

Users can touch the screen to activate controls on the display, but as soon as they lift their finger off the screen, the system can interpret their gestures in the third dimension, too. In effect, it turns the whole display into a giant sensor capable of telling where your hands are and how far away from the screen they are.

“The goal with this is to be able to incorporate the gestural display into a thin LCD device like a cell phone and to be able to do it without wearing gloves or anything like that,” says Matthew Hirsch, a doctoral candidate at the MIT Media Lab who helped develop the system. The researchers will present the idea at the Siggraph conference on Dec. 19.

The latest gestural interface system is interesting because it has the potential to be produced commercially, says Daniel Wigdor, a user experience architect for Microsoft.

“Research systems in the past put thousands of dollars worth of camera equipment around the room to detect gestures and show it to users,” he says. “What’s exciting about MIT’s latest system is that it is starting to move towards a form factor where you can actually imagine a deployment.”

Gesture recognition is the area of user interface research that tries to translate movement of the hand into on-screen commands. The idea is to simplify the way we interact with computers and make the process more natural. That means you could wave your hand to scroll pages, or just point a finger at the screen to drag windows around.

MIT has become a hotbed for researchers working in the area of gestural computing. Last year, an MIT researcher showed a wearable gesture interface called the ‘SixthSense’ that recognizes basic hand movements.

But most existing systems involve expensive cameras or require you to wear different-colored tracking tags on your fingers. Some systems use small cameras that can be embedded into the display to capture gestural information. But even with embedded cameras, the drawback is that the cameras are offset from the center of the screen and won’t work well at short distances. They also can’t switch effortlessly between gestural commands (waving your hands in the air) and touchscreen commands (actually touching the screen).

The latest MIT system uses an array of optical sensors that are arranged right behind a grid of liquid crystals, similar to those used in LCD displays. The sensors can capture the image of a finger when it is pressed against the screen. But as the finger moves away the image gets blurred.

By displacing the layer of optical sensors slightly relative to the liquid-crystal array, the researchers can modulate the light reaching the sensors and use it to capture depth information, among other things.

In this case, the liquid crystals serve as a lens and help generate a black-and-white pattern that lets light through to the sensors. That pattern alternates rapidly with whatever image the LCD is displaying, so the viewer doesn’t notice it.

The pattern also allows the system to decode the images better, capturing the same depth information that a pinhole array would, but doing it much more quickly, say the MIT researchers.
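The researchers’ actual reconstruction relies on those coded mask patterns, but the underlying depth cue is simple: the farther the fingertip hovers from the screen, the more smeared-out its image on the sensor layer becomes. Here is a toy model of that blur-versus-distance relationship; the illumination-spread parameter is invented, and this is not the MIT team’s processing pipeline.

```python
# Toy model of the depth cue described above: a fingertip touching the
# screen casts a sharp shadow on the sensor layer, and the shadow's
# blurred edge widens roughly in proportion to hover distance under
# diffuse room light. The light-spread angle is an invented parameter;
# this is not the MIT researchers' actual reconstruction.

import math

def blur_mm(distance_mm, light_spread_deg=20.0):
    """Approximate shadow-edge blur for a fingertip at the given distance."""
    return distance_mm * math.tan(math.radians(light_spread_deg))

def distance_from_blur_mm(measured_blur_mm, light_spread_deg=20.0):
    """Invert the model to recover hover distance from measured blur."""
    return measured_blur_mm / math.tan(math.radians(light_spread_deg))

for d in (0.0, 10.0, 50.0):   # touching, then hovering 10 mm and 50 mm away
    b = blur_mm(d)
    print(f"{d:5.1f} mm away -> blur {b:6.2f} mm "
          f"-> recovered {distance_from_blur_mm(b):5.1f} mm")
```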

The idea is so novel that MIT researchers haven’t been able to get LCDs with built-in optical sensors to test, though they say companies such as Sharp and Planar have plans to produce them soon.

For now, Hirsch and his colleagues at MIT have mocked up a display in the lab to run their experiments. The mockup uses a camera that is placed some distance from the screen to record the images that pass through the blocks of black-and-white squares.

The bi-directional screens from MIT can be manufactured in a thin, portable package that requires few additional components compared with LCD screens already in production, says MIT. (See video below for an explanation of how it works.)

Despite the ease of production, it will be five to ten years before such a system could make it into the hands of consumers, cautions Microsoft’s Wigdor. Even with the hardware in hand, it’ll take at least that long before companies like Microsoft make software that can make use of gestures.

“The software experience for gestural interface systems is unexplored in the commercial space,” says Wigdor.

Photo/Video: MIT