MIT’s Copenhagen Wheel turns your bike into a hybrid, personal trainer

You really can’t fault MIT’s branding strategy here. Debuting at the biggest climate change conference since Kyoto, its Copenhagen Wheel is a mixture of established technologies with the ambition to make us all a little bit greener and a little bit more smartphone-dependent. On the one hand, it turns your bike into a hybrid — with energy being collected from regenerative braking and distributed when you need a boost — but on the other, it also allows you to track usage data with your iPhone, turning the trusty old bike into a nagging personal trainer. The Bluetooth connection can also be used for conveying real-time traffic and air quality information, if you care about such things, and Copenhagen’s mayor has expressed her interest in promoting the wheels as an alternative commuting method. Production is set to begin next year, but all that gear won’t come cheap, as prices for the single wheel are expected to match those of full-sized electric bikes. Video after the break.

MIT’s Copenhagen Wheel turns your bike into a hybrid, personal trainer originally appeared on Engadget on Wed, 16 Dec 2009 07:22:00 EST.

Gestural Computing Breakthrough Turns LCD Into a Big Sensor

Some smart students at MIT have figured out how to turn a typical LCD into a low-cost, 3-D gestural computing system.

Users can touch the screen to activate controls on the display, but as soon as they lift their finger off the screen, the system can interpret their gestures in the third dimension, too. In effect, it turns the whole display into a giant sensor capable of telling where your hands are and how far away from the screen they are.

“The goal with this is to be able to incorporate the gestural display into a thin LCD device like a cell phone and to be able to do it without wearing gloves or anything like that,” says Matthew Hirsch, a doctoral candidate at the Media Lab who helped develop the system. MIT will present the idea at the Siggraph conference on Dec. 19.

The latest gestural interface system is interesting because it has the potential to be produced commercially, says Daniel Wigdor, a user experience architect for Microsoft.

“Research systems in the past put thousands of dollars worth of camera equipment around the room to detect gestures and show it to users,” he says. “What’s exciting about MIT’s latest system is that it is starting to move towards a form factor where you can actually imagine a deployment.”

Gesture recognition is the area of user interface research that tries to translate movement of the hand into on-screen commands. The idea is to simplify the way we interact with computers and make the process more natural. That means you could wave your hand to scroll pages, or just point a finger at the screen to drag windows around.

MIT has become a hotbed for researchers working in the area of gestural computing. Last year, an MIT researcher showed a wearable gesture interface called the ‘SixthSense’ that recognizes basic hand movements.

But most existing systems involve expensive cameras or require you to wear different-colored tracking tags on your fingers. Some systems use small cameras that can be embedded into the display to capture gestural information. But even with embedded cameras, the drawback is that the cameras are offset from the center of the screen and won’t work well at short distances. They also can’t switch effortlessly between gestural commands (waving your hands in the air) and touchscreen commands (actually touching the screen).

The latest MIT system uses an array of optical sensors that are arranged right behind a grid of liquid crystals, similar to those used in LCD displays. The sensors can capture the image of a finger when it is pressed against the screen. But as the finger moves away the image gets blurred.

By displacing the layer of optical sensors slightly relative to the liquid crystal array, the researchers can modulate the light reaching the sensors and use it to capture depth information, among other things.

In this case, the liquid crystals serve as a lens and help generate a black-and-white pattern that lets light through to the sensors. That pattern alternates with whatever image the LCD is displaying, switching so rapidly that the viewer doesn’t notice it.

The pattern also allows the system to decode the images better, capturing the same depth information that a pinhole array would, but doing it much more quickly, say the MIT researchers.
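
The geometry behind that depth capture is simple enough to sketch. The toy Python below is not the MIT group’s code, and the sensor gap, finger width, and distances are invented for illustration; it only shows how a pinhole-style mask turns distance into image size at the sensor layer, so measuring how far a fingertip’s pattern spreads tells you how far away it is.

# Toy 1-D illustration of the idea above; not MIT's reconstruction code.
# A pinhole-style mask turns distance into image size on the sensor layer:
# the closer the fingertip, the larger the pattern it casts. All numbers
# (sensor gap, finger width) are assumptions made up for illustration.

GAP_MM = 5.0            # assumed spacing between the mask and the sensor layer
FINGER_WIDTH_MM = 15.0  # assumed width of the fingertip being imaged

def image_width(finger_distance_mm):
    """Width of the fingertip's image cast through a single pinhole.

    Basic pinhole geometry: magnification = gap / object distance.
    """
    return FINGER_WIDTH_MM * GAP_MM / finger_distance_mm

def estimate_distance(measured_width_mm):
    """Invert the pinhole relation to recover how far away the finger is."""
    return FINGER_WIDTH_MM * GAP_MM / measured_width_mm

for true_distance in (10.0, 50.0, 200.0):  # millimeters above the display
    w = image_width(true_distance)
    print(f"{true_distance:6.1f} mm away -> image {w:5.2f} mm wide "
          f"-> recovered {estimate_distance(w):6.1f} mm")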

The idea is so novel that MIT researchers haven’t been able to get LCDs with built-in optical sensors to test, though they say companies such as Sharp and Planar have plans to produce them soon.

For now, Hirsch and his colleagues at MIT have mocked up a display in the lab to run their experiments. The mockup uses a camera that is placed some distance from the screen to record the images that pass through the blocks of black-and-white squares.

The bi-directional screens can be manufactured in a thin, portable package that requires few additional components compared with LCD screens already in production, says MIT. (See video below for an explanation of how it works.)

Despite the ease of production, it will likely be five to ten years before such a system makes it into the hands of consumers, cautions Microsoft’s Wigdor. Even with the hardware in hand, it’ll take at least that long before companies like Microsoft develop software that takes advantage of gestures.

“The software experience for gestural interface systems is unexplored in the commercial space,” says Wigdor.

Photo/Video: MIT


MIT gestural computing makes multitouch look old hat

Ah, the MIT Media Lab, home to Big Bird’s illegitimate progeny, augmented reality projects aplenty, and now three-dimensional gestural computing. The new bi-directional display being demoed by the Cambridge-based boffins performs both multitouch functions that we’re familiar with and hand movement recognition in the space in front of the screen — which we’re also familiar with, but mostly from the movies. The gestural motion tracking is done via embedded optical sensors behind the display, which are allowed to see what you’re doing by the LCD alternating rapidly (invisible to the human eye, but probably not to human pedantry) between what it’s displaying to the viewer and a pattern for the camera array. This differs from projects like Natal, which have the camera offset from the display and therefore cannot work at short distances, but if you want even more detail, you’ll find it in the informative video after the break.

[Thanks, Rohit]

MIT gestural computing makes multitouch look old hat originally appeared on Engadget on Fri, 11 Dec 2009 05:31:00 EST.

MIT Wins DARPA Balloon Challenge

A group of MIT students won DARPA’s $40,000 Network Challenge by being the first to submit the locations of 10 moored, red, 8-foot weather balloons placed at fixed spots across the continental U.S. The team accomplished the goal in just under nine hours, sorting through tons of misinformation floating around on Facebook, Twitter, and other sites.

The Washington Post reports that the winning team, headed by post-doc Riley Crane, set up an information-gathering pyramid that allotted $4,000 per balloon: $2,000 for the first person to spot it, with progressively smaller amounts for the people up the chain who had recruited that spotter. The team will donate the rest of the award money to charity.
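
For the curious, the arithmetic that keeps those payouts inside the per-balloon budget is worth a quick sketch. The Python below assumes the reported scheme of halving the payment at each step up the recruitment chain; the chain length is arbitrary and only illustrative.

# Sketch of a halving payout chain under the $4,000-per-balloon allotment:
# $2,000 to the spotter, $1,000 to whoever recruited them, $500 to that
# person's recruiter, and so on. However long the chain, the payments sum
# to less than $4,000, and MIT said the remainder would go to charity.

def payout_chain(finder_prize=2000.0, links=6):
    """Payments going up the recruitment chain, halving at each step."""
    return [finder_prize / 2 ** i for i in range(links)]

chain = payout_chain(links=6)
print("payments up the chain:", ", ".join(f"${p:,.2f}" for p in chain))
print(f"total paid out: ${sum(chain):,.2f} of the $4,000 per-balloon budget")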

Harvard and MIT researchers working to simulate the visual cortex to give computers true sight

It sounds like a daunting task, but some researchers at Harvard and MIT have banded together to basically “reverse engineer” the human brain’s ability to process visual data into usable information. However, instead of testing one processing model at a time, they’re using a screening technique borrowed from molecular biology to test thousands of models at once against particular object recognition tasks. To get the computational juice to accomplish this feat, they’ve been relying heavily on GPUs, saying the off-the-shelf parallel computing setup they’ve got gives them hundred-fold speed improvements over conventional methods. So far they claim their results are besting “state-of-the-art computer vision systems” (which, if iPhoto’s skills are any indication, wouldn’t take much), and they hope to not only improve tasks such as face recognition, object recognition and gesture tracking, but also to apply their knowledge back into a better understanding of the brain’s mysterious machinations. A delicious cycle! There’s a video overview of their approach after the break.
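
The screening pattern itself is easy to caricature in code. The sketch below is not the researchers’ pipeline (their candidates are biologically inspired visual models scored on GPUs against real image sets); it only illustrates the borrowed-from-biology idea of generating lots of candidate models, scoring them all on the same recognition task, and keeping the best performers.

# Caricature of high-throughput model screening; not the Harvard/MIT pipeline.
# The real work screens thousands of biologically inspired vision models on
# GPUs; this toy scores random linear "models" on synthetic data with NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for image features and binary object labels.
features = rng.normal(size=(200, 64))
labels = (features[:, :8].sum(axis=1) > 0).astype(int)

def random_model():
    # One candidate "model": a random linear readout over the features.
    return rng.normal(size=64)

def accuracy(model):
    # Fraction of examples the candidate classifies correctly.
    predictions = (features @ model > 0).astype(int)
    return float((predictions == labels).mean())

# Screen a big batch of candidates and keep the top scorers; running this
# kind of brute-force search at realistic scale is what the GPUs buy you.
candidates = [random_model() for _ in range(5000)]
scores = np.array([accuracy(m) for m in candidates])
top = np.argsort(scores)[-5:][::-1]
print("best candidate accuracies:", np.round(scores[top], 3))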

[Thanks, David]

Harvard and MIT researchers working to simulate the visual cortex to give computers true sight originally appeared on Engadget on Fri, 04 Dec 2009 09:29:00 EST.

MIT researchers develop liquid metal battery for the grid and the home

We’ve seen plenty of green power research over the years, from solar plants to underwater turbines, but relying on the sun or the sea for electricity is not without its challenges: the sun doesn’t always shine, for instance, and sometimes the water is calm. A group at MIT led by professor Donald Sadoway is developing grid-scale storage solutions for times when electricity isn’t being generated. Since these batteries are intended for the power grid instead of cellphones and Roombas, the researchers can use materials not feasible in consumer electronics — in this case, high-temperature liquid metals. Besides being recently awarded a grant from ARPA-E (the Advanced Research Projects Agency-Energy) to put these things in green power facilities, MIT has just embarked on a joint venture with the French oil company Total to develop a smaller-scale version of the technology for homes and office buildings.

MIT researchers develop liquid metal battery for the grid and the home originally appeared on Engadget on Fri, 20 Nov 2009 11:47:00 EST.

Digital ‘Cloud’ could form over London for the 2012 Olympics

No, we’re not talking about “the cloud” where data goes to disappear and (hopefully) be retrieved again. We’re talking about an actual (well, artificial) cloud that promises to be both a real structure and a massive digital display. That’s the bright idea of a team of researchers from MIT, anyway, and it’s now been shortlisted in a competition designed to find a new tourist attraction to be built in London for the 2012 Olympics. Dubbed simply “The Cloud,” the structure would consist of two 400-foot-tall mesh towers linked by a series of interconnected plastic bubbles, which would house an observation deck inside and be used to display everything from Olympic scores and highlights to a “barometer of the city’s interests and moods” outside (that latter bit comes courtesy of the group’s partnership with Google). As if that wasn’t enough, the whole thing also promises to be funded entirely by micro-payments from the public (which would also determine its final size) and to be completely self-powered, relying on a combination of solar power and regenerative braking from the lifts in the towers. Video after the break.

[Via Inhabitat]

Digital ‘Cloud’ could form over London for the 2012 Olympics originally appeared on Engadget on Thu, 12 Nov 2009 02:45:00 EST.

Sixth Sense creator to release code, wearable gesture interface becomes a reality for all

If we’re being honest (and trust us, we’re being honest), Pranav Mistry’s Sixth Sense contraption has always baffled us. It’s kind of like Sony’s Rolly. It looks cool, it sounds rad, but we’re fairly certain only 2.49 people actually know and fully comprehend how it works. That said, we’re more than jazzed about the possibility of having wearable gesture interfaces gracing every human we come into contact with, and rather than attempting to make his invention “comply with some kind of corporate policy,” he’s purportedly aiming to release the source code into the wild in order to let “people make their own systems.” Nice guy, huh? All told, the Sixth Sense can be built for around $350 (plus oodles of unpaid time off), and we’re pretty certain that a few talented DIYers can get this thing whipped into shape far quicker than Mega Corp X. So, how’s about a release date for that code?

[Via AboutProjectors]

Sixth Sense creator to release code, wearable gesture interface becomes a reality for all originally appeared on Engadget on Sat, 07 Nov 2009 10:12:00 EST.

MIT’s Affective Intelligent Driving Agent is KITT and Clippy’s lovechild (video)

If we’ve said it once, we’ve said it a thousand times: stop trying to make robots into “friendly companions!” MIT must have some hubris stuck in its ears, as its labs are back at it with what looks like Clippy gone 3D, with an extra dash of Knight Rider-inspired personality. What we’re talking about here is a dashboard-mounted AI system that collects environmental data, such as local events, traffic and gas stations, and combines it with a careful analysis of your driving habits and style to make helpful suggestions and note points of interest. By careful analysis we mean it snoops on your every move, and by helpful suggestions we mean it probably nags you to death (its own death). Then again, the thing’s been designed to communicate with those big Audi eyes, making even our hardened hearts warm just a little. Video after the break.
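
If you’re wondering what those “helpful suggestions” might look like under the hood, here is a purely hypothetical sketch; nothing below reflects AIDA’s actual implementation, it just illustrates the kind of fusion of environmental data and driver habits the researchers describe.

# Purely hypothetical sketch of combining a driver profile with nearby
# points of interest to decide what the dashboard agent mentions first.
# All names, fields, and weights are invented for illustration.

def suggestion_score(poi, driver, fuel_level):
    """Higher score means more worth mentioning to the driver right now."""
    score = -poi["detour_minutes"]                 # prefer small detours
    if poi["kind"] == "gas_station" and fuel_level < 0.25:
        score += 10                                # a low tank makes fuel urgent
    if poi["kind"] in driver["frequent_stops"]:
        score += 3                                 # learned habits count too
    return score

driver = {"frequent_stops": {"coffee"}}
environment = [
    {"name": "gas station two blocks over", "kind": "gas_station", "detour_minutes": 2},
    {"name": "your usual coffee shop", "kind": "coffee", "detour_minutes": 4},
    {"name": "science museum exhibit", "kind": "event", "detour_minutes": 9},
]

ranked = sorted(environment,
                key=lambda poi: suggestion_score(poi, driver, fuel_level=0.15),
                reverse=True)
print("nag order:", [poi["name"] for poi in ranked])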

MIT’s Affective Intelligent Driving Agent is KITT and Clippy’s lovechild (video) originally appeared on Engadget on Fri, 30 Oct 2009 04:48:00 EST.

Pixel Qi e-ink / LCD hybrid display to debut on tablet next month?

It’s been far, far too long (read: four months) since we’ve heard a peep from the gentle souls over at Pixel Qi, but it looks like the long, heart-wrenching wait for the hybrid display that’s bound to revolutionize Western civilization is nearing an end. According to the startup’s CEO herself, Mary Lou Jepsen, the primetime-ready 3Qi display should make its glorious debut on an undisclosed tablet to be announced next month. For those out of the loop, this transflective display contains both e-ink and LCD properties, one for outdoor reading scenarios and the other for multimedia viewing. The amazing part is that toggling between the two is as simple as flipping a switch, which obviously means great things for battery life on whatever device it’s shoved into. We’ll be keeping our eyes peeled for more, but do us a favor and cross your fingers for good luck. Toes too, por favor.

[Thanks, Tom]

Pixel Qi e-ink / LCD hybrid display to debut on tablet next month? originally appeared on Engadget on Sat, 17 Oct 2009 15:11:00 EST.