Kinect sensor bolted to an iRobot Create, starts looking for trouble

While there have already been a lot of great proof-of-concepts for the Kinect, what we’re really excited about are the actual applications that will come from it. At the top of our list? Robots. The Personal Robots Group at MIT has put a battery-powered Kinect sensor on top of the iRobot Create platform, and is beaming the camera and depth sensor data to a remote computer for processing into a 3D map — which in turn can be used for navigation by the bot. They’re also using the data for human recognition, which allows for controlling the bot using natural gestures. Looking to do something similar with your own robot? Well, the ROS folks have a Kinect driver in the works that will presumably allow you to feed all that great Kinect data into ROS’s already impressive libraries for machine vision. Tie in the Kinect’s multi-array microphones, accelerometer, and tilt motor, and you’ve got a highly aware, semi-anthropomorphic “three-eyed” robot just waiting to happen. We hope it will be friends with us. Video of the ROS experimentation is after the break.


Kinect sensor bolted to an iRobot Create, starts looking for trouble originally appeared on Engadget on Wed, 17 Nov 2010 21:38:00 EDT.

Source: squadbot (YouTube), ROS.org

Tech Aids Reduce Driver Stress, Says MIT-Ford Study


If your car parks itself, you’ll lead a less stressful life: technology-based driver assistance tools lower driver stress and increase safety, says a new MIT study sponsored by Ford. Automated parallel parking produced heart rates 12 beats per minute lower than parking manually (a higher heart rate indicates elevated stress). In a second test, backing out of a confined space, drivers on their own sometimes missed crossing traffic and failed to stop, but never when the car’s cross-traffic alert system was enabled.

Inhabitat’s Week in Green: solar paper planes, Denmark’s flaming tower, and used coffee power

Each week our friends at Inhabitat recap the week’s most interesting green developments and clean tech news for us — it’s the Week in Green.

Green power lit up the world this week as ZenithSolar smashed the record for solar efficiency with its massive parabolic mirrors and Denmark unveiled plans to construct a towering “cathedral” that will transform waste into energy. We were also all abuzz about these batteries made from used coffee capsules, and MIT rolled out a new type of foldable, paper-thin solar cell.

Speaking of super-thin foldable tech, this week we showcased the world’s first biodegradable paper watch and we spotted an ingenious folding beer box that can transform a six-pack into a pitch-perfect xylophone. And while you’re working on that one, you can keep your rowdy kids busy with our Top 5 smart smartphone apps for kids that educate and entertain.

High-tech lighting was another hot topic this week as GE launched a new super-bright LED bulb that harnesses jet engine cooling techniques to cut its energy use. GE also flipped the switch on its funky new hybrid halogen-CFL light bulbs, and we saw San Diego blaze a trail for energy-efficient lighting as they unveiled plans to construct the nation’s largest interactive LED light show – on a bridge! Finally, we wrapped up this week’s Apple news with a look at a chic new laptop bag courtesy of vegan handbag company Matt & Nat.

Inhabitat’s Week in Green: solar paper planes, Denmark’s flaming tower, and used coffee power originally appeared on Engadget on Sun, 24 Oct 2010 20:00:00 EDT.


Turn Any Surface Into a Touchscreen Interface


While the whole tech world goes into speculation overdrive over the smallest shred of gossip about today’s Apple “back to the Mac” event, the real future of computers is on your coffee table. LuminAR, a project of MIT’s Fluid Interfaces Group, seeks to eradicate the traditional mouse-keyboard-screen interface. It uses a specialized pico projector to cast a computer interface onto any surface, coffee stains and all, and reads your hand movements to turn that surface into a working touchscreen. Literally, anywhere.

If you ever dreamed of playing Tetris on your sleeping grandpa, the future is yours, my friend.

The full LuminAR setup comes affixed to a robotic lamp arm that can find an empty area to project an interface onto, or read hand gestures and project in a pre-programmed location (for example, if you want it to always show you videos on the wall as opposed to your desk, it can do that). But the actual LuminAR “bulb” can be removed from the robotic arm and placed in any conventional light socket, computer, or cell phone. It’s the ultimate augmented reality accessory, turning the entire physical world into your personal computer desktop.

Video after the jump.

Grad Student Devises Method to Use a Webcam to Read Vital Signs

Have you ever wondered if the naked stranger on Chatroulette had too much cholesterol in their diet? Well, soon that worry will be a thing of the past thanks to one MIT grad student. Ming-Zher Poh has devised a way to automatically (and accurately) read basic vital signs using technology as simple as a built-in laptop webcam.

The tech works by measuring and analyzing slight variations in brightness produced by the flow of blood through blood vessels in the face. When compared to a commercially available, FDA-approved blood-volume pulse sensor, the system produced pulse rates that agreed to within three beats per minute.

Which is not bad for a dinky webcam. Conceivably, this technology could be developed into an app for any smartphone with a camera.
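To get a feel for the idea, here is a minimal sketch of extracting a pulse from a brightness signal. This is not Poh’s actual pipeline (which reportedly analyzes color channels with more sophisticated signal separation); the function name, band limits, and synthetic demo are our own. The principle is the same: the heartbeat shows up as a tiny periodic wiggle, so band-limit the signal to plausible heart rates and find the dominant frequency.

```python
import numpy as np

def estimate_pulse_bpm(brightness, fps, lo_hz=0.75, hi_hz=4.0):
    """Estimate pulse (beats per minute) from a 1-D series of mean
    face-region brightness samples taken at `fps` frames per second."""
    x = np.asarray(brightness, dtype=float)
    x = x - x.mean()                              # drop the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    # Only consider the plausible heart-rate band (45-240 bpm)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic demo: a 72 bpm "pulse" (1.2 Hz) buried in camera noise
np.random.seed(0)
fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(t.size)
print(round(estimate_pulse_bpm(signal, fps)))  # ≈ 72
```

Real webcam footage adds complications this sketch ignores, such as face tracking, motion artifacts, and changing room light.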

In other real-world applications, doctors could help diagnose patients around the globe via the internet. Vitals could be remotely detected in patients for whom the very process of taking readings might be uncomfortable, such as burn victims or newborn babies. Poh has even put forward that this tech might one day be used in a bathroom mirror that could tell the mirror-gazer various vital signs, including heart rate, blood pressure, and blood-oxygen levels.

MIT Medical Lab Mirror tells your pulse with a webcam (video)

MIT Medical Lab Mirror tells your pulse with a webcam

Mirror mirror on the wall, who has the highest arterial palpation of them all? If you went to MIT you might be able to answer that question thanks to the work of grad student Ming-Zher Poh, who has found a way to tell your pulse with just a simple webcam and some software. By looking at minute changes in the brightness of the face, the system can find the beating of your heart even at a low resolution, comparable to the results of a traditional FDA-approved pulse monitor. Right now the mirror above is just a proof of concept, but the idea is that the hospital beds or surgery rooms of tomorrow might be able to monitor a patient’s pulse without requiring any wires or physical contact, encouraging news for anyone who has ever tried to sleep whilst wearing a heart monitor.


MIT Medical Lab Mirror tells your pulse with a webcam (video) originally appeared on Engadget on Thu, 07 Oct 2010 11:12:00 EDT.

Via: Switched. Source: MIT News

The Internet of Cars: New R&D for Mobile Traffic Sensors

When we talk about “the internet of things,” we usually begin with commercial and household applications — tracking inventory, or a lost remote. But one future of networked objects might be in public information and infrastructure: the internet of cars.

For four years, MIT’s CarTel project has been tracking the driving patterns of GPS-equipped taxis in metro Boston. The research team, led by computer scientists Hari Balakrishnan and Sam Madden, thinks we can stop spotting traffic jams after the fact with news helicopters or roadside sensors by equipping cars themselves with position sensors and wireless connections. They’ve developed a new software algorithm that optimizes information-sharing between multiple nodes on a network, when those nodes are on the move, drifting in and out of close contact with one another.
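The raw material here is simple: timestamped speed reports from many GPS-equipped cars, aggregated per road segment. As a toy illustration of what that probe data buys you (the segment names, free-flow baselines, and 50% threshold below are invented for the example, and CarTel’s actual algorithms are far more sophisticated):

```python
from collections import defaultdict

# Assumed free-flow speeds per road segment (illustrative values, km/h)
FREE_FLOW_KMH = {"storrow_dr": 60, "mass_ave": 40}

def congested_segments(probes, threshold=0.5):
    """probes: iterable of (segment_id, speed_kmh) reports from many cars.
    Flags segments whose mean probe speed falls below `threshold` times
    the segment's free-flow speed, returning {segment: mean_speed}."""
    speeds = defaultdict(list)
    for segment, speed in probes:
        speeds[segment].append(speed)
    flagged = {}
    for segment, obs in speeds.items():
        mean = sum(obs) / len(obs)
        if mean < threshold * FREE_FLOW_KMH[segment]:
            flagged[segment] = mean
    return flagged

# Three cars phone home: two crawling on Storrow, one moving on Mass Ave
reports = [("storrow_dr", 12), ("storrow_dr", 18), ("mass_ave", 35)]
print(congested_segments(reports))  # {'storrow_dr': 15.0}
```

The hard part the MIT team is tackling is not this aggregation step but getting the reports delivered at all, when the reporting nodes are cars drifting in and out of radio contact.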

Equipping cars with position and network technology has several advantages over traditional traffic-tracking methods. It’s already here, in the form of on-board GPS systems and the RFID fobs city car-sharing programs use to track cars and give multiple drivers access to vehicles. It’s less expensive than helicopters, and less static than fixed roadside sensors. Finally, news organizations and planners can see traffic tie ups as or even before they happen, rather than after the fact.

There are potential privacy concerns. Why should I allow the Department of Transportation, my local news team, or any entity to track my movements? Collection of this information would have to be closely regulated, highly encrypted, and strictly anonymized — perhaps even initially restricted to public and publicly licensed vehicles like public transit, cabs, police/fire/rescue vehicles, or cars and trucks owned by local government. The whole point is that when it comes to plotting traffic patterns, tracking unique users simply doesn’t matter.

But the potential upsides are tremendous. Having better knowledge of actual traffic patterns could help urban planners improve their transportation infrastructure, from retiming traffic lights to restructuring bus routes. It could help first responders and ordinary drivers avoid potential tie-ups.

Researchers at Ford and Microsoft are sufficiently intrigued. They plan to test the MIT researchers’ algorithm and network design in future versions of Sync, the Redmond-designed, Detroit-implemented automotive communication and entertainment system.

Image and video from Ford Motor Company



Inhabitat’s Week in Green: Honeycomb skyscrapers, solar funnels, and the Karma PHEV supercar

Each week our friends at Inhabitat recap the week’s most interesting green developments and clean tech news for us — it’s the Week in Green.

High tech architecture took the spotlight this week as Aedas unveiled a set of photovoltaic crystalline honeycomb skyscrapers for Abu Dhabi and San Francisco unfurled plans for a sail-shaped solar stadium for the America’s Cup yacht race. We also took an exclusive look inside a high-tech solar home that actually produces more energy than it consumes, and spotted a new technology that can transform any home’s electrical wiring into an information-transmitting antenna.

We also showcased some of the world’s most efficient vehicles as the winners of the $10 million Automotive X-Prize were announced, and we were excited to hear that the first factory-built Fisker Karma supercar will be rolling up to the Paris Auto Show next month. Finally, we peeked inside Jay Leno’s envy-inducing green garage in this week’s episode of Green Overdrive.

In other news, MIT made waves on the renewable energy front as they revealed a new “solar funnel” technology that could increase the efficiency of photovoltaic cells 100 times. We also took a first look at Eddy GT’s new streamlined city-friendly wind turbine, and we saw Tesla batteries jump-start residential solar systems by storing excess energy.

Inhabitat’s Week in Green: Honeycomb skyscrapers, solar funnels, and the Karma PHEV supercar originally appeared on Engadget on Sun, 19 Sep 2010 21:20:00 EDT.


The Singularity. In Opera Form. With Robots.


MIT’s Media Lab creates all sorts of neat stuff. Futuristic robot sort of stuff. Jetsons stuff. And now the program is combining the technology of the future with a cue from the past as it premieres its first opera, Death and the Powers.

The one-act opera, which was 10 years in the making, will make its premiere September 24-26 in Monte Carlo, Monaco. Composer and Media Lab professor Tod Machover, who has created technology-infused instruments for Yo-Yo Ma and Prince, conceptualized Death and the Powers as a way for technology and music to complement each other. The production features a human cast alongside an animated multimedia set design that includes nine life-sized singing “OperaBots.”

The story is straight Singularity mythology. (If you didn’t know, the Singularity is the hip new geek religion; if you’re reading GearLog, you’d probably be interested in joining our cult. It’s run by Google. Really.) Death is the tale of a plucky mad scientist who uploads his memories and personality into “The System.” The actor portraying the scientist disappears after the first scene and is thereafter represented by various robots, lights, and assorted fixtures that make up the on-stage “System.” Off-stage, the singer’s performance is captured by software that monitors his volume and pitch, as well as his muscle tension and breathing patterns, and reflects those attributes in the on-stage mechanisms of The System. Machover dubs this use of technology “disembodied performance.” It’s like if T-Pain had an unlimited budget and a dedicated staff of research grads.

It sounds amazing. I can’t wait to check it out. It’s going to be the Avatar of the opera world. In the meantime, I hope Professor Machover will next use technology to improve upon ballet, which is awful. Just awful.

Some video of the production after the jump.

Your Lost Gadgets Will Find Each Other

Graphic by Christine Daniloff, via MIT News Office

Sometimes when one of my remotes is missing, I interrogate the others: “Where’s your friend? I know you know something!” In the future, with wireless positioning systems, a version of that method might actually almost work.

Researchers at MIT’s Wireless Communications and Network Sciences Group think networks of devices that communicate their positions to each other will work better than all of the devices transmitting to a single receiver. The latter is how GPS works, and if you’ve used it, you know it isn’t always very precise. In the lab, MIT’s robots can locate a wireless transmitter to within a millimeter.

This seems almost intuitive: the more “eyes” you have on an object, the easier it is to triangulate — the robot version of “the wisdom of crowds.” But the key conceptual breakthrough here isn’t actually the number of transmitters or their network arrangement, but what they’re transmitting. MIT News’s Larry Hardesty writes:

Among [the research group’s] insights is that networks of wireless devices can improve the precision of their location estimates if they share information about their imprecision. Traditionally, a device broadcasting information about its location would simply offer up its best guess. But if, instead, it sent a probability distribution — a range of possible positions and their likelihood — the entire network would perform better as a whole. The problem is that sending the probability distribution requires more power and causes more interference than simply sending a guess, so it degrades the network’s performance. [The] group is currently working to understand the trade-off between broadcasting full-blown distributions and broadcasting sparser information about distributions.
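The quoted insight can be seen in the simplest possible case: fusing two 1-D Gaussian position estimates. The inverse-variance weighting below is a standard textbook result, not the MIT group’s richer scheme (they exchange whole probability distributions), but it shows why sharing imprecision helps: a confident node’s report should count for more than a shaky one’s.

```python
def fuse(estimates):
    """Fuse 1-D Gaussian position estimates, each given as (mean, variance),
    by inverse-variance weighting. A lower variance means a more confident
    node, so its report gets a proportionally larger weight."""
    weights = [1.0 / var for _, var in estimates]
    mean = sum(w * m for (m, _), w in zip(estimates, weights)) / sum(weights)
    var = 1.0 / sum(weights)          # fused estimate is tighter than either input
    return mean, var

# Node A is confident (variance 1.0); node B is not (variance 9.0)
a, b = (10.0, 1.0), (14.0, 9.0)
fused_mean, fused_var = fuse([a, b])
naive_mean = (a[0] + b[0]) / 2        # averaging point guesses alone: 12.0
print(fused_mean, fused_var)          # ≈ 10.4, 0.9 -- pulled toward the confident node
```

A device that only broadcasts its best-guess position forces its neighbors into the naive average; broadcasting even one extra number, its variance, lets the network weight reports sensibly, at the cost of the extra transmission overhead the article describes.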

Much of this research is still theoretical, or has only been deployed in lab settings. But Princeton’s H. Vincent Poor is optimistic about the MIT group’s approach: “I don’t see any major obstacles for transferring their basic research to practical applications. In fact, their research was motivated by the real-world need for high-accuracy location-awareness.” Like precisely which cushion my remote control is underneath.

Warning: Very Dry Flash Video Of Robots Finding Things Follows
