Invisible iPhone prototype puts the ‘hand’ back in ‘handset’ (video)

Not too long ago, the invisible iPhone was nothing more than satirical fodder for the Onion. Now, Patrick Baudisch and his team of researchers at the Hasso-Plattner Institute have moved closer to making it a reality, with a new interface that can essentially transfer an iPhone touchscreen to the palm of your hand. The device involves a Kinect-like depth camera, mounted on a tripod, that registers the movements of a person’s finger across his or her palm. Special software then determines the actions those gestures would execute on a user’s iPhone before transmitting the commands to a physical phone via Wi-Fi. Unlike MIT’s motion-based SixthSense interface, Baudisch’s imaginary phone doesn’t require users to learn a new dictionary of gestures; it relies solely on the muscle memory that so many smartphone users have developed. During their research, Baudisch and his colleagues found that iPhone owners could accurately locate about two-thirds of their apps on their palms without even looking at their devices. At the moment, the prototype still involves plenty of bulky equipment, but Baudisch hopes to eventually incorporate a smaller camera that users could wear more comfortably — allowing them to answer their imaginary phones while doing the dishes and to spend hours chatting with their imaginary friends. Head past the break to see the prototype in action.
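
The core trick, mapping a fingertip position on the palm to a home-screen icon, can be sketched roughly like this (the function names, the 4x4 grid, and the coordinates are illustrative assumptions, not Baudisch’s actual code):

```python
# Hypothetical sketch: the depth camera reports a fingertip position in
# camera space; we normalize it against the palm's bounding box, snap it
# to a 4x4 home-screen grid, and look up which app the user's muscle
# memory put in that slot.

GRID_ROWS, GRID_COLS = 4, 4

def palm_to_icon(x, y, palm_box, layout):
    """Map a fingertip (x, y) on the palm to an app name.

    palm_box -- (left, top, width, height) of the palm in camera space
    layout   -- dict {(row, col): app_name}, the user's home screen
    """
    left, top, w, h = palm_box
    # Normalize to [0, 1) within the palm, clamping noisy samples.
    u = min(max((x - left) / w, 0.0), 0.999)
    v = min(max((y - top) / h, 0.0), 0.999)
    col = int(u * GRID_COLS)
    row = int(v * GRID_ROWS)
    return layout.get((row, col))  # None -> empty slot, no command sent

layout = {(0, 0): "Phone", (0, 1): "Mail", (3, 3): "Camera"}
app = palm_to_icon(30.0, 10.0, palm_box=(0, 0, 100, 80), layout=layout)
# A tap a third of the way across the top of the palm lands on "Mail".
```

The looked-up app (or tap coordinate) would then be forwarded to the real handset over the network, which is the part the Wi-Fi radio handles in the prototype.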


Invisible iPhone prototype puts the ‘hand’ back in ‘handset’ (video) originally appeared on Engadget on Mon, 23 May 2011 09:05:00 EDT. Please see our terms for use of feeds.

Via: MIT Technology Review  |  Source: Hasso-Plattner Institute

US lags in broadband adoption and download speeds, still has the best rappers

US Ranks #9

U, S, A! We’re number nine! Wait, nine? At least according to a recent broadband survey by the FCC, yes. The good ol’ US of A ranked ninth (out of the 29 member countries of the Organization for Economic Co-operation and Development) in fixed broadband penetration on a per capita basis, and 12th in terms of pure percentage — behind the UK, South Korea, Iceland, the Netherlands, and plenty of others. Though, granted, those nations lack the sprawling amber waves of grain that America must traverse with cables. The US also trailed in wireless broadband adoption, ranking ninth yet again, behind the likes of Ireland, Australia, and Sweden. Worse still, even those with broadband reported slower connections than folks in other countries: Olympia, Washington, had the highest average download speed of any US city at 21Mbps (New York and Seattle tied for second at 11.7Mbps), but was easily topped by Helsinki, Paris, Berlin, and Seoul (35.8Mbps). Well, at least we beat Slovenia… if only just barely.

US lags in broadband adoption and download speeds, still has the best rappers originally appeared on Engadget on Sat, 21 May 2011 18:01:00 EDT. Please see our terms for use of feeds.

Via: Reuters  |  Source: FCC

Paralyzed man can stand and walk again, thanks to spinal implant

Here’s an amazing story to end your week on a high note: a 25-year-old paraplegic is now walking again, thanks to a groundbreaking procedure developed by neuroscientists at the University of Louisville, UCLA, and Caltech. The Oregon man, Rob Summers, was paralyzed below the chest in 2006 after being hit by a speeding car. This week, however, doctors announced that Summers can now stand up on his own and remain standing for up to four minutes. With the help of a special harness, he can even take steps on a treadmill and can move his lower extremities for the first time in years. It was all made possible by a spinal implant that emits small pulses of electricity, designed to replicate the signals the brain usually sends to coordinate movement. Prior to receiving the implant in 2009, Summers underwent two years of training on a treadmill, with a harness supporting his weight and researchers moving his legs. This week’s breakthrough comes after 30 years of research, though scientists acknowledge that this brand of epidural stimulation still needs to be tested on a broader sample of subjects before any definitive conclusions can be drawn. Summers, meanwhile, seems understandably elated. “This procedure has completely changed my life,” the former baseball player said. “To be able to pick up my foot and step down again was unbelievable, but beyond all of that my sense of well-being has changed.” We can only imagine.

Paralyzed man can stand and walk again, thanks to spinal implant originally appeared on Engadget on Fri, 20 May 2011 08:47:00 EDT. Please see our terms for use of feeds.

Via: MedicalXpress  |  Source: University of Louisville

Key pattern analysis software times your typing for improved password protection

The recent pilfering of PlayStation Network passwords and personal info shows that having a strong passcode doesn’t always guarantee your online safety. However, key-pattern analysis (KPA) software from researchers at the American University of Beirut may be able to keep our logins secure even if they’re stolen. You create a unique profile by entering your password a few times while the code tracks the speed and timing of your keystrokes. The software then associates that data with your password as another means of authentication. Henceforth, should the magic word be entered in a different typing tempo, access is denied. We saw a similar solution last year, but that system was meant to prevent multiple users from accessing subscription databases with a single account. This KPA software allows multiple profiles per password so that your significant other can still read all your email — assuming you and your mate reside in the trust tree, of course.
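
A minimal sketch of the idea, not the Beirut team’s actual algorithm: enrollment averages the inter-key intervals from a few practice entries, and a later attempt passes only if every interval falls within a tolerance band around that profile.

```python
# Keystroke-dynamics check (illustrative): the password is only half the
# secret; the rhythm it is typed in is the other half.

def enroll(samples):
    """Average per-gap timings across practice entries.

    samples -- list of timing lists (seconds between keystrokes),
               one list per practice entry of the password.
    """
    n = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(n)]

def authenticate(profile, attempt, tolerance=0.05):
    """True only if every inter-key gap is within `tolerance` of the profile."""
    if len(attempt) != len(profile):
        return False
    return all(abs(a - p) <= tolerance for a, p in zip(profile, attempt))

profile = enroll([[0.12, 0.30, 0.18], [0.14, 0.28, 0.20]])
authenticate(profile, [0.15, 0.27, 0.17])  # owner's tempo -> True
authenticate(profile, [0.30, 0.30, 0.30])  # stolen password, wrong rhythm -> False
```

The multiple-profiles-per-password feature the article mentions would amount to keeping a list of enrolled profiles and accepting a login if the attempt matches any one of them.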

Key pattern analysis software times your typing for improved password protection originally appeared on Engadget on Fri, 20 May 2011 00:29:00 EDT. Please see our terms for use of feeds.

Via: Gizmag  |  Source: International Journal of Internet Technology and Secured Transactions

Asius’ ADEL earbud balloon promises to take some pressure off your poor eardrums

Listener fatigue: it’s a condition that affects just about everyone who owns a pair of earbuds and one that myriad manufacturers have tried to mitigate with various configurations. According to researchers at Asius Technologies, though, the discomfort you experience after extended periods of earphone listening isn’t caused by faulty design or excessively high volumes, but by “acoustic reflex.” Every time you blast music through earbuds, your ear muscles strain to reduce sound waves by about 50 decibels, encouraging many audiophiles to crank up the volume to even higher, eardrum-rattling levels. To counteract this, Asius has developed something known as the Ambrose Diaphonic Ear Lens (ADEL) — an inflatable polymer balloon that attaches to the ends of earbuds. According to Asius’ Samuel Gido, the inflated ADEL effectively acts as a “second eardrum,” absorbing sound and redirecting it away from the ear’s most sensitive regions. No word yet on when ADEL may be available for commercial use, but head past the break for a video explanation of the technology, along with the full presser.


Asius’ ADEL earbud balloon promises to take some pressure off your poor eardrums originally appeared on Engadget on Wed, 18 May 2011 14:31:00 EDT. Please see our terms for use of feeds.

Source: Gizmag

Lingodroid robots develop their own language, quietly begin plotting against mankind

It’s one thing for a robot to learn English, Japanese, or any other language that we humans have already mastered. It’s quite another for a pair of bots to develop their own, entirely new lexicon, as these two apparently have. Created by Ruth Schulz and her team of researchers at the University of Queensland and the Queensland University of Technology, the so-called Lingodroids constructed their special language after navigating their way through a labyrinthine space. As they wove around the maze, the Lingodroids created spatial maps of their surroundings, with the help of onboard cameras, laser range finders, and sonar equipment that helped them avoid walls. They also created words for each mapped location, using a database of syllables. With the mapping complete, the robots would reconvene and communicate their findings to each other, using mounted microphones and speakers. One bot, for example, would spit out a word it had created for the center of the maze (“jaya”), sending both of them off on a “race” to find that spot. If they ended up meeting at the center of the room, they would agree to call it “jaya.” From there, they could tell each other about the area they’d just come from, thereby spawning new words for direction and distance, as well. Schulz is now looking to teach her bots how to express more complex ideas, though her work is likely to hit a roadblock once these two develop a phrase for “armed revolt.”
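
The “race to the named spot” routine described above is a classic naming game, and a toy version (an illustration under assumed names, not the Lingodroid codebase) fits in a few lines: one robot coins a word from a syllable inventory for an unnamed map cell, both head there, and the word enters the shared lexicon only if they actually meet.

```python
import random

# Toy naming game: lexicons are dicts mapping map cells to coined words.

SYLLABLES = ["ja", "ya", "ku", "zo", "pi", "re"]

def coin_word(rng, n_syllables=2):
    """Invent a pronounceable word from the syllable database."""
    return "".join(rng.choice(SYLLABLES) for _ in range(n_syllables))

def naming_game(speaker_lex, hearer_lex, cell, rng, meet=True):
    """One round: the speaker names `cell` and both race to it.

    If they meet there, the word is agreed and enters both lexicons.
    """
    word = speaker_lex.get(cell) or coin_word(rng)
    if meet:
        speaker_lex[cell] = word
        hearer_lex[cell] = word
    return word

rng = random.Random(0)
a, b = {}, {}  # each robot's private lexicon
w = naming_game(a, b, cell=(2, 2), rng=rng)
# After a successful round, both robots use the same word for cell (2, 2).
```

Words for direction and distance, as in the article, would then be built on top of pairs of agreed place names rather than single cells.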

Lingodroid robots develop their own language, quietly begin plotting against mankind originally appeared on Engadget on Wed, 18 May 2011 11:07:00 EDT. Please see our terms for use of feeds.

Source: IEEE Spectrum

Rescue robots map and explore dangerous buildings, prove there’s no ‘I’ in ‘team’ (video)

We’ve seen robots do some pretty heroic things in our time, but engineers from Georgia Tech, the University of Pennsylvania, and Caltech have now developed an entire fleet of autonomous rescue vehicles, capable of simultaneously mapping and exploring potentially dangerous buildings — without allowing their egos to get in the way. Each wheeled bot measures just one square foot in size, carries a video camera capable of identifying doorways, and uses an on-board laser scanner to analyze walls. Once gathered, these data are processed using a technique known as simultaneous localization and mapping (SLAM), which allows each bot to create maps of both familiar and unknown environments, while constantly recording and reporting its current location (independently of GPS). And, perhaps best of all, these rescue Roombas are pretty team-oriented. Georgia Tech professor Henrik Christensen explains:

“There is no lead robot, yet each unit is capable of recruiting other units to make sure the entire area is explored. When the first robot comes to an intersection, it says to a second robot, ‘I’m going to go to the left if you go to the right.'”

This egalitarian robot army is the spawn of a research initiative known as the Micro Autonomous Systems and Technology (MAST) Collaborative Technology Alliance Program, sponsored by the US Army Research Laboratory. The ultimate goal is to shrink the bots down even further and to expand their capabilities. Engineers have already begun integrating infrared sensors into their design and are even developing small radar modules capable of seeing through walls. Roll past the break for a video of the vehicles in action, along with full PR.
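
The leaderless recruiting step Christensen describes reduces to a small task-allocation routine. A hedged sketch (hypothetical names, not the MAST code): a robot reaching an intersection claims one branch and hands the remaining unexplored branches to whichever idle robots it can recruit.

```python
# "I'm going to go to the left if you go to the right": split the
# unexplored branches at an intersection among the available robots.

def split_frontier(branches, robots):
    """Assign each unexplored branch to a robot, round-robin.

    branches -- e.g. ["left", "right", "straight"]
    robots   -- ids of the discovering robot plus any recruits
    Returns {robot_id: [branches]}; with too few robots, some take extras.
    """
    plan = {r: [] for r in robots}
    for i, branch in enumerate(branches):
        plan[robots[i % len(robots)]].append(branch)
    return plan

plan = split_frontier(["left", "right"], robots=["r1", "r2"])
# -> {"r1": ["left"], "r2": ["right"]}
```

Because any robot can run this when it discovers a frontier, no lead robot is needed; each unit simply recruits peers until every branch has an owner, which matches the behavior quoted above.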


Rescue robots map and explore dangerous buildings, prove there’s no ‘I’ in ‘team’ (video) originally appeared on Engadget on Tue, 17 May 2011 17:58:00 EDT. Please see our terms for use of feeds.

Via: CNET  |  Source: Georgia Tech

Mizzou Professor says nantenna solar sheet soaks up 90 percent of the sun’s rays, puts sunscreen to shame

Photovoltaics suffer from gross inefficiency, despite incremental improvements in their power-producing capabilities. According to research by a team led by a University of Missouri professor, however, newly developed nantenna-equipped solar sheets can reap more than 90 percent of the sun’s bounty — more than double the efficiency of existing solar technologies. Apparently, some “special high-speed electrical circuitry” is the secret sauce behind the solar breakthrough. Of course, the flexible film is currently a flight of fancy and won’t be generating juice for the public anytime soon. The professor and his pals still need capital for commercialization, but they believe a product will be ready within five years. Take your time, guys, it’s not like global warming’s getting worse.

[Image source: Idaho National Laboratory (PDF)]

Mizzou Professor says nantenna solar sheet soaks up 90 percent of the sun’s rays, puts sunscreen to shame originally appeared on Engadget on Tue, 17 May 2011 07:48:00 EDT. Please see our terms for use of feeds.

Source: University of Missouri

Sony makes floating-head telepresence avatars a reality, Sean Connery digs out gun and red speedos

The real world just got a little more Zardoz thanks to Tobita Hiroaki and his colleagues at Sony Computer Science Laboratory, who’ve built a telepresence blimp that projects the operator’s face across its meter-wide surface. The looming, translucent face can float about like any other blimp; an interior camera allows the user to see where it’s going. The whole thing is ominous in a completely different way from, say, a tiny googly-eyed robot perched on your shoulder, but something about its nearly silent movements still gives us the creeps – and unlike the Anybots QB, it’s not going to pick up your scone from the café. But if your dreams include having others bow before your god-like visage, you’ll have to wait awhile, as the technology’s still in its early stages. In the meantime, you can practice intoning “Zardoz is pleased!” while watching the video above.

Sony makes floating-head telepresence avatars a reality, Sean Connery digs out gun and red speedos originally appeared on Engadget on Mon, 16 May 2011 07:39:00 EDT. Please see our terms for use of feeds.

Via: Robots.net  |  Source: New Scientist

Microsoft motion controller concept kicks sand in Kinect’s puny face

Think your body’s a temple? Turns out it’s actually just the antenna the temple’s staff uses to watch football when they’re done praying. A group of engineers from Microsoft Research showcased a technology at Vancouver’s Conference on Human Factors in Computing Systems that offers gesture-based control on a scale that could make the company’s Kinect controller downright laughable. The team demonstrated how it could harness the human body’s reception of electromagnetic noise to create gesture-based computer interaction that does away with the need for a camera — though a receiver is worn on the body (the neck, in this case). The system uses the unique signals given off in different parts of the home to help measure the interaction, effectively turning one’s walls into giant control pads, which can regulate things like lighting and the thermostat. Hopefully games, too, because we can’t wait to play Pac-Man with our bedrooms.
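
Since every spot in a home leaks a slightly different mix of electromagnetic noise, the recognition step can be pictured as fingerprint matching. This is an illustration under assumed names and made-up numbers, not Microsoft’s published method: calibrate each control spot with the noise amplitudes the body-worn receiver picks up, then match a new reading to the nearest stored fingerprint.

```python
# Nearest-fingerprint position classifier: each calibrated spot stores
# the EM-noise amplitudes seen across a few frequency bands; a new
# reading is labeled with the closest fingerprint (squared L2 distance).

def classify_position(reading, fingerprints):
    """Return the calibrated label whose fingerprint is closest to `reading`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(fingerprints, key=lambda label: dist(reading, fingerprints[label]))

fingerprints = {
    "wall_switch_left":  [0.9, 0.1, 0.4],  # amplitudes per frequency band
    "wall_switch_right": [0.2, 0.8, 0.5],
    "thermostat":        [0.5, 0.5, 0.9],
}
where = classify_position([0.85, 0.15, 0.35], fingerprints)
# -> "wall_switch_left": the hand is near the left half of the wall pad
```

Mapping labels like these to actions (dim the lights, nudge the thermostat) is then an ordinary lookup, which is what makes a bare wall usable as a control pad.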

Microsoft motion controller concept kicks sand in Kinect’s puny face originally appeared on Engadget on Wed, 11 May 2011 21:29:00 EDT. Please see our terms for use of feeds.

Via: The Register  |  Source: Microsoft