Revenge of the quadrocopters: now they move in packs (video)

In case you didn’t find the original quadrocopter chilling enough, the GRASP Lab out of the University of Pennsylvania has gone and added a bit of cooperative logic to the recipe, so now multiple little drones can work together. Also upgraded with a “claw-like” gripper that allows it to pick up and transport objects, the newer quadrocopter can team up on its prey (er, payload) with its buddies, all while maintaining its exquisite balance and agility. Skip past the break to see it on video.


Revenge of the quadrocopters: now they move in packs (video) originally appeared on Engadget on Tue, 13 Jul 2010 05:02:00 EDT.

Source: TheDmel (YouTube)

Intel’s smart TV remote will recognize you, tailor content to your wishes

It’s all about how you hold it, apparently. Intel Labs has churned out a proposal for a new user-identifying system to be embedded in remote controls. Given a bit of time to familiarize itself with particular users, this new motion sensor-equipped channel switcher can correctly recognize its holder just by the way he operates it. Taking accelerometer readings every 100 nanoseconds, the researchers were able to build a data set of idiosyncrasies for each person, which would then be applied the next time he picked up the remote. Alas, accuracy rates are still well short of 100 percent, but there’s always hope for improvement, and for now it’s being suggested that the system could be employed to help with targeted advertising — which is annoying anyway, whoever it may think you are.
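Intel hasn’t published the classifier, but the basic recipe described above (turn each burst of accelerometer readings into a feature vector, then match it against per-user profiles) is easy to sketch. Here’s a minimal, illustrative version in Python; the feature choices, the nearest-centroid matching and all the names are assumptions for the sake of example, not Intel Labs’ actual method.

# Minimal sketch: guessing who is holding the remote from accelerometer data.
# Everything here is illustrative, not Intel's published approach.
import numpy as np

def extract_features(samples):
    """Reduce an (n, 3) array of x/y/z accelerometer readings from one
    handling session to a small feature vector: mean, std and peak
    magnitude per axis."""
    samples = np.asarray(samples, dtype=float)
    return np.concatenate([samples.mean(axis=0),
                           samples.std(axis=0),
                           np.abs(samples).max(axis=0)])

class RemoteUserModel:
    def __init__(self):
        self.centroids = {}  # user name -> mean feature vector

    def enroll(self, user, sessions):
        feats = np.array([extract_features(s) for s in sessions])
        self.centroids[user] = feats.mean(axis=0)

    def identify(self, session):
        f = extract_features(session)
        return min(self.centroids,
                   key=lambda u: np.linalg.norm(f - self.centroids[u]))

# Usage: enroll two household members on recorded sessions, then guess who
# just picked up the remote (synthetic data stands in for real readings).
rng = np.random.default_rng(0)
model = RemoteUserModel()
model.enroll("alice", [rng.normal(0.0, 0.2, (200, 3)) for _ in range(20)])
model.enroll("bob",   [rng.normal(0.5, 0.6, (200, 3)) for _ in range(20)])
print(model.identify(rng.normal(0.5, 0.6, (200, 3))))  # most likely "bob"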

Intel’s smart TV remote will recognize you, tailor content to your wishes originally appeared on Engadget on Mon, 12 Jul 2010 07:36:00 EDT.

Source: Branislav Kveton [PDF]; via New Scientist

1 in 10 fliers using in-flight WiFi, Aircell ‘thrilled’ with repeat usage rate

US airlines are still struggling to keep pace with their Asian contemporaries, and while we won’t be satisfied until each and every plane that soars over this great land has an integrated router, there’s no question that carriers are racing to equip their fleets with in-flight WiFi. According to recent analyst reports, fewer than 10 percent of fliers are using the service; put more charitably, one in ten fliers already are. There are obviously two ways of looking at this. On one hand, in-flight WiFi is still a fledgling technology, and it’s only available on around a third of domestic flights; from that perspective, a 10 percent overall usage rate looks pretty impressive. On the other, there’s no question that cost is a concern here, as is time: many fliers are using their moments in the air to actually disconnect for a change, and few corporations have policies in place to reimburse employees for WiFi charges accumulated in the air. Furthermore, fliers can’t even use their laptops for the first and last half-hour of flights, so unless you’re flying coast-to-coast, you may well decide that having only an hour or so to surf just isn’t worth the hassle.

We pinged Aircell (the makers of Gogo, which is by far the dominant in-flight WiFi provider in America) for comment on the linked report, and while they wouldn’t comment specifically, they did confirm that they have been “thrilled” with repeat usage rates. The company’s own research has found that “61 percent of Gogo customers have used it again within 3 months,” which is a pretty fantastic repeat rate. Now, if only it could get more people to try the service once, it might just be on its way to taking over the world. Or something. Full comment is after the break.


1 in 10 fliers using in-flight WiFi, Aircell ‘thrilled’ with repeat usage rate originally appeared on Engadget on Thu, 08 Jul 2010 17:19:00 EDT.

Source: Physorg

Christie creates baffling 3D HD CAVE ‘visual environment,’ or your average Halo display in 2020

Whenever the word “Christie” is involved, you can generally count on two things: 1) you can’t afford it and 2) you’ll want to afford it. The high-end projection company is at it once again, this time installing a truly insane visual environment at Weill Cornell Medical College in New York. The 3D HD CAVE is intended to help researchers find breakthroughs in biomedical studies, and while CAVE itself has been around for years, this particular version easily trumps prior iterations. For starters, it relies on eight Christie Mirage 3-chip DLP projectors, all of which have active stereo capabilities and can deliver a native resolution of 1,920 x 1,920. Yeah, that’s 3.68 megapixels per wall. The idea here is to provide mad scientists with a ridiculous amount of pixel density in an immersive world, but all we can think about is hooking Kinect and the next installment of Bungie’s famed franchise up to this thing. Can we get an “amen?”


Christie creates baffling 3D HD CAVE ‘visual environment,’ or your average Halo display in 2020 originally appeared on Engadget on Thu, 08 Jul 2010 16:20:00 EDT.

Via: About Projectors

Will the iPad Make You Smarter?

A growing chorus of voices argues that the internet is making us dumber. Web-connected laptops, smartphones and videogame consoles have all been cast as distracting brain mushers. But there’s reason to believe some of the newest devices might not erode our minds. In fact, some scientists think they could even make us smarter.

Could the cleaner and more modern interfaces that we see on iPads, iPhones and Android smartphones better suit the way our minds were meant to work?

While doing research for my upcoming technoculture book titled Always On, I posed the question to Muhammet Demirbilek, an assistant professor of educational technology at Suleyman Demirel University, whose findings suggest newer mobile interfaces could foster focus and improve our ability to learn.

“The interface of [the] iPad could work well for us,” Demirbilek told me. “We use our hands instead of a keyboard or mouse, and it fits exactly how we behave and think in real life. In addition, the iPad interface looks easier for us, because it has larger-size text and bigger icons. It is less likely to cause cognitive overload to the user, based on my studies.”

This idea challenges the conclusions of web cynics like Nicholas Carr. In his new book, The Shallows, Carr draws on a plethora of studies that collectively conclude the internet is shattering our focus and rewiring our brains to make us shallower thinkers. However, these arguments may not apply to the newest wave of devices.

Though scientists haven’t had a chance to study the implications of the cleaner and more modern interfaces that we see on iPads, iPhones and Android smartphones, we can draw some inferences from previous studies on computer interfaces and brain activity.

In 2004, Demirbilek conducted a study on 150 students at the University of Florida to examine the effects of different computer windows interfaces on learning. He compared two interfaces: a tiled-windows interface, in which windows were displayed next to each other in their entirety, and an overlapping-windows interface, in which windows were laid on top of each other like a spread-out stack of paper.

Inside a computer lab, the participants were split into two groups, randomly assigned to work with either the tiled-windows interface mode or the overlapping-windows mode. Each mode contained a multimedia learning environment requiring the students to complete certain tasks. Demirbilek measured the students’ disorientation (how likely they were to get lost in a document) and their cognitive load (the total amount of mental activity imposed on working memory).

To measure disorientation, Demirbilek used each student’s Internet Explorer history file to record the number of informational “nodes” accessed to complete each task — in other words, the number of steps each user took before finishing an activity. For each task, a user was deemed either oriented or completely lost based on the number of nodes accessed.

To measure cognitive load, the students were timed on how long they took to react to different interactions. For instance, in one part of the study, the participants were required to click a button as soon as the background color of a window changed.
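To make those two measurements concrete, here is a minimal sketch of how scores like these could be computed from the logged data. The thresholds, data layout and function names are illustrative assumptions, not Demirbilek’s actual scoring.

# Illustrative sketch of the two measurements described above; the slack
# factor and sample numbers are made up, not the study's real parameters.

def disorientation(nodes_visited, optimal_nodes, slack=2.0):
    """A user counts as 'oriented' on a task if they reached the answer
    without visiting far more informational nodes than the shortest path
    requires."""
    return "oriented" if nodes_visited <= slack * optimal_nodes else "lost"

def mean_reaction_time(reaction_times_ms):
    """Cognitive-load proxy: the longer it takes to notice the background
    color change and click, the more working memory is presumed occupied."""
    return sum(reaction_times_ms) / len(reaction_times_ms)

# Example: one student in each condition.
print(disorientation(nodes_visited=5, optimal_nodes=4))    # tiled windows -> oriented
print(disorientation(nodes_visited=19, optimal_nodes=4))   # overlapping  -> lost
print(mean_reaction_time([410, 520, 480]))                 # lower = less load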

After completing his study, Demirbilek found that subjects using the tiled-windows interface were significantly less disoriented than subjects using an overlapping-windows interface. He also found that participants working with overlapping windows were substantially more likely to experience cognitive overload than those working with tiled windows.

As a result, students using the tiled-windows interface were able to find specific information more easily and engage with it more deeply, whereas students working with overlapping windows struggled to see how parts of a knowledge base were related and often omitted large pieces of information. In short, students using the tiled-windows interface learned considerably better than those working with overlapping windows.

“The tiled-windows interface treatment provided help to users, enabling them to efficiently communicate with the hypermedia learning environment,” Demirbilek wrote in his research paper.

Demirbilek’s conclusions don’t contradict Carr’s assertions, but they suggest that the gap where information is lost between short-term memory and long-term memory is not due solely to hyperlinking, but also to the disorienting nature of the interface used. Carr is correct that the traditional PC computing environment (such as Windows or Mac OS X), which uses an overlapping-windows interface, is conducive to shallower learning.

However, Carr’s cited studies focus on interfaces that will soon be out-of-date. Newer mobile devices such as the iPhone, iPad and Android smartphones abolish the traditional graphical user interface we’re accustomed to. Gone are the mouse pointer and the mess of windows cluttering our desktop. On these mobile technologies — especially the iPad with its bigger 9.7-inch display — all the emphasis is placed on the content, and each launched app completely takes over the screen. The only pointers are our fingers. And going forward, we can expect future tablet computers competing with the iPad to replicate the single-screen interface.

Additionally, as the number of touchscreen tablet users continues to grow, more web developers will feel pressured to scrap the busy website interfaces we’re accustomed to today. The drab, cluttered websites with squint-inducing boxes will be refreshed with large, touchable icons. Demirbilek and I agree that the iPad-driven tablet revolution is poised to improve user orientation and learning.

Of course, the iPad is less than a year old, and it has some work to do. By only displaying one app or one piece of content at a time, the iPad solves one problem while creating another.

A 1999 experiment on windows interfaces conducted by researchers at the University of Minnesota found that fourth-grade students using multiple windows were able to answer quiz questions more quickly and score significantly higher than students working with a single window.

They concluded that multiple windows, displayed in their entirety, helped users complete tasks where more than one source of information is needed to solve a problem.

The iPad’s single-screen interface reduces distraction and potentially enhances user orientation, but because it lacks windows, it also eliminates the ability to read information from multiple sources simultaneously on a single screen to complete more complex tasks. This shortcoming is what makes the iPad lacking as a productivity device. But problems like this can be solved over time with software updates.

And even though the iPad isn’t yet ideal for professionals, that’s just one audience for the device, Demirbilek said. He believes the iPad has already introduced an interface beneficial to learning, especially for children.

“I think that the interface of [the] iPad could work well for young children because it maps onto how kids already do things in their daily life,” he said. “Sweeping things across the screen fits exactly with how very young children behave and think.”

Brian X. Chen is the author of a book about the always-connected mobile future titled Always On, to be published in spring 2011 by Da Capo (Perseus Books Group).


Butler Robot Can Fetch Drinks, Snacks

Meet HERB, a robot from Intel’s research labs that can fetch drinks, get a pack of chips and sort dishes. HERB, or the Home Exploring Robotic Butler, is a project from Intel’s personal robotics group.

The robot sits on a tricked-out Segway base and has arms driven by cables, which allows it to be extremely dexterous. A spinning laser on top of the robot helps generate 3-D data so the robot can identify objects. There’s also a camera to help it “see.”
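The article doesn’t explain how the spinning laser’s readings become 3-D data, but the standard first step for a spun 2-D scanner is converting each (spin angle, beam angle, range) measurement into Cartesian coordinates. The sketch below shows only that conversion, with made-up numbers; it is a generic illustration, not HERB’s actual perception code.

# Generic sketch: turning spinning-laser range readings into a 3-D point cloud.
import numpy as np

def scan_to_points(spin_angles, beam_angles, ranges):
    """spin_angles: rotation of the scanner about the vertical axis (rad);
    beam_angles: elevation of each beam within a scan plane (rad);
    ranges: measured distances (m). All three arrays share the same shape."""
    spin = np.asarray(spin_angles)
    beam = np.asarray(beam_angles)
    r = np.asarray(ranges)
    x = r * np.cos(beam) * np.cos(spin)
    y = r * np.cos(beam) * np.sin(spin)
    z = r * np.sin(beam)
    return np.stack([x, y, z], axis=-1)  # (n, 3) point cloud

# Object identification would then cluster this cloud and match clusters
# against known object models; here we just print three converted readings.
print(scan_to_points([0.0, 0.1, 0.2], [0.05, 0.05, 0.05], [1.2, 1.2, 1.1]))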

“It (the robot) looks big but it will fit through most doorways,” says Siddhartha Srinivasa, an Intel researcher who is working on the project. “It’s about a foot longer than the human wingspan.”

Users can tell HERB what they need using an iPhone interface that the team built. There’s also a voice recognition program in the works, so you’ll be able to just tell the robot out loud what you want it to do.

The HERB project has been in the works for nearly four years. Intel showed the robot’s latest features at its annual research day fest on Wednesday.

HERB is just one of many robotics projects trying to teach machines how to do everyday tasks. Willow Garage, a Palo Alto, California-based startup, has a robot called PR2 that is being trained to sort laundry and fold towels.

The idea is to teach robots to go beyond carefully structured and repetitive tasks so they can move beyond factories.

Check out the video of HERB at work. HERB doesn’t move fast, but if you can just sit on the couch and have it bring you a bottle of beer, a few seconds’ delay shouldn’t bother you that much.


Photo: HERB/ Priya Ganapati


Intel Researchers Turn Counter Tops Into Touchscreens

A research project from Intel can turn any surface into a touchscreen. Instead of propping up a tablet or putting a touchscreen computer in your kitchen, picture yourself tapping on the counter top to pull up menus, look up recipes and add items to a shopping list.

“There’s nothing absolutely special about the surface, and it doesn’t matter if your hands are dirty,” says Beverly Harrison, a senior research scientist at Intel. “Our algorithm and a camera set-up can create virtual islands everywhere.”

Intel demoed the project during the company’s annual research-day fest Wednesday to show that touchscreens can go beyond computing devices and become a part of everyday life.

The project uses real-time 3-D object recognition to build a model of almost anything that’s placed on the counter and offer a virtual, touchscreen-based menu. For instance, when you put a slab of meat or a green pepper on the counter, they are identified, and a virtual menu that includes recipes for both is shown.

“The computer in real time builds a model of the color, shape, texture of the objects and runs it against a database to identify it,” says Harrison. “And it requires nothing special to be attached on the steak or the pepper.”
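Harrison doesn’t describe the matcher itself, but “runs it against a database” can be as simple as a nearest-neighbor lookup over feature vectors. The toy sketch below assumes precomputed (and entirely made-up) color/shape/texture descriptors; it illustrates the lookup step only, not Intel’s actual recognition pipeline.

# Toy sketch of the database-lookup step with made-up descriptors. A real
# system would compute color, shape and texture features from camera images.
import numpy as np

DATABASE = {
    # object name -> illustrative (color, shape, texture) descriptor
    "steak":        np.array([0.8, 0.2, 0.6]),
    "green pepper": np.array([0.1, 0.9, 0.3]),
    "coffee mug":   np.array([0.5, 0.5, 0.9]),
}

RECIPES = {"steak": ["pan-seared steak"], "green pepper": ["stuffed peppers"]}

def identify(descriptor):
    """Return the database entry whose descriptor is nearest to the query."""
    descriptor = np.asarray(descriptor)
    return min(DATABASE,
               key=lambda name: np.linalg.norm(descriptor - DATABASE[name]))

obj = identify([0.78, 0.25, 0.58])
print(obj, RECIPES.get(obj, []))  # -> steak ['pan-seared steak']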

Smartphones have turned touch into a popular user interface. Many consumers are happy to give the BlackBerry thumb a pass and instead swipe and flick their finger to scroll. New tablets are also likely to make users want to move beyond a physical keyboard and mouse.

But so far, touchscreens have been limited to carefully calibrated pieces of glass encased in the shell of a phone or computer.

Intel researchers say that won’t be the case in the future. An ordinary coffee table in the living room could morph into a touchscreen when you put a finger on it, showing a menu of music and videos to choose from. Or a vanity table in the bathroom could recognize a bottle of pills placed on it and let you manage your medications from there.

Some companies are trying to expand the use of touchscreens. For instance, Displax, based in Portugal, can turn any surface — flat or curved — into a touch-sensitive display by sticking a thinner-than-paper polymer film on that surface to make it interactive.

Intel’s research labs are trying to do away with the extra layer. Instead, researchers there have created a rig with two cameras: one captures an image of the objects and the other captures depth. The depth camera helps the system recognize objects and tell the difference between a hand touching the table and one hovering over it. A pico-projector beams the virtual menus onto the surface. The cameras and the pico-projector can be combined into a device just a little bigger than your cellphone, says Harrison. Sprinkle a few of these in different rooms, point them at tables, and the system is ready to go.
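Intel hasn’t published the algorithm, but the touch-versus-hover decision a depth camera makes possible usually comes down to comparing a fingertip’s measured depth with the calibrated depth of the surface itself. Here’s a minimal sketch of that comparison; the table depth, threshold and fingertip detection are all assumptions, not Intel’s parameters.

# Minimal sketch of a touch-vs-hover decision from a depth camera; all
# numbers are made up for illustration.
import numpy as np

TABLE_DEPTH_MM = 950.0     # calibrated distance from the camera to the counter
TOUCH_THRESHOLD_MM = 12.0  # a fingertip this close to the surface counts as touch

def classify_fingertip(depth_map_mm, fingertip_px):
    """depth_map_mm: 2-D array of per-pixel depths; fingertip_px: (row, col)
    of the detected fingertip. Returns 'touch' or 'hover'."""
    d = depth_map_mm[fingertip_px]
    return "touch" if TABLE_DEPTH_MM - d < TOUCH_THRESHOLD_MM else "hover"

# Example: a flat counter with one finger pressed down and one hovering above.
depth = np.full((480, 640), TABLE_DEPTH_MM)
depth[200, 300] = 945.0    # finger resting on the counter
depth[100, 500] = 880.0    # hand hovering about 7 cm above it
print(classify_fingertip(depth, (200, 300)))  # -> touch
print(classify_fingertip(depth, (100, 500)))  # -> hover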

At that point, the software program that Harrison and her team have written kicks in. The program, which can run on any computer anywhere in the house, helps identify objects accurately and create the virtual menus. Just make a wide sweeping gesture to push the menu off the counter and it disappears. There’s even a virtual drawer that users can pull up to store images and notes.

Harrison says all this will work on almost any surface, including glass, granite and wood.

“The key here is the idea requires no special instrumentation,” she says.

Still, it may be too early to make plans to remodel the kitchen to include this new system. The idea is still in the research phase, says Harrison, and it may be years before it makes it to the real world.

Photo: A counter top acts as a touchscreen display. (Priya Ganapati/Wired.com)



Smartphones With Intel Chips to Debut Next Year

Intel’s attempt to get inside cellphones will take just a little bit longer. Though the company had hoped to get smartphones with Intel chips into the hands of consumers this year, it is likely that the first phones powered by Intel will debut early next year.

Mobile handsets featuring Intel processors are likely to be shown at the Consumer Electronics Show next January or at the Mobile World Congress conference in February.

“That would clearly be the window of opportunity for us,” Intel CTO Justin Rattner told Wired.com.

In May, Intel showed its new chip for mobile devices, codenamed Moorestown. The company said the chip would be extremely power efficient while offering enough processing power for features such as video conferencing and HD video.

Though Intel’s chips power most desktops and notebooks, the company’s silicon is glaringly absent from the fast-growing category of smartphones and tablets. Worldwide, companies shipped 54.7 million smartphones in the first quarter of 2010, up 56.7 percent from the same quarter a year ago, estimates IDC. Most of today’s talked-about smartphones, from companies such as Motorola and HTC, are powered by chips based on the architecture of Intel rival ARM.

Intel has tried its hand at the phone-chip business before, with little success. In 2006, the company sold its XScale ARM-based division to Marvell. More recently, it tried to pitch its current generation of Atom processors to smartphone makers, but the chips weren’t adopted because they consumed too much power for phone use.

Moorestown processors can beat the competition, says Intel. Rattner hopes the chips will also go beyond smartphones and into tablets.

So far, Apple has sold more than 3 million iPads in just three months since the product’s launch. Apple uses its own chip for the iPad.

Rattner says tablets using Intel chips are on their way and will be available to consumers by the end of the year.

“Almost all the tablets at Computex (a trade show for PC makers held in Taiwan every year) were Intel-based devices,” he says. “There’s a tremendous amount of interest and activity in the tablet space.”

Yet Rattner says he is “cautious” in his hopes for the tablet market. Rattner does not own an iPad, but has an iPhone 3G S.

“A lot of people are saying that the tablet is the next netbook,” he says. “I am not so sure.” More than 85 million netbooks have been sold since the devices became popular about three years ago.

Netbooks appealed to consumers because of their price, portability and ability to offer a computing experience comparable to a notebook, says Rattner.

“With tablets, their utility remains to be seen,” he says. “The first generation of tablets, including [the iPad], are missing some important things. The absence of a camera is especially baffling in the iPad.”

The iPad may have its flaws, but for consumers it’s the only choice for now — unless you count the very flawed JooJoo.

Some tablet makers were waiting for Moorestown chips, but Intel has already started production and is getting the chips into manufacturers’ hands, says Rattner.

“Apple’s gotten everyone’s attention and they have set that bar,” he says. “For others now coming to market, they have to have something substantially more capable than the iPad and it is going to take time to get there.”

Photo: liewcf/Flickr



3D displays and haptic interfaces come together in HIRO III

The Kawasaki and Mouri Laboratory at Gifu University in Japan is researching and developing a touch interface that, combined with 3D displays, could offer a new way to simulate the touching of objects. HIRO III is a haptic interface robot that can provide realistic kinesthetic sensations to the user’s hand and fingers, while the 3D display provides the visual experience. Possible applications include medical diagnostics training, but for now, HIRO III is still in the lab. Interestingly, we’ve seen a very similar — albeit more scholastic — take on the same idea very recently. Hit the video below for a fuller look at this one.



3D displays and haptic interfaces come together in HIRO III originally appeared on Engadget on Tue, 29 Jun 2010 10:22:00 EDT.

Source: DigiInfo

Researchers create functioning human lung on a microchip

Researchers at Harvard University have successfully created a functioning, respirating human ‘lung’ on a chip in a lab. Made using human lung and blood vessel cells on a microchip, the translucent device is far easier to observe than an actual human lung (for obvious reasons), and it comes in a small, convenient package about the size of a pencil eraser. The researchers have demonstrated its effectiveness and are now moving toward showing its ability to replicate gas exchange between lung cells and the bloodstream. Down the road a bit more, the team hopes to produce other organs on chips and hook them all up to the already operational heart on a chip. And somewhere in the world, Margaret Atwood and her pigoons are rejoicing, right? Here’s to the future. A video description of the device is below.


Researchers create functioning human lung on a microchip originally appeared on Engadget on Mon, 28 Jun 2010 09:42:00 EDT.

Source: Harvard University; via Gizmag, Switched