Microsoft Research brings mid-air multitouch to Kinect (video)

Microsoft research project shows hand gesture control

Shortly after the Kinect SDK first launched, it spawned a number of inspired efforts from researchers to make the sensor do more than just track your body. Microsoft Research now seems to be catching up to its own tech: it just showed off a project that allows fine-grained gesture control, thanks to a newly developed ability to read whether your hand is open or closed. That let the team simulate multitouch-style input on a PC, air-painting basic images and manipulating Bing Maps by varying their hand states. The hardware used doesn’t appear to be stock, so whether the new capability points to a rumored new version of the Kinect that may or may not appear on a (rumored) future Xbox, we’ll leave for you to decide.

Via: NeoWin

Source: Microsoft Research

Forget creepy Intel: SHORE unlocks your face at a glance, and it’s already in use

If you thought Intel’s plans for a viewer-watching Web TV box were intrusive, you might want to bury your face in your hands (and leave it there permanently) after seeing Fraunhofer’s clever and creepy SHORE facial ID system. On show at CeBIT, SHORE can not only identify a face in a still image or real-time video stream, but also figure out gender, age, and even what mood the person is in: happy, surprised, angry, or sad. And while Intel’s home entertainment tracking system is still mired in controversy, Fraunhofer tells us commercial implementations of SHORE are already out in the wild.

In Fraunhofer’s demo, a computer running SHORE was able to identify and classify multiple people walking in and out of frame, with the results of the analysis floated over each person on a wall display. The measurements happen almost instantaneously – the research institute says SHORE can identify a face at 107.5fps if it’s directly facing the camera, while full analysis including facial expression detection runs at 45.5fps – and the system can handle head tilts of up to ±60 degrees and head rotation of up to ±90 degrees.

So far so good, but it’s the measurements, not the identification, that make SHORE so impressive. Fraunhofer claims a 91.5-percent accuracy rate for face detection and 94.3-percent for gender detection: by locating the face, the eyes, nose, mouth, and the rest of the facial shape, the system can judge how happy, sad, angry, or surprised a person is. The strength of each emotion is displayed on red bars: when we smiled, it accurately picked up on that, while widening our eyes boosted our “surprised” rating.
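Fraunhofer hasn’t published SHORE’s internals here, but the pipeline it describes – detect each face, then score attributes such as expression, gender, and age per detection – can be sketched with off-the-shelf tools. The snippet below is a rough stand-in using OpenCV’s stock Haar-cascade face detector; the score_face callback is hypothetical, a placeholder for SHORE’s proprietary classifiers rather than anything Fraunhofer ships.

# Rough sketch of a SHORE-style analysis loop using OpenCV's bundled face
# detector. The attribute-scoring step is a hypothetical placeholder.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def analyze_frame(frame, score_face=None):
    """Detect faces, then optionally run a per-face attribute scorer."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        crop = gray[y:y + h, x:x + w]
        # score_face would return something like
        # {"happy": 0.8, "surprised": 0.1, "gender": "f", "age": (38, 8)}
        attrs = score_face(crop) if score_face else {}
        results.append(((x, y, w, h), attrs))
    return results

cap = cv2.VideoCapture(0)          # live webcam feed, as in the CeBIT demo
ok, frame = cap.read()
if ok:
    for box, attrs in analyze_frame(frame):
        print(box, attrs)
cap.release()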

Patchier was the age detection, which gives an estimate with a confidence range (so, for instance, SHORE could decide you’re 38, give or take 8 years). That proved susceptible to ambient lighting: under strong ceiling lights, for instance, people wearing glasses were often judged to be much older, because the shadows the frames cast on their cheeks were mistaken for signs of old age.

Nonetheless, it’s a mighty impressive system all in all, not least because of the incredibly low minimum specifications. Fraunhofer says SHORE will run on a single core of an Intel Core 2 Duo 6420 processor, under Windows XP, with face detection working on anything down to an 8 x 8 pixel image (though you won’t get the more complex analysis at that size). It’ll also run on mobile devices, such as smartphones, and can operate either as a standalone system or integrated into another, more complex monitoring package.

That flexibility – and the fact that Fraunhofer is licensing the technology out, with the offer of customizing it to client needs – means the possibilities for implementation go far beyond, say, Intel’s proposed advertising tailoring on its Web TV box. Market research is an obvious one: a camera above a store window display could track the reactions of those glancing in, or tailor advertising playlists to the demographics of whoever is watching. Car dashboards could monitor drivers to ensure they stay alert and calm, as well as better track which person is giving which spoken command.

In hospitals, the degree of pain patients are suffering could be monitored automatically, helping painkillers be used more efficiently (and avoiding unnecessary suffering). Augmented reality games are another possibility, but Fraunhofer is also keen on the idea of using the SHORE technology to enhance “virtual actors” and “intelligent agents” for customer services and entertainment, reacting to those they are talking to, behaving appropriately for their mood, and even mimicking that mood themselves. In fact, Fraunhofer had a robotic head which, using a camera in the forehead, could replicate the viewer’s expressions with animated eyes, mouth, and other elements.

Behind the scenes, the magic is in the huge amount of training Fraunhofer has given the system, teaching it to recognize common patterns of mood and reaction from thousands of images of expressions. Based on the Facial Action Coding System (FACS), it allows the computer to calculate what each viewer is showing in a matter of milliseconds, even with dozens of people in the frame; Fraunhofer showed the camera a printout covered in face thumbnails – over a hundred of them, packed tightly together – and SHORE spotted them all and ran its mood analysis. The system has short-term memory, too: Fraunhofer tells us that faces aren’t stored long-term, but a shorter-term caching system can spot if a face was in-frame very recently and collate the data from each sighting. Each face gets a temporary ID code and a timer showing how long the person was attentive.
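That short-term memory is easy to picture in code: a cache of recently seen faces, each with a temporary ID and an attention timer, with entries expiring rather than being stored permanently. The sketch below is illustrative only – the is_same_face matching function and the 30-second window are assumptions, not details Fraunhofer has disclosed.

# Toy sketch of the short-term face memory described above: each face gets a
# temporary ID and an attention timer, and entries expire instead of being
# stored long-term. is_same_face stands in for whatever re-identification
# SHORE actually uses.
import time
import itertools

class ShortTermFaceCache:
    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self.entries = {}                # temp_id -> record
        self._ids = itertools.count(1)

    def observe(self, descriptor, is_same_face):
        now = time.monotonic()
        # Forget anything not seen recently -- the "short-term" part.
        self.entries = {i: e for i, e in self.entries.items()
                        if now - e["last_seen"] <= self.ttl}
        for temp_id, entry in self.entries.items():
            if is_same_face(descriptor, entry["descriptor"]):
                entry["last_seen"] = now
                entry["attention"] = now - entry["first_seen"]  # time attentive
                return temp_id
        temp_id = next(self._ids)        # new face: assign a temporary ID
        self.entries[temp_id] = {"descriptor": descriptor, "first_seen": now,
                                 "last_seen": now, "attention": 0.0}
        return temp_id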

Perhaps most alarming is the fact that this isn’t a piece of prescient, Minority Report-style research: SHORE is already out in the wild. Fraunhofer couldn’t tell us all of its clients, but it did confirm that market research firm GfK is using SHORE for its consumer surveys. There, participants simply allow the standard webcam on their computer to feed their expressions back to the server as they watch a series of commercials or other content. Meanwhile, there are SHORE installations already watching passers-by from within store display windows, though Fraunhofer wouldn’t say exactly which retailers are using it.

Meanwhile, you can try it for yourself: Fraunhofer offers a free trial version of SHORE to download as a proof of concept, which you can find here. The particularly paranoid might prefer to spend their time knitting balaclavas, however, as the possibility that you’re being watched, analyzed, and generally figured out by a machine running something like SHORE grows every day.


Hindenburg mystery solved 76 years later

Seventy-six years after the catastrophic explosion of the Hindenburg airship, the mystery of what caused the fatal accident has finally been solved, according to researchers. The accident happened on May 6, 1937, and killed 35 of the 100 passengers and crew members aboard the airship. According to the team of experts that has been researching the accident, static electricity was the real trigger.

The experts say the ship flew into a thunderstorm, resulting in a buildup of static electricity. That charge, combined with a broken wire or a sticking gas valve that leaked hydrogen into the ventilation shafts, set the stage for the explosion. The researchers say that when ground crew members went out to grab the landing ropes to secure the aircraft, they “earthed” the ship, causing a spark.

The fire that destroyed the aircraft is believed to have started at the tail, where the leaking hydrogen was ignited. While researching the destruction of the airship, the team experimented with scale models more than 24 meters long, which it says were used to rule out some of the competing theories about what caused the fatal accident.

Previous theories had included a bomb, and even odder ideas were dispelled as well, including one suggesting that the explosive properties of the paint used on the Hindenburg caused the blast. The 245-meter-long airship was preparing to land at Lakehurst Naval Air Station in Manchester Township, New Jersey, when the accident took place.

[via Daily Mail]


Mobile users predicted to download 70 billion apps this year

If there’s one thing to know about smartphone and tablet users, it’s that they download a lot of apps. I have 93 apps and games on my smartphone right now, and I don’t consider that a large amount by any means, so when you consider that every mobile user has at least a handful of apps on their device, it really adds up. Research firm ABI Research predicts that combined mobile app downloads will hit 70 billion by the end of this year.

Breaking that number down, it’s predicted that smartphone users will account for 56 billion of those app downloads, while tablet users will take the remaining 14 billion. As for the platform split, ABI predicts that Android will take the majority of downloads overall, with 58% of the pie, while iOS will dominate on tablets, claiming 75% of tablet app downloads.

Besides Android’s 58%, iOS will garner 33% of all app downloads in 2013, while Windows Phone and BlackBerry will get 4% and 3%, respectively. As for tablet apps, Android will represent 17% of all tablet apps downloaded, while Amazon’s tablets will take 4% and Windows tablets 2% — mere crumbs from the app pie.
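For a sense of scale, those percentages translate into rough absolute numbers. The quick calculation below assumes the 58/33/4/3 split covers all 70 billion downloads and the tablet split covers the 14 billion tablet downloads; the article leaves the remaining couple of percent in each case unaccounted for.

# Back-of-envelope math on ABI Research's 2013 projections quoted above.
TOTAL = 70e9                       # all app downloads
TABLET = 14e9                      # tablet share
SMARTPHONE = TOTAL - TABLET        # 56e9, matching the article

overall_share = {"Android": 0.58, "iOS": 0.33,
                 "Windows Phone": 0.04, "BlackBerry": 0.03}
tablet_share = {"iOS": 0.75, "Android": 0.17, "Amazon": 0.04, "Windows": 0.02}

for platform, share in overall_share.items():
    print(f"{platform}: ~{share * TOTAL / 1e9:.1f}B downloads overall")
for platform, share in tablet_share.items():
    print(f"{platform}: ~{share * TABLET / 1e9:.1f}B tablet downloads")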

Seventy billion app downloads is a lot, and it goes to show how mobile devices are slowly taking over. It’s also interesting that Android is predicted to get most of the pie this year: Google Play has had to play catch-up with Apple’s App Store for the past couple of years, but it looks like the Android app portal is finally gaining ground.


SpaceX Dragon to dock with International Space Station on Sunday

SpaceX’s Dragon capsule is officially confirmed to dock with the International Space Station on Sunday, March 3rd, beginning at 6:00 AM Eastern Standard Time (3:00 AM Pacific). The attachment should be completed around 10:00 AM EST / 7:00 AM PST. Dragon experienced some issues shortly after entering orbit, which resulted in a one-day delay to its arrival, but the ISS should be receiving its supplies come tomorrow.

Dragon will be bringing new equipment and supplies to the folks on the ISS, and it will carry some materials back to Earth as well. This is the third time Dragon has been launched, and it has about nine more missions to fly under SpaceX’s resupply agreement with NASA. The great thing about Dragon is that it’s reusable, so it has plenty more flights in it before it needs to be replaced.

The capture of Dragon will be handled by NASA Expedition 34 Commander Kevin Ford and NASA Flight Engineer Tom Marshburn, who will use the station’s robotic arm to grab the capsule. Mission Control in Houston will then install Dragon on the Earth-facing port of the Harmony module, and Flight Engineer Chris Hadfield will finish the job by sending the commands that bolt Dragon into place.

SpaceX states that there will not be another problem with Dragon’s thrusters, and that Dragon will return to Earth on its originally scheduled date, Monday, March 25th. Despite the issues that delayed its arrival at the ISS, SpaceX reports that everything is now operating normally. You can watch Dragon dock with the ISS through SpaceX’s live webcast, which will start streaming at 6:00 AM EST / 3:00 AM PST.

[via NASA]


SpaceX 2 Dragon struck by problems after reaching orbit [Updated]

SpaceX and NASA’s second Dragon resupply mission to the International Space Station successfully blasted off, but encountered unexplained issues roughly twelve minutes into the flight. Lifting off at 10:10 AM EST today to carry new equipment and supplies to the orbiting astronauts, the Dragon capsule – climbing at 1 km per second atop the Falcon 9’s cluster of nine first-stage engines – holds around 1,268 pounds of cargo and had been expected to dock with the ISS on Saturday, March 2. Update: More on the launch issues after the cut.

There, Expedition 34 Commander Kevin Ford and Flight Engineer Tom Marshburn of NASA had been expecting to snatch it from the sky with the station’s robotic arm. The exact nature of the problem is unclear at this point.

Three minutes and fourteen seconds in, the first stage detached – the assembly could be seen dropping away on SpaceX’s webcast – leaving the second stage to push the capsule further out of the atmosphere. Nine and a half minutes after launch, Dragon had reached orbit, with the capsule separating from the second stage around 45 seconds later.

However, a few minutes after that point, the launch veered from the original plan. The video stream switched from Dragon back to the second stage, and then SpaceX cut the webcast, with a spokesperson saying that an unexpected problem had affected the capsule and that the team was working to figure out what had happened.

As well as food and other essentials for the ISS crew, the Dragon capsule is packed with scientific experiments, including both biological and physics tests. On the biology side, there’ll be experiments to see how plant cells react in low-oxygen environments, as well as in microgravity, which NASA says will be instrumental in developing potential food sources for longer trips, such as to Mars.

On the physics side, there’ll be tests to see how molten metals solidify in microgravity, which could potentially open the door to new types of materials. Procter & Gamble is also funding some research, into how microscopic particles clump and gather in liquids and gels.

SpaceX and NASA will hold a press conference in several hours’ time to discuss the issues Dragon is facing.

Update: We’re hearing that the problem is that the solar panels on the Dragon capsule did not unfurl as expected, though we’re yet to see official confirmation on that from either NASA or SpaceX.

Update 2: SpaceX’s Elon Musk has tweeted that there is an “Issue with Dragon thruster pods. System inhibiting three of four from initializing. About to command inhibit override.”

Update 3: SpaceX has given us the following statement:

“One thruster pod is running. Two are preferred to take the next step which is to deploy the solar arrays.  We are working to bring up the other two in order to plan the next series of burns to get to station.”

Update 4: Elon Musk has tweeted that “Thruster pod 3 tank pressure trending positive” and that SpaceX is “preparing to deploy solar arrays.”

Update 5: “Solar array deployment successful” Elon Musk has tweeted.

Update 6: SpaceX gave us the following follow-up statement on its progress:

“Falcon 9 lifted off as planned and experienced a nominal flight. After Dragon achieved orbit, the spacecraft experienced an issue with a propellant valve. One thruster pod is running. We are trying to bring up the remaining three. We did go ahead and get the solar arrays deployed. Once we get at least 2 pods running, we will begin a series of burns to get to station.”


Microsoft’s Future Vision has you clawing at the walls

Microsoft is gazing into the crystal ball again, with a new Future Vision concept video showing how we’ll “Live, Work, Play” five to ten years in the future, and you can apparently expect to be bathed in the glow of a million displays. The concept, the product of Microsoft’s freshly revamped “Envisioning Center”, describes a touch- and voice-controlled home where media can be swiped, flicked, and generally shifted from room to room; your kitchen can identify the vegetables you’re holding; and every footstool will be equipped with its own Surface.

Concept videos are like catnip to designers and futurologists, who generally can’t resist peering into the future and guesstimating what sort of technology might be around. Microsoft’s vision, at least, seems pretty realistic: the huge touch display walls and projected interfaces are certainly possible – albeit prohibitively expensive for most – and the object recognition and media streaming could also be enabled with today’s gadgetry.

However, we’re still quite a way off from having everything interoperate in the way Microsoft shows. Various companies are showing tablets, phones, and smart TVs that link together for easier media placeshifting, but the systems are still in their infancy, and often only work successfully if every part of your consumer tech line-up comes from the same brand.

“While none of these ideas are meant to be predictive about our products,” Microsoft’s Steve Clayton says, “they do highlight some of the key trends we’re investing in, such as machine learning and NUI.” Whether the timescale the company has suggested will prove realistic remains to be seen.

[via Long Zheng]


New Tablet Camera Tech Conjures an Invisible Keyboard

Relying on your tablet’s on-screen keyboard saves you from having to carry clunky accessories, but it also gobbles up a good chunk of usable screen real estate. So Fujitsu researchers are working on a happy medium that uses the tablet’s camera to track your finger movements on a desk, as if you were typing away on an invisible keyboard.
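Fujitsu hasn’t detailed its algorithm, but the usual recipe for this kind of camera keyboard is to segment the hand in each frame, locate the fingertip, and map its position onto a virtual key grid. The sketch below is a simplified illustration along those lines using OpenCV – the skin-color thresholds and the three-row key layout are invented for the example, not taken from Fujitsu’s work.

# Illustrative camera-keyboard sketch (not Fujitsu's method): segment the hand
# by a rough skin-color range, take the topmost contour point as the fingertip,
# and map that point onto an assumed three-row key grid. OpenCV 4 API.
import cv2
import numpy as np

KEYS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]   # assumed key layout

def fingertip(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60], np.uint8),
                       np.array([20, 150, 255], np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    x, y = min(hand[:, 0, :], key=lambda p: p[1])   # topmost point of the hand
    return int(x), int(y)

def key_at(point, frame_shape):
    if point is None:
        return None
    h, w = frame_shape[:2]
    row = min(int(point[1] / h * len(KEYS)), len(KEYS) - 1)
    col = min(int(point[0] / w * len(KEYS[row])), len(KEYS[row]) - 1)
    return KEYS[row][col]

A real implementation would also need a reliable way to tell a hovering finger from an actual keypress, which is the hard part of the problem.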

Brain linked rats pave way for Gibson-esque meat crowd-computer

Technology that creates a direct link between the brains of two rats, allowing the behavior of one animal to shape that of the other – even when they are thousands of miles apart – could pave the way to cognitive crowd-sourcing, researchers suggest. In the experiment, microelectrodes one-hundredth the thickness of a human hair were inserted into the parts of the rats’ brains that handle motor information; one rat was rewarded for hitting a specific lever in its cage, and then remotely tutored its counterpart – via direct stimulation of the second animal’s motor cortex – to select the correct lever in its own, separate cage.

The system essentially learned from the electrical activity in the “encoder” rat’s brain as it figured out which of the levers in its cage to press, and then stimulated the “decoder” rat’s brain with matching impulses. The second rat did eventually work out the right lever on each test, achieving a roughly 70-percent success rate, but it wasn’t an instantaneous process.
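As a loose illustration of that encode-stimulate-decode loop – and of how a roughly 70-percent transfer rate might be measured – here is a toy numerical simulation. It models the encoder’s recorded activity as a single noisy value and the decoder’s response as a simple threshold, a drastic simplification of the real cortical recordings and stimulation patterns.

# Toy simulation of the encoder/decoder loop described above -- not the actual
# neural decoding, just the shape of the experiment: read the encoder's choice
# as a noisy signal, "stimulate" the decoder with it, and score how often the
# decoder picks the same lever.
import random

def encoder_activity(lever):
    """Pretend firing pattern biased toward the lever the encoder pressed."""
    bias = 1.0 if lever == "left" else -1.0
    return bias + random.gauss(0, 1.9)   # noise level chosen to give ~70% transfer

def decoder_choice(stimulation):
    """Decoder picks a lever based on the stimulation it receives."""
    return "left" if stimulation > 0 else "right"

trials, hits = 1000, 0
for _ in range(trials):
    lever = random.choice(["left", "right"])
    signal = encoder_activity(lever)       # recorded from the encoder rat
    if decoder_choice(signal) == lever:    # delivered to the decoder rat
        hits += 1
print(f"decoder accuracy: {hits / trials:.0%}")   # roughly 70%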

Instead, it took the scientists around 45 days – with the rats practicing for an hour each day – before the decoder animal became proficient. That appeared to be a sudden switch in understanding, however, rather than a gradual familiarization: “there is a moment in time when … it clicks,” said Professor Miguel Nicolelis of Duke University Medical Center in North Carolina, where the research took place.

“Suddenly, the [decoder] animal realizes ‘Oops! The solution is in my head. It’s coming to me’ and he gets it right,” the scientist says. To help that process along, the encoder rat was denied a treat whenever the decoder rat picked the wrong lever, a feedback system that encouraged sharper thoughts from the tutoring animal.

Although the current system uses a pair of rats – at times linking Duke University with a counterpart lab in Brazil – the scientists are already working on a version that will combine the thoughts of multiple animals. “You could actually have millions of brains tackling the same problem and sharing a solution,” Nicolelis suggests, opening the door to a crowd-sourced problem-solving engine of sorts.

“It is important to stress that the topology of BTBI [Brain-to-Brain Interface] does not need to be restricted to one encoder and one decoder subjects. Instead, we have already proposed that, in theory, channel accuracy can be increased if instead of a dyad a whole grid of multiple reciprocally interconnected brains are employed. Such a computing structure could define the first example of an organic computer capable of solving heuristic problems that would be deemed non-computable by a general Turing-machine.” – Professor Miguel Nicolelis, Duke University Medical Center, North Carolina

Nicolelis and his team also predict that one day – albeit a day several decades off – humans will be able to communicate and learn in this fashion, though it will take some clever cabling to actually make it practical. Currently, the microelectrodes require direct contact with points within the brain; while non-invasive brain monitoring equipment exists, it’s insufficiently precise for these purposes.

According to Nicolelis, the next stage of the research is to work on the crowd-crunching potential of the system, and to measure its computational potential against more traditional systems.

[via BBC]

