Leap Motion Lays Off 10% Of Its Workforce After Missing On First Year Sales Estimates

Leap Motion won a lot of buzz early on for its motion controller, which is designed to let users interact with their computer through gestures alone. That buzz and pre-order interest led to rapid growth, with the company swelling to 120 employees at its peak. But disappointing reviews when the hardware actually shipped took some of the wind out of the startup’s…

Oculus Rift-based virtual reality game could help restore 3D vision (video)


Many will tell you that video games are bad for your eyes, but James Blaha doesn’t buy that theory. He’s developing a crowdfunded virtual reality title, Diplopia, that could help restore 3D vision. The Breakout variant trains those with crossed-eye conditions (strabismus) to coordinate their eyes by manipulating contrast; players score well when their brain merges two images into a complete scene. Regular gameplay could noticeably improve eyesight for adults who previously had little hope of recovering their depth perception, Blaha says. The potential solution is relatively cheap, too — gamers use an Oculus Rift as their display, and they can add a Leap Motion controller for a hands-free experience. If you’re eager to help out, you can pledge $20 to get Diplopia, and $400 will bundle the app with an Oculus Rift headset. Check out a video demo of the therapeutic game after the break.
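The therapy’s core trick, as described, is showing each eye a different contrast level so the suppressed eye has to contribute before the scene fuses. Here’s a minimal sketch of that idea; the function names, factors and pixel values are ours for illustration, not anything from Blaha’s actual game:

```python
def adjust_contrast(pixels, factor, midpoint=128):
    """Scale 8-bit pixel values around a midpoint; factor < 1 lowers contrast."""
    return [max(0, min(255, round(midpoint + (p - midpoint) * factor)))
            for p in pixels]

def render_stereo(pixels, strong_eye_factor=0.4, weak_eye_factor=1.0):
    """Show a low-contrast image to the dominant eye and a full-contrast
    image to the suppressed eye, so the brain must fuse both to see the scene."""
    return (adjust_contrast(pixels, strong_eye_factor),
            adjust_contrast(pixels, weak_eye_factor))

# One scanline of a toy image, rendered for each eye:
left, right = render_stereo([0, 64, 128, 192, 255])
```

As the player improves, the contrast gap between the two eyes would presumably be narrowed until both see the scene at full strength.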


Via: Hack A Day

Source: Indiegogo

Elliptic Labs releases ultrasound gesturing SDK for Android, will soon integrate into smartphones

Elliptic Labs has already spruced up a number of tablets by adding the ability to gesture instead of making contact with a touchpanel, and starting this week, it’ll bring a similar brand of wizardry to Android. The 20-member team is demoing a prototype here at CEATEC in Japan, showcasing the benefits of its ultrasound gesturing technology over the conventional camera-based magic that already ships in smartphones far and wide. In a nutshell, you need one or two inexpensive (under $1 a pop) chips from Murata baked into the phone; from there, Elliptic Labs’ software handles the rest. It allows users to gesture in various directions with multiple hands without having to keep their hands in front of the camera… or atop the phone at all, actually. (To be clear, that box around the phone is only there for the demo; consumer-friendly versions will have the hardware bolted right onto the PCB within.)

The goal here is to make it easy for consumers to flip through slideshows and craft a new high score in Fruit Ninja without having to grease up their display. Company representatives told us that existing prototypes were already operating at sub-100ms latency; for a bit of perspective, most touchscreens can only claim ~120ms response times. Elliptic Labs is hoping to get its tech integrated into future phones from the major Android players (you can bet that Samsung, LG, HTC and the whole lot have at least heard the pitch), and while it won’t ever be added to existing phones, devs with games that could benefit from a newfangled kind of gesturing can look for an Android SDK to land in the very near future.
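The underlying ranging principle is standard echo timing: an ultrasonic pulse reflects off the hand, and half the round-trip time multiplied by the speed of sound gives the distance. A rough sketch of that math (the `classify_motion` helper is purely illustrative, not Elliptic Labs’ API):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def echo_to_distance(echo_delay_s):
    """Convert a round-trip echo time into the one-way distance to the hand."""
    return SPEED_OF_SOUND * echo_delay_s / 2

def classify_motion(distances):
    """Crude gesture guess from successive distance samples (hypothetical)."""
    if distances[-1] < distances[0]:
        return "approach"
    if distances[-1] > distances[0]:
        return "retreat"
    return "hold"

# A hand about 17 cm above the phone produces roughly a 1 ms round trip:
d = echo_to_distance(0.001)
motion = classify_motion([0.30, 0.25, 0.20])
```

At these distances a full echo takes around a millisecond, which helps explain how the prototypes can stay comfortably under the claimed 100ms end-to-end latency.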

Mat Smith contributed to this report.


Source: Elliptic Labs

Apple Patents iOS Unlocking Methods That Determine Level Of User Access To Device Features And Software

A big request from parents regarding iOS has been that Apple implement user accounts on its mobile devices, so that a parent can sign in with greater access to device features and apps than a child, for instance. It’s a system that already has an analogue on the desktop, and one Google has seen fit to implement with multiple user accounts with varying levels of permissions on tablets running Android 4.3 and higher. A new patent granted to Apple today (spotted by AppleInsider) describes a way of changing device access depending on who’s doing the accessing.

Apple’s newly awarded patent describes a system wherein the gesture a user makes to unlock a device determines which apps and hardware functions are made available. So, for instance, one gesture (such as drawing a specific shape or letter with a fingertip) might allow access only to games content on the phone, while another could offer up access to an entire category of apps provided through corporate deployment, but not to other features.

The system Apple has patented also allows for gestures to unlock the phone directly into specific apps, so that one could launch the email app and keep a user within that bit of software exclusively, for instance. Other incarnations could limit access to certain phone features, including the camera and mic, or to in-app purchases, locking down a device for worry-free sharing with a child.

Aside from effectively enabling a “guest mode” on a device, this patent in action would allow Apple to build a lockscreen launcher that can be operated not only via gestures, but also by voice and by keyboard, mouse or stylus events (all of which are covered by the patent). The potential applications, not only among parents but also in schools, secure enterprise environments and more, are extensive, so hopefully this is one of the patents that Apple actually puts into practice.

Homebrew Kinect app steers Chromecast streams through gestures (update: source code)

Chromecast may deliver on promises of sending wire-free video to TVs, but it’s not hands-free — or at least, it wasn’t. Leon Nicholls has unveiled a homemade Kinect app for the desktop that gives him gesture-based control of videos playing through Google’s streaming stick. While there are just two commands at this point, Nicholls hopes to open-source the code in the near future; this isn’t the end of the road. If you can’t wait that long, though, there’s a quick demonstration available after the break.

Update: A few days later, Nicholls has posted the source code for his project; you’ll need to whitelist your Chromecast for development to use it.
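For flavor, here is a toy dispatcher in the same spirit: a recognized Kinect gesture toggles the stream between play and pause. This is a hypothetical sketch (the gesture name, command strings and `send` stub are ours), not Nicholls’ actual code or the real Cast protocol:

```python
class CastController:
    """Map recognized gestures to media commands for a cast session."""

    def __init__(self):
        self.playing = True
        self.log = []  # record of commands "sent" to the stick

    def send(self, command):
        # Stand-in for the real network message to the Chromecast.
        self.log.append(command)

    def on_gesture(self, gesture):
        # A single raised-hand gesture toggles playback state.
        if gesture == "raise_hand" and self.playing:
            self.send("PAUSE")
            self.playing = False
        elif gesture == "raise_hand":
            self.send("PLAY")
            self.playing = True

ctrl = CastController()
ctrl.on_gesture("raise_hand")  # pauses the stream
ctrl.on_gesture("raise_hand")  # resumes it
```

With only two commands in the demo, a simple toggle like this covers the whole interaction; richer gesture vocabularies would grow the dispatch table rather than the structure.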


Source: Leon Nicholls (Google+)

Gesture In The Picture, As Intel Picks Up Omek But PrimeSense Dismisses Apple Acquisition Rumors

Yet more exits for Israeli startups: the latest two developments are a throwback to the hardware and engineering muscle that raised the region’s tech profile in the first place, before the Wazes of the world got us thinking of Israel as a hotbed of consumer internet companies.

Today, reports leaked out, and we have now confirmed, that Intel has acquired Omek Interactive, a company it had already invested in that makes technology for gesture-based interfaces. At the same time, Israeli publication Calcalist is reporting that Apple is circling around PrimeSense, another developer of gesture-based technology, whose work is used in Microsoft’s Kinect. Together, the moves could be a sign that gesture-based controls may become even more prevalent.

The Apple/PrimeSense talk, however, appears to be premature, if not altogether inaccurate. Calcalist’s report notes that it is based around some meetings between the two companies, and that the price for the deal would be around $280 million. But a source at the company described the report as “BS.”

This is “journalist delusion based on unverified and twisted hints,” the source added, also questioning the valuation: “280M? Come on! We’re worth 10 times that.” Up to now, PrimeSense has raised nearly $30 million from investors including Gemini Israel Funds, Canaan Partners, Genesis Partners and Silver Lake Partners, and bills itself as “giving digital devices the gift of sight.”

Meanwhile, we have contacted Omek, where the person we tracked down on the phone giggled (yes) and then referred us to Intel for any questions.

We have yet to hear back from Intel or its investing arm, Intel Capital. A post on Haaretz notes the deal actually concluded last week. Haaretz has also managed to get a confirmation directly from Intel: “The acquisition of Omek Interactive will help increase Intel’s capabilities in the delivery of more immersive perceptual computing experiences,” the statement says.
Update: Intel has confirmed to me that the transaction has closed. In addition to the same statement it gave Haaretz, an Intel spokesperson added it’s not confirming the value of the deal, and “we are also not disclosing the timelines on future products that integrate this technology.”

The reported value of Intel’s deal for Omek is between $30 million and $50 million. Without actually hearing from Intel on the details, for now there appear to be a few lines of thinking behind why Intel is going beyond being simply a strategic investor. (Omek has raised $13.8 million to date, with $7 million of that coming from Intel Capital.)

The first of these — as explained in a story in VentureBeat, which first reported talks between the two in March of this year — is that Omek may have been in the market to raise more money and that it chose the exit route instead of going it alone.

Another is that Intel wants the technology as part of its bigger moves into 3D visualization and “perceptual computing”, Intel’s catch-all term for gesture, touch, voice, and other AI-style sensory technologies. This is also the subject of a $100 million investment fund Intel launched in April.

And a third is more mundane and cynical, and potentially true regardless of Intel’s wider, more airy ambitions. The blog GeekTime suggests that this is a hardware play: Intel wants Omek for technology that it can embed into chips. The more functionality it can add that will drive new purchases of those chips by device makers, the better:

“The search for worthy power eating technologies to justify the need for yearly chip version upgrades is an integral part of the hardware industries market management strategy,” it writes. “Device companies must be convinced of the need to design their products to support the more expensive vanguard models of the processing world, placing the need for innovation above price point, and even quality in some cases.”

Whether or not the PrimeSense news is accurate, 9to5Mac makes a convincing argument for how the startup’s intellectual property could fit in with IP Apple already holds, and with Apple’s bigger ambitions to develop products that take it further into the living room, specifically with Apple TV.

And that, in the end, seems to be the crux of today’s news as well. However you cut it, and whoever ends up controlling it (in the tech sense), gesture is increasingly coming into focus, and it will soon let us get machines to do our bidding with the wave of a hand or a finger.

Google updates Gesture Search, now recognizes over 40 languages

Gesture lovers and polyglots rejoice! Yesterday, Google updated Gesture Search for Android phones and tablets, making it compatible with even more languages. The app provides quick access to music, contacts, applications, settings and bookmarks — to name a few — by letting users simply draw characters on the screen. It now recognizes over 40 languages and even handles transliteration, which comes in handy in Chinese, for example, where some native characters require more strokes than their Latin equivalents. Gesture Search started life as a Google Labs project back in March 2010 and received several tweaks over the years, including tablet support last fall. So go ahead: download the latest version from the Play Store and swipe away.
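To illustrate how transliterated matching can work, here’s a tiny sketch in which a drawn Latin prefix matches entries either directly or through a transliteration table. The table and function names are made up for the example; this is not Google’s implementation:

```python
# Invented transliteration table mapping native-script names to Latin forms.
TRANSLITERATIONS = {"北京": "beijing", "東京": "tokyo"}

def matches(drawn_prefix, entry):
    """An entry matches if its name or its transliteration starts with the prefix."""
    candidates = [entry.lower(), TRANSLITERATIONS.get(entry, "")]
    return any(c.startswith(drawn_prefix.lower()) for c in candidates)

def search(drawn_prefix, entries):
    return [e for e in entries if matches(drawn_prefix, e)]

# Drawing "b", then "e" narrows the list to both Latin and Chinese hits:
hits = search("be", ["Beethoven", "北京", "Bach"])
```

The payoff described in the article falls out naturally: a user can reach a many-stroke Chinese entry by drawing just a couple of Latin characters.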


Source: Google (Google+)

Panasonic Eluga P P-03E takes on Samsung with its own air gestures (video)

Don’t think that the Galaxy S 4 has a lock on the concept of touch-free input. Panasonic has bolstered NTT DoCoMo’s summer lineup with the Eluga P P-03E, a 4.7-inch Android phone whose one-handed interface can involve even less finger contact than Samsung’s flagship. Its central Touch Assist feature lets owners unlock their phone, answer calls, preview content and enter text by hovering a digit just above the glass. The handset is no slouch outside of its signature trick, either — it carries a 1080p LCD, a 1.7GHz Snapdragon 600 processor, 32GB of expandable storage and a sizable 2,600mAh battery. Japanese customers will have their chance at Panasonic’s above-the-screen magic in late June, although we wouldn’t count on the Eluga P reaching the US anytime soon.


Via: The Next Web

Source: Panasonic (translated)

Hands-on redux: Creative’s Interactive Gesture Camera at IDF 2013 Beijing (video)

At IDF 2013 in Beijing, Intel is again making a big push for perceptual computing by way of voice recognition, gesture control, face recognition and more, and to complement its free SDK for these functions, Intel’s been offering developers a Creative Interactive Gesture Camera for $149 on its website since November. For those who missed it last time, this time-of-flight depth camera is essentially a smaller cousin of Microsoft’s Kinect sensor, with the main difference that it’s designed for closer proximity and can therefore also pick up the movement of each finger.

We had a go with Creative’s camera and some fun demos — including a quick level of gesture-based Portal 2 made with Intel’s SDK — and found it to be surprisingly sensitive, but we have a feeling it would’ve been more fun if the camera were paired up with a larger display. Intel said Creative will be commercially launching this kit at some point in the second half of this year, and eventually the same technology may even be embedded in monitors or laptops (remember Toshiba’s laptops with Cell-based gesture control?). Until then, you can entertain yourselves with our new hands-on video after the break.
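As a toy illustration of why close-range depth data makes finger tracking possible, consider thresholding a depth map for pixels near the sensor. The grid, distances and threshold below are invented, and real SDKs do far more (segmentation, skeletal models), but the first step looks roughly like this:

```python
def near_pixels(depth_map, max_mm=400):
    """Return (row, col) coordinates of pixels within max_mm of the sensor.

    At close range, the nearest blobs in a depth map are usually
    fingertip candidates; a zero depth reading means 'no measurement'.
    """
    return [(r, c)
            for r, row in enumerate(depth_map)
            for c, d in enumerate(row)
            if 0 < d <= max_mm]

# A tiny fake depth frame (millimeters); two fingertips near the camera,
# background at about 90 cm.
depth = [
    [900, 900, 350],
    [900, 380, 900],
    [900, 900, 900],
]
tips = near_pixels(depth)
```

Because the camera is tuned for short distances, fingers stand out from the background by tens of centimeters, which is why per-finger tracking is feasible here but not on a living-room Kinect.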

Source: Intel

Xbox 720 Next-Gen “Console” to Be Worn on Wrist, Renamed XWatch

There’s been rampant speculation about what the new Xbox 720 (codenamed “Durango”) video game console might have in store for us later this year, and now we have some answers. As more and more companies jump on the smartwatch bandwagon, it’s been revealed that the next-gen Xbox won’t be a console in the traditional sense at all. Instead, the entire gaming system will be worn on your wrist, now dubbed the XWatch.

With the console on its player’s wrist, you’ll be able to play games anywhere you go. For multiplayer gaming, each player will need to wear their own XWatch, but there will no longer be a need for a Kinect, as the watch itself acts as the gesture controller for games. Guess this is how they’ll solve the problem of detecting more than four players, as has been previously rumored.

In addition to acting as both game console and controller, the XWatch will let you play games on the go: its liquid-crystal display will be capable of playing a variety of old-school games when not connected to your TV or a network.

There’s not much more detail known yet on the XWatch, but I’d expect we’ll find out more in the coming weeks and months as the next-gen console wars heat up.

UPDATE: Happy April Fool’s Day!