Leap Motion starts expanded beta, opens dev portal to the public, shows off Airspace app store (hands-on)

Slowly but surely, Leap Motion is making its way toward a commercial release. Today, the company announced it’s moving into the next phase of beta testing and that it will open its developer portal to the public later in the week. While this still won’t get folks a Leap device any faster, it will let them dig into Leap’s tools and code base in preparation for when they finally get one. The move marks a shift from the company’s previous SDK-focused beta to a consumer-focused one that’ll serve to refine the UX on Windows and OS X. Within each operating system, there will be two levels of Leap control: basic, which essentially lets you use Leap in place of a touchscreen, and advanced, which enables richer 3D controls by exploiting Leap’s ability to detect the pitch and yaw of hands in space.
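
Those two tiers hinge on Leap reporting full 3D orientation rather than just a 2D cursor position. As a rough illustration of what "pitch and yaw of hands in space" boils down to mathematically, here's a short Python sketch; the coordinate convention and function are our own assumptions for illustration, not the actual Leap SDK:

```python
import math

def pitch_yaw(direction):
    """Derive pitch and yaw (in degrees) from a hand's direction vector.

    Assumed convention: the sensor looks along -z, +x is right, +y is up.
    This mirrors how orientation is commonly derived from a direction
    vector, but it is an illustrative sketch, not Leap's actual API.
    """
    x, y, z = direction
    pitch = math.degrees(math.atan2(y, -z))  # fingers tilted up/down
    yaw = math.degrees(math.atan2(x, -z))    # fingers swung left/right
    return pitch, yaw

# A hand pointing straight at the screen:
print(pitch_yaw((0.0, 0.0, -1.0)))  # → (0.0, 0.0)
```

An advanced-mode app could map those angles onto, say, camera rotation, while a basic-mode app would only consume the projected 2D position, touchscreen-style.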

CEO Michael Buckwald gave us this good news himself, and also gave us a preview of Airspace, Leap’s app store, and a few app demos for good measure. As it turns out, Airspace is a two-pronged affair — Airspace Store is a showcase for all software utilizing the Leap API, while Airspace Home is a launcher that keeps all the Leap apps you own in one convenient place. There will be 50 apps in Airspace at the start of the beta, with offerings ranging from pro tools and utility apps to casual games, and we got to see a few examples.

eyeSight software uses standard cameras to power 3D gesture controls (video)

Turning regular ol’ devices into motion-activated wonders is all the rage these days, and a company called eyeSight is determined to stand out from the pack. The brains behind eyeSight claim to have developed a purely software-based solution for equipping PCs, TVs and mobile devices with 3D gesture controls using existing standard cameras. It sounds like a pretty sweet deal, but it all comes down to whether or not eyeSight can deliver on its potential. If it can, then it could be a promising sign that gesture-controlled technology is on its way to becoming more accessible for budget-conscious consumers, since a software setup would negate the need for costly hardware. Currently, the platform is limited to developer SDKs, but you can watch an eyeSight-powered Google Earth demo after the break.
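
eyeSight hasn't detailed its algorithms, but the simplest purely software-based motion cue needs nothing more than two consecutive camera frames. Here's a hypothetical frame-differencing sketch in Python — the flat-list frame format and threshold are our assumptions, not anything eyeSight has disclosed:

```python
def motion_energy(prev_frame, frame, threshold=20):
    """Count pixels whose brightness changed by more than `threshold`
    between two grayscale frames (flat lists of 0-255 values).

    A crude, software-only motion cue: no depth sensor or special
    hardware required, just an ordinary camera feed.
    """
    return sum(1 for a, b in zip(prev_frame, frame) if abs(a - b) > threshold)

still = [100] * 16
moved = [100] * 12 + [200] * 4  # four pixels brightened: something moved
print(motion_energy(still, moved))  # → 4
```

A real system would go much further (tracking the moving region over time to classify swipes, waves and so on), but the appeal is the same: everything happens in software on frames the device already captures.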

PMD and Infineon to enable tiny integrated 3D depth cameras (hands-on)

After checking out SoftKinetic’s embedded 3D depth camera earlier this week, our attention was brought to a similar offering coming from Germany’s PMD Technologies and Infineon. In fact, we were lucky enough to be the first publication to check out their CamBoard Pico S, a smaller version of their CamBoard Pico 3D depth camera that was announced in March. Both reference designs are already available in low quantities for manufacturers and middleware developers to tinker with over USB 2.0, so the two companies had some samples up and running at their demo room just outside Computex.

WiSee uses WiFi signals to detect gestures from anywhere in your house (video)

Have you always dreamed of controlling your TV by flailing in the next room? Researchers at the University of Washington have just the system for you: WiSee, a gesture-recognition interface that uses WiFi to control things like sound systems and temperature settings. Since WiFi signals are capable of passing through walls, WiSee can detect gestures made from neighboring rooms, breaking free from the line-of-sight method relied on by devices like Kinect and Leap Motion. Unlike those two, WiSee doesn’t require an additional sensor; the software can theoretically be used with any WiFi-connected device and a router with multiple antennas to detect Doppler shifts created by movement. The prototype was tested in both an office environment and a two-bedroom apartment, and the team reported 94 percent accuracy with a set of nine distinct gestures. If you watch the video, embedded after the break, you’ll notice that each user performs an identifying motion prior to the control gesture. It’s a trick the team picked up from studying Kinect’s solution for distinguishing between specific individuals in crowded rooms. Intrigued? Head over to the source link to read the report in full.
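
The physics behind those Doppler shifts is easy to state, even if WiSee's actual signal processing is far more involved: a body moving at hand speed perturbs a reflected 2.4GHz WiFi signal by only a few hertz, which is exactly why the receiver needs careful analysis to pick the shifts out. A quick back-of-the-envelope sketch in Python:

```python
def doppler_shift_hz(freq_hz, velocity_mps, c=3.0e8):
    """Frequency shift of a signal reflected off a moving body.

    The reflection doubles the shift: f_d = 2 * v / c * f.
    Positive velocity means motion toward the receiver.
    """
    return 2.0 * velocity_mps / c * freq_hz

# A hand moving at 0.5 m/s in a 2.4 GHz WiFi field:
shift = doppler_shift_hz(2.4e9, 0.5)
print(round(shift, 1))  # → 8.0 (just a few hertz)
```

Detecting a shift that small against a carrier in the gigahertz range is the hard part; the gesture vocabulary then comes from the pattern of positive and negative shifts a motion produces over time.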

Via: The Verge

Source: University of Washington

SoftKinetic teases embedded 3D depth camera, coming to Intel devices next year (hands-on)

At Intel’s Computex keynote earlier today, the chip maker teased that it expects embedded 3D depth cameras to arrive on devices in the second half of 2014. Luckily, we got an exclusive early taste of the technology shortly after the event, courtesy of SoftKinetic. This Belgian company not only licenses its close-range gesture tracking middleware to Intel, but it also manufactures time-of-flight 3D depth cameras — including Creative’s upcoming Senz3D — in partnership with South Korea-based Namuga. Read on to see how we coped with this futuristic piece of kit; we also have a video ready for your amusement.
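
Time-of-flight ranging itself is simple to state: the camera measures how long its emitted light takes to bounce off the scene and return, then halves the round trip. (Sensors like PMD's actually measure the phase shift of modulated light rather than timing a pulse directly, but the underlying relation is the same.) A minimal sketch:

```python
def tof_distance_m(round_trip_s, c=3.0e8):
    """Time-of-flight ranging: light travels out to the target and back,
    so distance is half the round-trip time multiplied by the speed of light.
    """
    return c * round_trip_s / 2.0

# A return arriving 10 nanoseconds after emission:
print(round(tof_distance_m(1.0e-8), 2))  # → 1.5 (metres)
```

The nanosecond-scale timing is what makes these sensors hard to build and easy to shrink once solved: per-pixel timing circuitry, not bulky optics, does the work.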

Mad Genius’ Motion Capture System brings Sony’s break-apart controller idea to life, and then some

Remember that break-apart DualShock 3 idea for motion control Sony had five years ago? A new company named Mad Genius Controllers has surfaced with a working prototype that shows such a contraption working in spades. The setup uses a splittable controller and a processing unit to enable seamless motion control and spatial tracking on any title and system. Because Mad Genius doesn’t use any accelerometers or cameras like the current consoles, its creator notes that accuracy of up to 1/100th of an inch is possible.

In a video demo with an Xbox 360 version of Skyrim and a modified Xbox gamepad, certain gestures and movements even automate menu selections like a macro. One instance shows the controller being split and held like a bow and arrow, highlighting that both sides are tracked in relation to each other — not to mention that the in-game character’s weapon automatically changes without any menu-digging by the user. The current version is merely a wired proof-of-concept, but Mad Genius plans to eventually make it wireless and hit Kickstarter for funding. In the meantime, you can build up anticipation for yourself by checking out the nearly 10-minute-long video demo after the break. All that’s left is the inevitable Oculus Rift tie-in (like we’ve just done with this post).

Source: Mad Genius Controllers (YouTube)

Instrument’s Map Diving for Chrome: like a Google I/O keynote, minus Sergey (video)

Let’s be honest: it’s doubtful we’ll ever get to directly recreate the skydiving antics of Google I/O 2012’s opening keynote. Some of us on the I/O 2013 floor, however, could get the next best thing. As part of a Google Maps API showcase, Portland-based Instrument has developed a Map Diving game for Chrome that has players soaring over real locations to reach Pilotwings-style checkpoints. The version that will be at the event links seven instances of Google’s web browser, each with its own display; gamers fly by holding out their arms in front of a motion camera like the Kinect or Wavi Xtion. Sergey Brin probably won’t be waiting for anyone on the ground once the demo’s over, but Instrument hints in a developer video (after the break) that there could be a take-home version of Map Diving after the code is tuned for a single screen. Either way, we can’t wait to give it a spin.

Via: The Verge

Source: Instrument

Oculus Rift’s Tuscany demo scores unofficial support for Razer Hydra (video)

Oculus Rift’s Tuscany demo was built with a good ol’ fashioned keyboard and mouse setup in mind, but now it’s unofficially scored support for motion controls. Sixense, the outfit behind Razer’s Hydra, has cooked up a custom version of the Italian-themed sample for use with its controller, and it gives gamers a pair of floating hands to pick up and manipulate objects. Originally shown at GDC, the tweaked experience is now up for grabs, and can even be played by those who don’t have a Rift — albeit with just the controller’s perks.

Booting up the retooled package offers users a new 3D menu, giving them options for arm length, crouching, head bobbing and a crosshair. It’s not the first project to combine Rift with Hydra, but it certainly helps illustrate the potential of such a setup. Sixense says it plans to release updates and the source code, and it recommends folks sign up for its project-specific email list and keep an eye on its forums for word on availability. Hit the source links below for the download, or head past the break to catch Road to VR’s hands-on with the Hydra-friendly Tuscan villa.

Via: Road to VR

Source: Sixense (1), (2)

Insert Coin: Duo kit lets you build your own 3D motion tracker

In Insert Coin, we look at an exciting new tech project that requires funding before it can hit production. If you’d like to pitch a project, please send us a tip with “Insert Coin” as the subject line.

Between the Kinect and Leap Motion, gesture control’s on just about everyone’s mind these days. There’s still a ways to go, certainly, before such devices become a mainstream method for interfacing with our PCs, but they’ve already become a ripe source of inspiration for the DIY community. Duo’s hoping to further bridge that gap with “the world’s first 3D motion sensor that anyone can build.” The desktop sensor features two PS3 Eye cameras that can track hands and objects for a more natural interface with one’s computer. Duo’s unsurprisingly looking to crowdfund its efforts. A pledge of $10 or more will get you early access to the company’s SDK. For $40 you’ll get the case and instructions; add $30 to that number, and you’ve got yourself the kit, which includes everything but the cameras ($110 will get you all that). Check out the company’s plea after the break, and if you’re so inclined, you can pledge at the source link below.
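
With two cameras, depth falls out of classic stereo triangulation: a nearby object shifts more between the two views than a distant one. Here's a minimal Python sketch of the relation; the numbers are hypothetical, since Duo hasn't published calibration specs for its PS3 Eye rig:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo triangulation: Z = f * B / d.

    A feature seen by both cameras appears `disparity_px` pixels apart
    between the two images; the nearer the feature, the larger the shift.
    `focal_px` is the focal length in pixels, `baseline_m` the distance
    between the two camera centers.
    """
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 600 px focal length, 6 cm between the two
# cameras, and a hand producing a 90-pixel disparity:
print(round(depth_from_disparity(600, 0.06, 90), 2))  # → 0.4 (metres)
```

The real engineering is in the matching step (finding the same hand pixel in both images reliably), which is what the SDK would be doing under the hood.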

Source: Kickstarter

IntuiLab shows a tool to build Leap Motion apps, no coding chops required (video)

It’s entirely possible to build motion-aware apps if you’ve got the know-how to wield a tool like the Kinect SDK. But what about the rest of us? IntuiLab may have the solution through an upcoming version of IntuiFace Presentation. The Windows software will let would-be developers create gesture-driven apps for the rapidly approaching Leap Motion controller using a simple trigger system. The results are self-evident in the video after the break: a basic app can react to finger pointing and swipes with comparatively little effort. While we’re not expecting any music games or other truly sophisticated releases, the updated IntuiFace could give us at least one avenue for our creativity when it launches in sync with the controller itself.
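
IntuiLab hasn't published its internals, but a "simple trigger system" presumably amounts to binding named gestures to actions, so designers never touch the recognition code. A hypothetical sketch of the idea in Python — every name here is invented for illustration:

```python
class TriggerMap:
    """Bind named gestures to actions, dispatch when a gesture fires.

    Hypothetical illustration of a no-code trigger system; not
    IntuiFace's actual architecture.
    """

    def __init__(self):
        self._actions = {}

    def on(self, gesture, action):
        """Bind a named gesture (e.g. 'swipe_left') to a callback."""
        self._actions.setdefault(gesture, []).append(action)

    def fire(self, gesture):
        """Called by the gesture recognizer when it detects `gesture`."""
        for action in self._actions.get(gesture, []):
            action()

triggers = TriggerMap()
triggers.on("swipe_left", lambda: print("next slide"))
triggers.on("point", lambda: print("select item"))
triggers.fire("swipe_left")  # prints "next slide"
```

In a tool like IntuiFace, the `on(...)` bindings would be wired up in a visual editor rather than code, while the recognizer feeding `fire(...)` ships with the software.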

Source: IntuiFace Presentation