Xbox Live’s “Major Nelson” (Larry Hryb) has posted two handy cheat sheets for the Xbox One’s Kinect voice and gesture controls. The cheat sheets detail some of the more fundamental voice and gesture commands you can use at any given time, depending on the task at hand. The resolution is a bit […]
Microsoft this weekend shared a few details about how voice and gesture controls will work in Internet Explorer for Xbox One. In a post on Exploring IE, the company explained that you will be able to say “Xbox, select” to bring up a voice command menu when the Kinect is on, followed by […]
Motion-tracking technology that lets you control your smartphone from several feet away, even when it’s lying on a nearby table, could show up in handsets as soon as next year. Elliptic Labs’ gesture control system uses tiny ultrasonic sensors to grant phones and tablets 180-degree awareness, picking up hand movement from up to […]
How Gesture Control Actually Works
Gesture control sure is cool
Kinect-based Computer Orchestra Uses Computers as Musicians: You Are the Conductor
Nowadays it’s quite possible to create and play music live using a computer. You can also use MIDI controllers to make it easier for you to interact with music software and audio files. However, pushing keys and fiddling with knobs isn’t intuitive or fun to watch. Computer Orchestra manages to be both by letting you be a conductor of computers.
Computer Orchestra was made by three students from the art and design university ECAL. Simon de Diesbach, Jonas Lacôte and Laura Perrenoud designed it to be a crowdsourcing interface for uploading samples and then triggering them on different computers using simple hand gestures.
The idea is that you’ll upload samples to or download samples from a website, then assign those samples to your “musicians” – in this case, the members of the orchestra are all laptops. Using a Wi-Fi connection, a Kinect sensor, the Processing programming language and the SimpleOpenNI software library, you can then trigger those computers to play by waving your hands towards them. There also seem to be other gestures that vary the way the computers play the samples.
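To give a rough sense of how the gesture-to-trigger mapping could work, here’s a small conceptual sketch in Swift – not the students’ actual Processing/SimpleOpenNI code – that divides the conductor’s horizontal tracking range into one zone per laptop and fires a trigger at whichever zone the hand points toward. The musician names and sample files are invented for illustration.

```swift
import Foundation

// Conceptual sketch only: one zone of the hand-tracking range per laptop
// "musician", and a trigger aimed at whichever zone the hand enters.
struct Musician {
    let name: String      // hypothetical label for one laptop in the orchestra
    let sample: String    // sample assigned to that laptop
}

let musicians = [
    Musician(name: "laptop-1", sample: "strings.wav"),
    Musician(name: "laptop-2", sample: "choir.wav"),
    Musician(name: "laptop-3", sample: "percussion.wav")
]

/// Map a normalized hand x-position (0.0 ... 1.0, as a skeleton tracker
/// might report it) to the musician whose zone the hand is pointing toward.
func musician(forHandX x: Double) -> Musician {
    let clamped = min(max(x, 0.0), 0.999)
    let index = Int(clamped * Double(musicians.count))
    return musicians[index]
}

// A wave toward the right side of the room would trigger the third laptop.
let target = musician(forHandX: 0.85)
print("Trigger \(target.sample) on \(target.name)")
```

In the real installation the trigger would travel over Wi-Fi to the chosen laptop rather than being printed locally; splitting the tracking range into zones is just the simplest way to make a wave “toward” a particular computer unambiguous.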
I know it’s very impractical, but it also seems like a lot of fun. Perhaps it’s possible to make a simpler version of this with a Leap controller and an array of color or light sensors. Using one laptop per sample seems like overkill, although it’s a sight to behold.
[via Designboom]
Apple’s M7 Motion Sensing Coprocessor Is The Wizard Behind The Curtain For The iPhone 5s
Apple has a new trick up its sleeve with the iPhone 5s, one that was mentioned on stage during its recent reveal event but whose impact won’t really be felt until third-party developers take full advantage of it. Specifically, I’m talking about the M7 motion coprocessor, which now takes on the work of tracking motion and distance covered, drawing far less power and enabling some neat new tricks.
The M7 is already a boon to the iPhone 5s without any third-party app support – it makes the iPhone smarter about when to activate certain features and when to slow things down to conserve battery life, by checking less frequently for open networks, for instance. Because it’s more efficient than using the main A-series processor for these tasks, and because those behaviour changes themselves save power, the M7 stretches the built-in battery further, meaning you’ll get more talk time out of a device that’s packing one than you would otherwise.
Besides offering ways for Apple to make power management and efficiency more intelligent on the new iPhone 5s, the M7 is also available for third-party developers to take advantage of. This means big, immediately apparent benefits for the health and activity tracker market, since apps like Move or the Nike+ software demoed during the presentation will be able to capture data from the iPhone’s sensors much more efficiently.
The M7 means that everyone will be able to carry a Fitbit-like sensor in their pocket without carting around a separate device – no syncing over Bluetooth, no worrying about losing something that’s generally tiny, and no additional wristwear required. And the M7’s CoreMotion API is open to all developers, so it’s essentially like carrying around a very powerful motion-tracking gizmo in your pocket, limited in function only by what developers can dream up for it.
So in the future, we’ll likely see gesture-controlled games (imagine the iPhone acting as a gesture controller for a title broadcast to Apple TV via AirPlay), as well as all kinds of fitness trackers and apps that can use CoreMotion to limit battery drain or change functionality entirely depending on where and when they’re being used, as detected by motion cues. An app might offer very different modes while in transit, for instance, vs. when it’s stationary in the home.
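To make the developer angle concrete, here’s a minimal Swift sketch of the kind of CoreMotion calls a fitness or context-aware app can make; on M7-equipped hardware these queries are served by the coprocessor rather than the main CPU. The modern Swift method names are used here (the API shipped with iOS 7 under Objective-C naming), and the mode-switching behaviour is just a placeholder.

```swift
import Foundation
import CoreMotion

let activityManager = CMMotionActivityManager()
let pedometer = CMPedometer()

if CMMotionActivityManager.isActivityAvailable() {
    // Continuously classify what the user is doing: stationary, walking,
    // running, automotive, and so on.
    activityManager.startActivityUpdates(to: .main) { activity in
        guard let activity = activity else { return }
        if activity.automotive {
            print("In transit – switch to the app's on-the-go mode")
        } else if activity.stationary {
            print("Stationary – fall back to the at-home mode")
        }
    }
}

if CMPedometer.isStepCountingAvailable() {
    // Step and distance counts accumulate even while the app isn't running,
    // because the coprocessor keeps logging motion data in the background.
    pedometer.startUpdates(from: Date()) { data, error in
        guard let data = data, error == nil else { return }
        print("Steps so far: \(data.numberOfSteps)")
    }
}
```

The point is that an app can keep asking these questions without the battery cost that polling the accelerometer from the main processor would incur.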
Apple’s iPhone 5s is an interesting upgrade in that much of what’s changed takes the form of truly innovative engineering advances, with tech like the fingerprint sensor, camera and M7 that are each, in and of themselves, impressive feats of technical acumen. That means, especially in the case of the M7, the general consumer might not even realize how much of a generational shift this is until they get their hands on one, and new software experiences released over the hardware’s lifetime will gradually reveal even more about what’s changed.
Kiwi Wearables Shows Off A Way To Use Its Personal Tracker Device To Make Music
Single-function wearable devices are old-school and a massive waste of potential, according to a new Toronto-based startup called Kiwi Wearable Tech, which is building a hardware device as well as a cloud-based platform for turning the data its wearables gather into a wide variety of experiences. The Kiwi team was at the Disrupt Hackathon this year and built a demo app to show off the power of its platform, one that translates motion captured by its device into music using cloud-stored MIDI files.
Kiwi co-founders Zaki Hasnain Patel and Ashley Beattie say that the hack can use any kind of instrument that can be made into a MIDI-based output, and that since it works via the cloud, it’s possible for a number of “players” to use Kiwi-based instruments simultaneously for collaborative music creation.
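Kiwi hasn’t published the details of its cloud pipeline, so the following Swift snippet is only a conceptual sketch of the motion-to-music idea: a single accelerometer sample becomes a raw MIDI note-on message, with more vigorous movement producing a higher velocity. The scale, the axis-to-pitch rule and the scaling factor are all invented for the example.

```swift
import Foundation

// One motion reading from a wearable, in g.
struct AccelerometerSample {
    let x, y, z: Double
}

/// Build the three bytes of a MIDI note-on message from one motion sample.
func midiNoteOn(from sample: AccelerometerSample, channel: UInt8 = 0) -> [UInt8] {
    let magnitude = (sample.x * sample.x + sample.y * sample.y + sample.z * sample.z).squareRoot()
    // Pick a pitch from a pentatonic scale based on the dominant axis,
    // and scale velocity with how vigorous the gesture was.
    let scale: [UInt8] = [60, 62, 64, 67, 69]          // C major pentatonic
    let pitchIndex = abs(sample.x) > abs(sample.y) ? 0 : (sample.z > 0 ? 2 : 4)
    let velocity = UInt8(min(127.0, magnitude * 40.0))
    return [0x90 | channel, scale[pitchIndex], velocity] // status, note, velocity
}

// A sharp flick of the wrist might come out as:
let message = midiNoteOn(from: AccelerometerSample(x: 1.8, y: 0.3, z: 0.9))
print(message)   // [144, 60, 81]
```

Constructing the note-on bytes directly keeps the sketch self-contained; a real client would hand them to a MIDI output port or, in Kiwi’s case, to the cloud service that drives the instruments.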
The purpose of Kiwi is to turn its Move platform into something developers can use to build a wide range of apps. You could have a fitness-tracking app like RunKeeper use it to track your activity, for instance, then use it to monitor your motion during a baseball swing to derive the optimal body movement for big hits, and then have the same device turn on your connected home lighting system and activate your home theatre when you get home via a series of gestures (in addition to measuring movement, the Kiwi Move can detect things like double taps on its surface and sides, too).
That’s only the beginning, however. Patel and Beattie say they’re working on ways the Kiwi could help with early alerts for health problems – detecting heart attacks in advance, for instance, by keying into early warning signs. Beattie says current methods make it possible to detect a heart attack up to 13 hours in advance, and that by working with developers in the medical community, Kiwi could provide a warning roughly three hours ahead of time, based on their current research. It’s another case where they’d be relying on the community to take advantage of their platform to advance the possibilities, but it’s an interesting example of what could be accomplished by not limiting wearable tracking to a single purpose.
Kiwi has yet to ship any hardware, but it has a working prototype, is currently taking pre-orders via its website, and plans to launch a crowdfunding campaign on September 24. Kickstarter is the target platform, since its recent launch in Canada and high profile make it a good option for a Toronto-based startup, but the team says it could consider other options as well.
3D Gesture Control Is An Area Of Focus For Innovation We Likely Don’t Need Or Want
Minority Report was an enjoyable action flick, but it may be to blame for getting the idea stuck in our collective heads that 3D gesture control is the next frontier for computing. Microsoft’s Kinect helped spread this idea around as well, with a pretty good (though highly limited in terms of required space, applications, etc.) gesture experience. But a lot of startups and other companies are chasing this carrot – and it raises the question of whether there’s even a carrot to chase.
Maybe the most headline-grabbing of those chasing the gesture control carrot is Leap Motion. The company raked in lots of pre-order interest for its device, which uses infrared tech to track finger and hand movements in 3D space and then map those to controls for apps on a computer. But then it arrived, and the reality was nothing like people had imagined, even after the company delayed the device’s release for an extended beta to polish the consumer experience.
Leap Motion had good reason to go back to the drawing board: there’s a huge risk with this kind of device, because if you aren’t simply blown away by it, it ends up in a drawer and never gets used again. Unfortunately for the company, after a couple of weeks of using one, I suspect that’s the fate awaiting a lot of its controllers.
Early reviews were not very kind to the Leap Motion, and even so, a lot of them may have been over-generous. The controller is impressive enough during its demo, when it shows you the finger points and hand-model skeleton it’s detecting, but even there it’s apparent that the detection is finicky, requiring your hands to occupy a sweet spot relative to the gadget itself to work really well.
Even when you’re in that zone, the problems don’t end. How each app uses gesture input varies, and things like web browsing with it are a definite pain. On balance, you get more frustration than pleasure out of the experience, and that’s not good for long-term adoption.
The experience of Leap Motion is flawed enough that it makes me wonder whether gesture control is something it’s even possible to get right. Minority Report painted an idealized picture of how it might look, but it is, after all, a work of fiction – and think about what the Tom Cruise character is actually doing in many of those scenes: wouldn’t it be easier to accomplish the same thing with a traditional multi-monitor setup, a keyboard and a mouse?
There are a lot of people looking at gesture control right now, including Waterloo’s Thalmic Labs with its MYO armband, the new Haptix Kickstarter, and pmdtechnologies from Germany with their CamBoard pico. Microsoft is also refining and improving upon its Kinect for the upcoming Xbox One console.
Gesture input is a tempting area of focus, since it has clearly been the subject of plenty of imaginative work in speculative and science fiction. Kinect and the Wii showed us that large groups of people could enjoy that kind of interaction, but only in very specific contexts. Even if executed well, I’m not sure any solution is going to be anything other than a niche curiosity – we’ll probably see input take other, unexpected evolutionary paths instead. The MYO and others could still prove me wrong (and I hope they do), but if you’ve got a farm to bet, I wouldn’t bet it on a gesture control revolution.
A few years ago we took a look at Pranav Mistry’s Mouseless, a prototype for a camera-based pointing device. Now, a startup called Haptix Touch is raising money on Kickstarter for a very similar – and possibly better – product. It’s called the Haptix, and I would love to trade my mouse for it.
Haptix turns any surface into a multitouch interface. It connects to computers via USB and uses two CMOS image sensors and a patent-pending algorithm. Like Mouseless, Haptix also has an infrared tracking mode for low light situations. In my brief chat with Haptix Touch Co-Founder Darren Lim, he said that the Haptix can track and assign different functions to up to 10 objects. For example, you can map your index finger to the mouse cursor, your thumb for left click, and so on. You can even tell it to ignore an object. This means you can use your table or desk as a touchpad, use a pen to draw or sketch in an image editing program or – my favorite – use your keyboard as your mouse.
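Haptix’s developer API and dev kits weren’t available at the time of writing, so the sketch below is purely hypothetical: a Swift illustration of the object-to-function mapping Lim describes, where each tracked object gets a pointer role and any object can be told to be ignored. All of the object names and roles are made up for the example.

```swift
import Foundation

// Hypothetical roles a tracked object could be assigned.
enum PointerAction {
    case moveCursor
    case leftClick
    case rightClick
    case ignore
}

// Up to ten tracked objects, each assigned a role (names are illustrative).
var trackedObjectRoles: [String: PointerAction] = [
    "index-finger": .moveCursor,
    "thumb": .leftClick,
    "middle-finger": .rightClick,
    "pen": .moveCursor,
    "coffee-mug": .ignore
]

/// Decide what to do when the sensor reports that a tracked object moved.
func handleMovement(of object: String) {
    switch trackedObjectRoles[object] ?? .ignore {
    case .moveCursor: print("\(object): move the cursor")
    case .leftClick:  print("\(object): left click")
    case .rightClick: print("\(object): right click")
    case .ignore:     break
    }
}

handleMovement(of: "index-finger")   // index-finger: move the cursor
```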
Pledge at least $65 (USD) on Kickstarter to get a Haptix controller as a reward. The current version of Haptix works with Windows 8 and other touch-optimized programs out of the box. Lim said it will support Android and OS X devices by the time it’s commercially available, which is hopefully near the end of 2013. Lim also said they will release the developer API and dev kits after Haptix is launched.
If you had your eye on Ubi Interactive’s multitouch software, you’ll be glad to know that it’s now on sale. For those unfamiliar with the product, sit back and relax. I’ll take you to a world where any surface can become a touchscreen. As long as you have a computer that runs Windows 8. And a projector. And a Kinect. For Windows.
The Ubi program allows you to interact with Windows 8 programs from a projected display, as if your wall or canvas were a giant touchscreen. It uses Kinect for Windows – which is different from the one that works with the Xbox 360 – to map your fingers or hand and register their input.
Ubi Interactive says that Ubi will work with any projector as long as it has a “high enough intensity for the image to be visible in your lighting conditions.” The computer running Ubi doesn’t have to have a touchscreen itself. It just has to run Windows 8 and the resolution of the display being projected should be at least 720p. Its biggest restriction is that it will only work with Windows apps that have been optimized for touchscreens.
You can order Ubi from Ubi Interactive’s website; it costs between $149 and $1,499 (USD) depending on the version you want. The Kinect isn’t included with the software, but then again, the total cost of a Ubi setup is less than what you’d shell out for an actual wall-sized touchscreen.
[via CNET]