Your Smartphone Gains a Mind of Its Own

A growing number of “smart” apps use artificial-intelligence algorithms to give you a more efficient, more personalized mobile experience.

Yandex, Russia’s ‘Homegrown Google’, Looks At Gesture-Based Interfaces To Power Apps

Yandex gesture social TV interface

Russian search giant Yandex has collaborated on an experimental gesture-based interface to explore how similar technology could be incorporated into future social apps and mobile products. The company already offers digital services beyond search, launching and expanding mapping services and translation apps, for instance, in a bid to drive growth as its domestic search share (60.5% as of Q4 2012) has not grown significantly in recent quarters. Future business growth for Yandex looks likely to depend on its ability to produce a pipeline of innovative products and services, hence its dabbling with gestures.

Yandex Labs, the division that came up with its voice-powered social search app Wonder (an app that was quickly blocked by Facebook), has been working with Carnegie Mellon University on a research project to create a gesture-based social interface designed for an Internet-connected TV. The interface, demoed in the above video, pulls in data from Facebook, Instagram and Foursquare to display personalised content that is navigated by the TV viewer from the comfort of their armchair using a range of hand gestures.

Here’s how Yandex describes the app on its blog:

The application features videos, music, photos and news shared by the user’s friends on social networks in a silent ‘screen saver’ mode. As soon as the user notices something interesting on the TV screen, they can easily play, open or interact with the current media object using hand gestures. For example, they can swipe their hand horizontally to flip through featured content, push a “magnetic button” to play music or video, move hands apart to open a news story for reading and then swipe vertically to scroll through it.
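The gesture vocabulary described above is essentially a mapping from recognized gestures to media actions. A minimal sketch of such a dispatcher, assuming a recognizer that emits gesture names as strings (all names here are illustrative, not from Yandex's system):

```python
# Hypothetical gesture-to-action dispatcher, assuming an upstream recognizer
# that emits gesture names as strings. Gesture and action names are invented
# for illustration.

def handle_gesture(gesture: str, state: dict) -> str:
    """Return the action triggered by a recognized gesture."""
    if gesture == "swipe_horizontal":
        # Flip through featured content.
        state["index"] = (state.get("index", 0) + 1) % state["item_count"]
        return "next_item"
    if gesture == "push":  # the "magnetic button" press
        return "play_media"
    if gesture == "hands_apart":
        return "open_story"
    if gesture == "swipe_vertical":
        return "scroll_story"
    return "ignore"  # unrecognized gestures do nothing

state = {"item_count": 5}
print(handle_gesture("swipe_horizontal", state))  # next_item
print(handle_gesture("push", state))              # play_media
```

Keeping the dispatcher separate from the recognizer means the same small gesture set can drive different content types, which is the design the Yandex blog describes.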

The app, which was built on a Mac OS X platform using Microsoft’s Kinect peripheral for gesture recognition, remains a prototype/research project, with no plans to make it into a commercial product. But Yandex is clearly probing the potential of gestures to power future apps.

Asked what sort of applications could suit the technology, Grigory Bakunov, Director of Technologies at Yandex, told TechCrunch that mobile apps are a key focus: “Almost any [Yandex services] that are available on mobiles now: search (to interact with search results, to switch between different search verticals, like search in pictures/video/music), probably maps apps and so forth [could incorporate a gesture-based interface].”

Bakunov stressed these suggestions are not concrete plans as yet, just “possible” developments as the company figures out how gesture interfaces could be incorporated into its suite of services in future. “We chose social newsfeeds to test the system [demoed in the video] as it can bring different types of content on TV screen like music listened by friends, photo they shared or just status updates. Good way to check all types in one app,” he added.

As well as researching the potential use-cases for gesture interfaces, Yandex also wanted to investigate alternatives to using Microsoft’s proprietary Kinect technology.

“Microsoft Kinect has its own gesture system and machine learning behind it. But the problem is that if you want to use it for other, non-Microsoft products you have to license it (and it costs quite a lot), plus it is fully controlled by Microsoft. So, one of the targets was to find a more open alternative with accessible APIs, better features and better cost-effectiveness,” said Bakunov.

Yandex worked with Carnegie Mellon students and Professor Ian Lane to train gesture recognition models and evaluate several machine learning techniques, including neural networks, Hidden Markov Models and Support Vector Machines. According to Yandex, the SVM-based approach was roughly 20% more accurate than the other systems evaluated.
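The common pipeline behind all three evaluated techniques is to treat each fixed-length gesture recording as one feature vector and train a classifier on labeled examples. As a simplified stand-in for the SVM/HMM/neural-network comparison, here is a toy nearest-centroid classifier on synthetic data; the frame count matches the 90-frame sequences mentioned in the article, but the feature sizes and data are invented for illustration:

```python
# Toy gesture classifier illustrating the shared pipeline shape: flatten each
# 90-frame recording into one vector, fit on labeled examples, then classify.
# Nearest-centroid is a simplified stand-in for the SVM/HMM/NN techniques the
# study actually compared; all sizes and data here are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_features, n_gestures = 90, 3, 4  # e.g. hand x/y/z per frame

def make_recordings(label, n=50):
    # Synthetic data: each gesture class clusters around its own mean.
    return [rng.normal(loc=label, scale=0.3,
                       size=(n_frames, n_features)).ravel()
            for _ in range(n)]

train = {g: make_recordings(g) for g in range(n_gestures)}
centroids = {g: np.mean(recs, axis=0) for g, recs in train.items()}

def classify(recording):
    """Assign a recording to the gesture class with the nearest centroid."""
    flat = np.asarray(recording).ravel()
    return min(centroids, key=lambda g: np.linalg.norm(flat - centroids[g]))

test_rec = rng.normal(loc=2, scale=0.3, size=(n_frames, n_features))
print(classify(test_rec))  # 2, the class it was drawn from
```

Real systems differ mainly in the classifier plugged into this pipeline, which is what makes the SVM-vs-HMM-vs-neural-network comparison a fair one.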

The blog adds:

They [students] put a lot of effort in building a real training set – they collected 1,500 gesture recordings, each gesture sequenced into 90 frames, and manually labeled from 4,500 to 5,600 examples of each gesture. By limiting the number of gestures to be recognized at any given moment and taking into account the current type of content, the students were able to significantly improve the gesture recognition rate.
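The improvement from "limiting the number of gestures to be recognized at any given moment" can be sketched as a context filter in front of the classifier: only gestures valid for the current content type compete for the recognition decision. Gesture names, content types and scores below are illustrative assumptions, not the students' actual design:

```python
# Sketch of context-limited recognition: restrict the candidate gesture set
# by the current content type, so the recognizer discriminates among fewer,
# more relevant classes. All names and scores are illustrative.
ALLOWED = {
    "music": {"swipe_horizontal", "push"},
    "news":  {"hands_apart", "swipe_vertical", "swipe_horizontal"},
}

def recognize(scores: dict, content_type: str) -> str:
    """Pick the highest-scoring gesture among those valid in this context."""
    candidates = {g: s for g, s in scores.items()
                  if g in ALLOWED[content_type]}
    return max(candidates, key=candidates.get)

# Raw classifier confidences for one observed hand movement:
scores = {"swipe_horizontal": 0.4, "push": 0.3,
          "hands_apart": 0.6, "swipe_vertical": 0.2}
print(recognize(scores, "music"))  # swipe_horizontal (hands_apart filtered out)
print(recognize(scores, "news"))   # hands_apart
```

Shrinking the candidate set reduces the chance of confusing similar gestures, which is one plausible reason the students saw the recognition rate improve.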

Carnegie Mellon researchers develop robot that takes inventory, helps you find aisle four

Fed up with wandering through supermarket aisles in an effort to cross that last item off your shopping list? Researchers at Carnegie Mellon University’s Intel Science and Technology Center in Embedded Computing have developed a robot that could ease your pain and help store owners keep items in stock. Dubbed AndyVision, the bot is equipped with a Kinect sensor, image processing and machine learning algorithms, 2D and 3D images of products and a floor plan of the shop in question. As the mechanized worker roams around, it determines if items are low or out of stock and if they’ve been incorrectly shelved. Employees then receive the data on iPads, and a public display updates an interactive map with product information for shoppers to peruse. The automaton is currently meandering through CMU’s campus store, but it’s expected to wheel out to a few local retailers for testing sometime next year.
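The shelf-audit step the robot performs, comparing what its vision system detected against the store's expected layout, can be sketched as follows. Product names, thresholds and data structures here are invented for illustration and are not from CMU's system:

```python
# Hedged sketch of AndyVision-style shelf auditing: compare detected items on
# a shelf against the expected layout (planogram) to flag out-of-stock,
# low-stock and misplaced products. All data here is illustrative.
from collections import Counter

def audit_shelf(planogram, detected, low_threshold=2):
    """Flag stock issues for one shelf given expected vs detected items."""
    counts = Counter(detected)
    report = {"out_of_stock": [], "low_stock": [], "misplaced": []}
    for item in planogram:
        n = counts.get(item, 0)
        if n == 0:
            report["out_of_stock"].append(item)
        elif n <= low_threshold:
            report["low_stock"].append(item)
    # Anything detected that the planogram doesn't expect is misplaced.
    report["misplaced"] = sorted(set(detected) - set(planogram))
    return report

report = audit_shelf(
    planogram=["cola", "chips", "soap"],
    detected=["cola", "cola", "cola", "chips", "candy"],
)
print(report)
# {'out_of_stock': ['soap'], 'low_stock': ['chips'], 'misplaced': ['candy']}
```

A per-shelf report like this is exactly the kind of structured data that could be pushed to employees' iPads and aggregated onto the public store map the article describes.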

Carnegie Mellon researchers develop robot that takes inventory, helps you find aisle four originally appeared on Engadget on Sat, 30 Jun 2012 19:53:00 EDT.

Source: MIT Technology Review