Microsoft Research turns Kinect into canny sign language reader (video)

Though early Kinect patents showed its potential for sign language translation, Microsoft quashed any notion early on that this would become a proper feature. However, that hasn’t stopped Redmond from continuing development of the idea. Microsoft Research Asia recently showed off software that allows the Kinect to read almost every American Sign Language gesture via hand tracking, even at conversational speeds. In addition to converting signs to text or speech, the software can also let a hearing person input text and “sign” it using an on-screen avatar. All of this is still confined to a lab so far, but the researchers hope that one day it’ll open up new lines of communication between the hearing and deaf — a patent development we could actually get behind. See its alacrity in the video after the break.

Via: Gizmodo

Source: Microsoft Research

Google Hangouts receive sign language interpreter support, keyboard shortcuts

Video chat can be an empowering tool for hard-of-hearing internet citizens for whom sign language is easier than voice. Most chat software doesn’t easily bring an interpreter into the equation, however, which spurred Google into adding a Sign Language Interpreter app for Google+ Hangouts. The web component lets chatters invite an interpreter who stays in the background and voices their hand gestures. Google is also helping reduce dependence on the mouse for those who can’t (or just won’t) use one during chat: there are now keyboard shortcuts to start or stop chats, disable the camera and handle other basics that would normally demand a click. Both the interpreter app and the shortcuts are available today.

Via: The Verge

Source: Anna Cavender (Google+)

Sigma R&D shows Kinect sign language and Jedi savvy to win gesture challenge (video)

Sigma R&D has won first prize in a gesture challenge by showing just how much more talent, like sign language translation and lightsaber fun, can be unlocked in a Kinect. Normally the Microsoft device can only scope body and full mitt movements, but the research company was able to track individual fingers using a Kinect or similar sensor plus its custom software, allowing a user’s hand to become a more finely tuned controller. To prove it, the company handed a subject a virtual lightsaber, tracking his swordsmanship perfectly and using the extension of his thumb to switch it on and off. The system even detected when a passing gesture was made, seamlessly making a virtual transfer of the weapon. The same tech was also used to read sign language, displaying the intended letters on screen for a quick translation. The SDK is due in the fall, when we can’t wait to finally get our hands on a Jedi weapon that isn’t dangerous or plasticky. To see it for yourself, check out the videos after the break.

Sigma R&D shows Kinect sign language and Jedi savvy to win gesture challenge (video) originally appeared on Engadget on Wed, 25 Jul 2012 10:57:00 EDT.

Source: Sigma R&D