Insert Coin: the ixi-play robot owl monitors toddlers, helps them learn (video)

In Insert Coin, we look at an exciting new tech project that requires funding before it can hit production. If you’d like to pitch a project, please send us a tip with “Insert Coin” as the subject line.

Isn’t a baby monitor effectively a waste of technology? With a bit more thought and an operating system, couldn’t it do much more with its components than just scope your infant? That’s the premise behind Y Combinator-backed ixi-play, an Android-powered robot that just launched on the Crowdhoster crowdfunding platform. On top of Android 4.2, a dual-core ARM Cortex-A9 CPU, 1GB of RAM and a 720p camera, the owlish ‘bot has face, card and object detection, voice recognition, a touch sensor on the head, eye displays for animations, a tweeter/woofer speaker combo and child-proof “high robustness.” For motion, the team adopted a design used in flight simulators, giving ixi-play “agile and silent” 3-axis translation and rotation moves.

All that tech is in the service of one thing, of course: your precious snowflake. There are currently three apps for ixi-play: a baby monitor, language learning and animal-themed emotion cards. As the video shows (after the break), the latter app lets your toddler flash cards at the bot to make it move or emote via the eye displays, matching the anger or happiness shown on the card. In baby monitor mode, on top of sending a live (encoded) video stream to your tablet, it’ll also play soothing music and sing or talk your toddler to sleep. The device will also ship with an SDK covering low-level motion control and vision programming, giving developers a way to create more apps. As for pricing, you can snap one up starting at $299 for delivery around July 24th, 2014, provided the company meets its $957,000 funding goal (pledges are backed by Crowdtilt). That’s exactly the same price we saw recently for a far less amusing-sounding baby monitor, so if you’re interested, hit the source.
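
The SDK itself hasn’t been published, so purely as an illustration of the kind of vision app it could enable, here’s a minimal card-recognition sketch using OpenCV’s ORB feature matching; the library choice, file names and thresholds are all our own assumptions, not ixi-play’s.

```python
# Minimal sketch of flash-card recognition via ORB feature matching (OpenCV).
# This shows the general technique only; ixi-play's actual SDK and vision
# pipeline are unpublished, so every name and number here is our own.
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def load_card(path):
    """Precompute ORB descriptors for a reference card image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return orb.detectAndCompute(img, None)[1]

# Hypothetical card set: the file names are placeholders.
cards = {"angry": load_card("angry_card.png"),
         "happy": load_card("happy_card.png")}

def match_card(frame):
    """Return the best-matching card label for a camera frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return None
    best_label, best_score = None, 0
    for label, ref in cards.items():
        matches = matcher.match(ref, desc)
        good = [m for m in matches if m.distance < 40]  # empirical cutoff
        if len(good) > best_score:
            best_label, best_score = label, len(good)
    return best_label if best_score >= 15 else None  # confidence floor
```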

Source: Ixi-Play

Galaxy S 4, future Samsung devices to use DigitalOptics tech for face tracking

When Samsung unveiled the Galaxy S 4 in March, there was a near-inescapable emphasis on face detection features. What we didn’t know is just whose technology was making them possible. As it happens, it’s not entirely Samsung’s — DigitalOptics has stepped forward to claim some of the responsibility. The California firm recently struck a multi-year licensing deal with Samsung to supply its Face Detection and Face Tracking software, which can detect pupils for interface features (think Smart Stay or Smart Pause) and keep tabs on photo subjects. DigitalOptics hasn’t provided the exact details of its involvement in the GS4, let alone a roadmap, but it’s safe to presume that Samsung isn’t dropping its emphasis on camera-driven software anytime soon.
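
DigitalOptics’ software is proprietary, but the general face-then-eyes detection idea behind a Smart Stay-style feature is easy to sketch with OpenCV’s stock Haar cascades; treat this as a rough stand-in, not the company’s actual pipeline.

```python
# Smart Stay-style sketch using OpenCV's bundled Haar cascades.
# DigitalOptics' actual software is proprietary; this only illustrates
# the general face-then-eyes detection idea behind such features.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def viewer_is_looking(frame):
    """True if at least one face with two detectable eyes is in frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half
        if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) >= 2:
            return True
    return False

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("keep screen on" if viewer_is_looking(frame) else "allow dim")
cap.release()
```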

Source: DigitalOptics

Apple applies for patent that scales content to match face distance, saves us from squinting

Most software has to be designed around a presumed viewing distance, whether it’s up close for a smartphone or the 10-foot interface of a home theater hub. Apple has been imagining a day when the exact distance could be irrelevant: it’s applying for a patent that would automatically resize any content based on viewing distance. By using a camera, infrared or other sensors to gauge face proximity, whether through facial recognition or pure range-finding, the technique could dynamically resize a map or website to keep it legible at varying distances. Although the trick could work with almost any device, the company sees that flexibility as most relevant for a tablet, and it’s easy to understand why: iPad owners could read on the couch without needing to manually zoom in as they settle into a more relaxed position. There’s no telling whether Apple will implement an automatic scaling feature in iOS or OS X, let alone make it the default setting. If the Cupertino team ever goes that far, though, we’ll only have our own eyesight to blame if we can’t read what’s on screen.
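
The filing doesn’t spell out the math, but a plausible back-of-the-envelope version uses the pinhole-camera relation (distance ≈ focal length × real face width / face width in pixels) and scales content linearly with distance; every constant below is an assumption for illustration, not a value from Apple’s application.

```python
# Back-of-the-envelope sketch of distance-adaptive content scaling.
# All constants are assumptions, not values from Apple's filing.
FOCAL_LENGTH_PX = 600.0        # camera focal length in pixels (assumed)
REAL_FACE_WIDTH_CM = 14.0      # typical adult face width (assumed)
REFERENCE_DISTANCE_CM = 40.0   # distance at which scale == 1.0

def estimate_distance_cm(face_width_px):
    """Pinhole model: distance = focal_length * real_width / pixel_width."""
    return FOCAL_LENGTH_PX * REAL_FACE_WIDTH_CM / face_width_px

def content_scale(face_width_px):
    """Scale content linearly with distance so angular size stays constant."""
    return estimate_distance_cm(face_width_px) / REFERENCE_DISTANCE_CM

# A face spanning 210 px implies ~40 cm away, so scale is 1.0; lean back
# to 105 px (~80 cm) and content doubles in size to stay legible.
print(round(content_scale(210), 2), round(content_scale(105), 2))  # 1.0 2.0
```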

Source: USPTO

Sony takes SOEmote live for EverQuest II, lets gamers show their true CG selves (video)

We had a fun time trying Sony’s SOEmote expression-capture tech at E3; now everyone can give it a whirl. As of today, most EverQuest II players with a webcam can map their facial expressions to their virtual personas while they play, whether it’s to catch the nuances of conversation or drive home an exaggerated game face. Voice masking also lets RPG fans stay as much in (or out of) character as they’d like. About the only question left for those willing to brave the uncanny valley is when other games will get the SOEmote treatment. Catch our video look after the break if you need a refresher.
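
Sony hasn’t detailed SOEmote’s internals, so here’s only a toy illustration of the usual approach: map distances between tracked facial landmarks to 0-to-1 avatar blendshape weights. The landmark data below is dummy input standing in for a real face tracker.

```python
# Toy sketch of landmark-driven expression mapping, the general idea
# behind tools like SOEmote. Landmarks here are dummy data; a real
# system would receive them from a webcam face tracker every frame.
import numpy as np

def mouth_open_weight(landmarks, neutral_gap, max_gap):
    """Map lip gap (pixels) to a 0..1 avatar 'jaw open' weight."""
    gap = np.linalg.norm(landmarks["lower_lip"] - landmarks["upper_lip"])
    return float(np.clip((gap - neutral_gap) / (max_gap - neutral_gap), 0, 1))

def brow_raise_weight(landmarks, neutral_h, max_h):
    """Map brow-to-eye distance (pixels) to a 0..1 'brow raise' weight."""
    h = np.linalg.norm(landmarks["brow"] - landmarks["eye"])
    return float(np.clip((h - neutral_h) / (max_h - neutral_h), 0, 1))

frame = {  # dummy 2D landmark positions in pixels
    "upper_lip": np.array([320.0, 400.0]),
    "lower_lip": np.array([320.0, 428.0]),
    "brow": np.array([300.0, 280.0]),
    "eye": np.array([300.0, 310.0]),
}
print("jaw:", mouth_open_weight(frame, neutral_gap=8, max_gap=40))   # 0.625
print("brow:", brow_raise_weight(frame, neutral_h=25, max_h=40))     # ~0.33
```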

Source: SOEmote

Apple patents iOS 5’s exposure metering based on face detection, keeps friends in full view

Many photographers will tell you that their least favorite shooting situation is a portrait with the sun at the subject’s back: there’s a good chance the shot ends up an unintentional silhouette study unless the shooter meters just perfectly from that grinning face. Apple has just been granted a patent for a metering technique that takes the guesswork out of those human-focused shots on an iOS 5 device like the iPhone 4S or new iPad. As designed, the invention finds faces in the scene and adjusts the camera exposure to keep them all well-lit, even if they’re fidgety enough to move at the last second. Group shots are just as much of a breeze, with the software using head proximity and other factors to pick either a main face as the metering target (such as a person standing in front of a crowd) or an average if there are enough people posing for a close-up. You can explore the full details at the source. Camera-toting rivals, however, will have to pursue alternative ideas.
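
The patent’s core move, metering from detected faces instead of the whole scene, is simple to sketch: average luminance over the face boxes (weighted by area, so a dominant foreground face wins out) and derive an exposure nudge toward mid-gray. The weighting scheme and target below are our assumptions, not Apple’s claimed method.

```python
# Sketch of face-priority exposure metering in the spirit of the patent.
# The weighting and target constant are our assumptions, not Apple's.
import numpy as np

TARGET_LUMA = 118.0  # roughly mid-gray for 8-bit luma (assumed)

def face_metered_ev(luma, faces):
    """luma: HxW array of 8-bit luminance; faces: list of (x, y, w, h).
    Returns an exposure-compensation value in EV stops."""
    if not faces:
        measured = luma.mean()  # no faces: fall back to full-frame average
    else:
        total, weight = 0.0, 0.0
        for x, y, w, h in faces:
            area = w * h  # bigger (closer) faces dominate the meter
            total += luma[y:y + h, x:x + w].mean() * area
            weight += area
        measured = total / weight
    return float(np.log2(TARGET_LUMA / max(measured, 1.0)))

# Backlit portrait: a dark face box in a bright frame calls for +EV.
frame = np.full((480, 640), 200.0)   # bright background
frame[180:300, 260:380] = 60.0       # underexposed face region
print(round(face_metered_ev(frame, [(260, 180, 120, 120)]), 2))  # ~ +0.98
```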

Source: USPTO

Second Story uses Kinect for augmented shopping, tells us how much that doggie is in the window (video)

Second Story isn’t content to leave window shoppers guessing whether they can afford that dress or buy it in mauve. A new project at the creative studio pairs a Kinect for Windows sensor with a Planar LookThru transparent LCD enclosure to provide an augmented reality overlay for whatever passers-by see inside the box. The Microsoft peripheral’s face detection keeps the perspective accurate and (hopefully) entrances would-be customers. Coming from an outlet that specializes in bringing this sort of work to corporate clients, the potential for retail use is more than a little obvious, but it’s not exclusive: the creators imagine the system also applying to art galleries, museums and anywhere else that some context would come in handy. If it becomes a practical reality, we’re looking forward to Second Story’s project dissuading us from the occasional impulse luxury purchase.
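
The perspective correction in a setup like this amounts to head-coupled parallax: as the tracked head moves, the overlay on the glass shifts so annotations stay visually pinned to the object behind it. Here’s a geometry-only sketch with assumed dimensions; Second Story’s actual implementation isn’t public.

```python
# Geometry-only sketch of head-coupled parallax for a transparent display.
# All dimensions are assumptions; Second Story's implementation is private.

def overlay_offset(head_x_cm, head_dist_cm, object_depth_cm):
    """Horizontal shift (cm) for an overlay annotating an object sitting
    object_depth_cm behind the glass, viewed from (head_x, head_dist).

    By similar triangles, the sight line from the head to the object
    crosses the glass at a fraction depth / (dist + depth) of the head's
    lateral offset, in the same direction the head moved."""
    return head_x_cm * object_depth_cm / (head_dist_cm + object_depth_cm)

# Viewer steps 30 cm to the right while standing 100 cm from the glass;
# the tag for an object 50 cm inside the case shifts 10 cm to stay on it.
print(overlay_offset(30.0, 100.0, 50.0))  # 10.0
```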

Via: Next at Microsoft, The Next Web
Source: Second Story

Google patent filing would identify faces in videos, spot the You in YouTube

Face detection is a common sight in still photography, but it’s a rarity in video outside of certain research projects. Google may be keen to take some of the mystery out of those clips through a just-published patent application: its technique uses video frames to generate clusters of face representations that are attached to a given person. By knowing what a subject looks like from various angles, Google could then attach a name to a face whenever it shows up in a clip, even at odd angles and in strange lighting conditions. The most obvious purpose would be to give YouTube viewers a Flickr-like option to tag people in videos, but it could also be used to spot people in augmented reality apps and pull up their details: imagine never being at a loss for information about a new friend as long as you’re wearing Project Glass. As a patent application, it’s not a definitive roadmap for where Google is going with any of its properties, but it could be a clue to the search giant’s thinking. Don’t be surprised if YouTube can eventually prove that a Google+ friend really did streak across the stage at a concert.
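
The clustering step in the filing, grouping face representations across frames and then naming each cluster, can be approximated with a simple greedy threshold clustering over face embeddings. The embeddings below are random placeholders; a real system would get them from a face-recognition model, and the patent doesn’t specify its representation.

```python
# Greedy clustering of per-frame face embeddings, a simple stand-in for
# the patent's face-representation clusters. Embeddings here are random
# placeholders; a real system would use a face-recognition model.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_faces(embeddings, threshold=0.8):
    """Assign each embedding to the first cluster whose centroid it
    matches above `threshold`, otherwise start a new cluster."""
    clusters = []  # each entry: [centroid_sum, count, member_indices]
    for i, e in enumerate(embeddings):
        for c in clusters:
            if cosine(e, c[0] / c[1]) >= threshold:
                c[0] += e
                c[1] += 1
                c[2].append(i)
                break
        else:
            clusters.append([e.copy(), 1, [i]])
    return [c[2] for c in clusters]

rng = np.random.default_rng(0)
person_a = rng.normal(size=128)
frames = [person_a + rng.normal(scale=0.05, size=128) for _ in range(3)]
frames.append(rng.normal(size=128))  # a different face entirely
print(cluster_faces(frames))  # [[0, 1, 2], [3]] -> tag cluster 0 "Alice"
```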

Source: USPTO

Google simulates the human brain with 1,000 machines, 16,000 cores and a love of cats

Don’t tell Google, but its latest X lab project is something performed by the great internet public every day. For free. Mountain View’s secret lab stitched together 1,000 computers totaling 16,000 cores to form a neural network with over 1 billion connections, and sent it to YouTube looking for cats. Unlike the popular human time-sink, this was all in the name of science: specifically, simulating the human brain. The neural machine was presented with 10 million images taken from random videos, and went about teaching itself what our feline friends look like. Unlike similar experiments, where some manual guidance and supervision is involved, Google’s pseudo-brain was given no such assistance.

It wasn’t just about cats, of course: the broader aim was to see whether computers can learn face detection without labeled images. After studying the large image set, the cluster showed that it indeed could, and it also developed concepts for human body parts and, of course, cats. Overall, the network managed 15.8 percent accuracy in recognizing 20,000 object categories, which the researchers claim is a 70 percent jump over previous studies. Full details of the hows and whys will be presented at a forthcoming conference in Edinburgh.
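
The heart of the experiment is unsupervised feature learning: reconstruct unlabeled inputs and let useful features emerge on their own. A one-layer autoencoder is the textbook miniature of that idea; Google’s network had over a billion connections, so this sketch (on random stand-in data) shows only the shape of the technique, not the study’s architecture.

```python
# Miniature of unsupervised feature learning: a one-layer autoencoder
# trained to reconstruct unlabeled "image patches" (random data here).
# Google's network was vastly larger; this shows only the principle.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 64))                 # 256 unlabeled 8x8 patches, flattened
W = rng.normal(scale=0.1, size=(64, 16))  # 16 learned feature detectors
b, c = np.zeros(16), np.zeros(64)
lr = 0.1

for step in range(500):
    H = np.tanh(X @ W + b)    # encode into feature activations
    R = H @ W.T + c           # decode back to the input (tied weights)
    err = R - X               # reconstruction error drives all learning
    dH = (err @ W) * (1 - H**2)
    W -= lr * (X.T @ dH + err.T @ H) / len(X)
    b -= lr * dH.mean(axis=0)
    c -= lr * err.mean(axis=0)
    if step % 100 == 0:
        print(step, round(float((err**2).mean()), 4))  # loss should fall
```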

Via: SMH.com.au
Source: Cornell University, New York Times, Official Google Blog