Worried that you’ve piled your plate too high at the buffet? Researchers at SRI—the folks who created Siri before Apple bought it—are working on a new app that uses image recognition and clever AI to provide a fairly accurate estimate of the calories you’re about to consume.
Image Recognition Startup Slyce Raises $10.75M To Be The Amazon Flow For Everyone Else
Toronto-based startup Slyce has raised a new round of $10.75 million in funding, led by Beacon Securities, and including PI Financial, Salman Partners, Harrington Global and more. The company builds image recognition tech, and wants to be the Amazon Flow for every other retailer on the planet, enabling point-and-shoot shopping with smartphone cameras. If you’re not familiar with Amazon Flow,…
A Fish Drives a Car
A while back we featured a robot vehicle that was made to be driven by a parrot. This one’s meant for fish. Image recognition specialist Studio diip made Fish on Wheels to showcase its prowess in its field. The vehicle moves by following the movement of the fish inside the tank.
A webcam positioned above the tank feeds video to a Beagleboard, which analyzes the position of the fish by contrasting the animal’s body with the bottom of the tank. The instructions are then sent to the vehicle itself, which is powered by an Arduino.
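The track-the-dark-blob pipeline can be sketched in a few lines. The threshold value, the position bands, and the command names below are illustrative guesses, not Studio diip's actual code:

```python
import numpy as np

def fish_drive_command(frame, dark_threshold=80):
    """Locate a dark fish against a light tank bottom and map its
    position to a simple drive command for the vehicle.

    `frame` is a 2-D numpy array of grayscale pixel values (0-255).
    """
    # The fish shows up as the darkest blob in the frame.
    mask = frame < dark_threshold
    if not mask.any():
        return "stop"  # no fish found

    # The centroid of the dark pixels approximates the fish's position.
    rows, cols = np.nonzero(mask)
    cx = cols.mean() / frame.shape[1]  # 0.0 = left edge, 1.0 = right

    # Steer toward the side of the tank the fish is swimming in.
    if cx < 0.4:
        return "turn_left"
    elif cx > 0.6:
        return "turn_right"
    return "forward"
```

A real build would grab frames from the webcam (e.g. via OpenCV) on the BeagleBoard and send the resulting command to the Arduino over serial.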
This summer, watch as Nemo is awakened from cryogenic sleep in 2099 to search for the remains of Wall-E under the ocean and use it to beat the tyrannical ruler Lightning McQueen in Finding Nemo 2: Cars 2: Wall-E 2: Judgment Day.
[via Studio diip via prosthetic knowledge]
It’s only a matter of time before things go the way of Skynet, and this new algorithm is a stepping stone along the way: it can learn to identify objects all by itself, with zero human help. Gulp.
Virgin Mobile’s YouTube Page Can Use Your Webcam to Tell When You Blink
Virgin Mobile’s YouTube page can—if you let it—use your webcam to tell when you blink and change the ads each time you do so. It’s creepy, but kinda cool.
Computers and sensors are quickly decreasing in cost and size, making it easier than ever before to build smart gadgets or robots. From accelerometers to thermal sensors, electronics nowadays can detect and record a variety of events and objects in their surroundings. Here’s one more sensor to add to your robot overlord-in-training. It’s called Pixy, a camera that identifies objects through color.
Pixy was made by Charmed Labs and embedded systems experts from Carnegie Mellon University. It’s actually the team’s fifth version of a smart, low-cost vision sensor, which they previously called the CMUcam. What separates Pixy from other image sensors is that it sends only a small amount of data and has its own microprocessor. These traits make it possible to integrate Pixy even with microcontrollers like the Arduino.
Pixy identifies objects using “a hue-based color filtering algorithm”, which supposedly makes it consistent under different lighting conditions. It can also identify hundreds of objects at once. The image below is a screenshot of PixyMon, an open source debugging program for Pixy.
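Hue-based filtering is easy to sketch, and the sketch shows why it holds up under lighting changes: a bright red pixel and a dim red pixel share the same hue, so a taught signature still matches when the lights dim. The helper below is a rough illustration, not Pixy's firmware, and the 15-degree tolerance is an assumed parameter:

```python
import colorsys

def matches_signature(rgb, target_hue_deg, tolerance_deg=15):
    """Return True if a pixel's hue falls within a tolerance band of a
    taught color signature. Hue is largely invariant to brightness,
    which is what makes hue filtering robust to lighting conditions."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    hue_deg = h * 360.0
    # Hue wraps around at 360 degrees, so compare on the circle.
    diff = abs(hue_deg - target_hue_deg) % 360.0
    diff = min(diff, 360.0 - diff)
    return diff <= tolerance_deg
```

Both (255, 0, 0) and (100, 0, 0) sit at hue 0, so both match a "red" signature even though one is far darker than the other.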
As you’ll see in the video below, Pixy can also track moving objects. That’s because it updates once every 20ms, fast enough to keep up with an object moving at 30mph. You can then gather Pixy’s data through UART serial, SPI, I2C, digital out, or analog out.
Pixy can be taught to “remember” up to seven different objects, but you can expand its memory by using color codes. Color codes are simply stickers or strips of paper with two or more different colors. They increase Pixy’s vocabulary from seven objects to several thousand.
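To get a feel for the numbers: assuming a color code is an ordered run of two to five tags drawn from the seven taught signatures (an assumption about how the codes are counted, not a published spec), the vocabulary does land in the thousands:

```python
# Ordered runs of 2-5 tags, each tag one of 7 taught signatures.
signatures = 7
total = sum(signatures ** length for length in range(2, 6))
print(total)  # 49 + 343 + 2401 + 16807 = 19600
```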
Pledge at least $59 (USD) on Kickstarter to get a Pixy and an Arduino cable as a reward.
What will you build with Pixy? A security camera that texts you when your cat goes out? A color-seeking water bomb? A clown-loving machine? A drone that follows you around? A box of crayons that can tell you what color you picked? A weapon that works only on people wearing red? A LEGO sorter that can tell you which pieces are missing from your collection? A camera that automatically takes pictures of the sunset? A wearable assistant for colorblind people? A ticker that counts which Premier League referee hands out the most yellow cards? A useless machine that won’t turn itself off if you’re wearing the right color? Are the things I’m saying even possible?
Googlerola buys Viewdle, ups Android’s augmented reality and face recognition game
From existing tech like Face Unlock and Google Goggles to patent filings and Project Glass, it’s clear that Google sees augmented reality and image recognition playing a big part in our computing future. It makes sense, then, that Big G subsidiary Motorola has bought Viewdle — a Silicon Valley company that builds face, object, and gesture recognition technology for mobile devices. We don’t know how much MMI paid for Viewdle, but we do know, thanks to a statement obtained by the good folks at TechCrunch, that the two firms “have been collaborating for some time.” So, hopefully Android will reap the benefits (and fix those Face Unlock flaws) in the not-so-distant future.
Googlerola buys Viewdle, ups Android’s augmented reality and face recognition game originally appeared on Engadget on Wed, 03 Oct 2012 21:17:00 EDT.
In the not-too-distant future, technology might let you check out for your purchases without any need to scan tags, enter prices, or even read RFID tags. Thanks to visual recognition technology, items being purchased could be automatically identified just by the way they look.
A trial is underway at a bakery in Tokyo using Brain Corporation’s object recognition technology to automatically ring up items for purchase just by setting them onto a tray. A camera grabs an image of the items, and checks a database to match up the baked goods with their pricing. It works surprisingly well handling subtle variants of the same item – like two different loaves of bread. It’s a cool idea, and seems to work quite well in this particular application.
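Brain Corporation hasn't published its pipeline, but the look-it-up-in-a-price-database idea can be sketched with a stand-in matcher. Everything here (the coarse color-histogram features, the nearest-neighbour lookup, the database format) is an illustrative assumption, not the company's actual method:

```python
import numpy as np

def color_histogram(region, bins=4):
    """Coarse per-channel color histogram, normalized so image size
    doesn't matter. `region` is an (H, W, 3) uint8 array."""
    hist = np.concatenate([
        np.histogram(region[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def ring_up(item_regions, price_db):
    """Match each segmented tray item to its closest known product and
    total the bill. `price_db` maps product names to a
    (reference_histogram, price) pair."""
    bill = []
    for region in item_regions:
        hist = color_histogram(region)
        # Nearest neighbour by L1 distance between histograms.
        name = min(price_db,
                   key=lambda n: np.abs(price_db[n][0] - hist).sum())
        bill.append((name, price_db[name][1]))
    return bill, sum(price for _, price in bill)
```

With a database of a dark loaf and a pale roll, a tray image whose items are roughly those colors rings up as those products, even if the lighting shifts the pixel values a little.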
While I like the general concept, I could see problems with the system if you start dealing with multiple items that look the same on the outside, but have different insides (e.g. different memory configurations of an iPhone, or in this case a cherry croissant vs. a chocolate one). Still, for items which can be identified by color, size, and shape, it’s definitely got potential.
[via DigInfo TV]