Sphero the smartphone controlled ball gets ready to roll out, we go hands-on (video)

The plucky little white ball that first rolled its way into our hearts back at CES is back, and now it’s getting ready to continue its journey onto store shelves. Sphero is a little plastic, LED-lit orb that can be controlled using a number of smartphone applications. The toy’s makers like to refer to it as a “real-world Wii,” letting users control it either via a phone’s touchscreen or with gestures, using the handset’s accelerometer. The ball itself is palm-sized — it feels like a standard toy ball until you give it a bit of a shake and feel its insides jiggle.

At present, the company is showcasing three apps — one for standard driving in real-time, one that lets the user draw paths with their fingers and a third “golf” app that offers the most Wii-like interaction, with the user swinging their smartphone like a club to move the ball. The apps are straightforward and let you change Sphero’s color. All in all, the company seems to have come a long way since first showcasing earlier prototypes back in January. You can expect to see Sphero start shipping before the end of the year, for $129 a pop. It will be compatible with both iOS and Android. We had fun with the thing, but who knows how long it will take to get sick of it. Thankfully, it will launch with three to six apps, with more coming soon. Hands-on video after the break.
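
Sphero's control protocol wasn't public at the time, so everything below (function name, tilt range, speed scale) is purely illustrative. It sketches the tilt-to-drive mapping that accelerometer steering implies: the direction of the phone's tilt picks a heading, and the magnitude of the tilt picks a speed.

```python
import math

def tilt_to_drive(ax, ay, max_tilt=4.9, max_speed=255):
    """Map phone accelerometer readings (m/s^2, gravity component in the
    screen plane) to a (heading_degrees, speed) drive command -- a common
    scheme for tilt-steered toys.  Heading 0 is "forward"."""
    # Direction of tilt becomes the drive heading (0-360 degrees).
    heading = (math.degrees(math.atan2(ax, ay)) + 360) % 360
    # How far the phone is tilted becomes the speed, capped at max_tilt.
    magnitude = min(math.hypot(ax, ay), max_tilt)
    speed = int(round(max_speed * magnitude / max_tilt))
    return heading, speed
```

A full forward tilt would drive the ball at top speed straight ahead; a flat phone stops it.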


Sphero the smartphone controlled ball gets ready to roll out, we go hands-on (video) originally appeared on Engadget on Wed, 14 Sep 2011 19:26:00 EDT. Please see our terms for use of feeds.


Touch Vision Interface employs AR to control screens from afar

We’re not exactly lacking in ways to interact with a screen from afar, but the folks at Teehan+Lax have now put an augmented reality-enhanced spin on things with their so-called Touch Vision Interface. While the “how” behind it is no doubt complicated (and being kept largely under wraps at the moment), the end result is fairly simple: you just point your smartphone at a screen (or two) and start manipulating it from the point of view provided by the phone’s camera. Of course, it’s all still in the early stages right now, but the group sees a wide range of applications for the system — even including large outdoor billboards. Check it out in action in the video after the break.


Touch Vision Interface employs AR to control screens from afar originally appeared on Engadget on Sun, 11 Sep 2011 08:39:00 EDT. Please see our terms for use of feeds.

Permalink  |  Source: Teehan+Lax

AR.Drone control finally comes to Android, lazy quadrocopter enthusiasts rejoice

AR.FreeFlight

The folks at Parrot have been promising us an Android app for the AR.Drone since pretty much day one. Well, it certainly took long enough (it’s been over a year since the app was demoed at Google I/O), but pre-made quadrocopter fans no longer have to reach for unofficial solutions to pilot their unmanned vehicle with their Droids. Sadly, games for the flying augmented reality platform are still MIA, but at least you can fire up AR.FreeFlight and have the $299 UAV tear around your block and annoy your neighbors. But it shouldn’t take long for someone to whip up something fun with the SDK. Check out the video and PR after the break, and hit up the more coverage link to download the free app now.
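
For the SDK-curious: the AR.Drone is piloted over plain UDP text commands (the "AT" protocol described in Parrot's developer guide), which is what makes third-party clients like the unofficial Android apps possible. The sketch below builds the takeoff/land command; the constants follow the published protocol, but verify them against the SDK docs before flying anything.

```python
import socket

ARDRONE_IP, AT_PORT = "192.168.1.1", 5556  # drone's default ad-hoc address

def at_ref(seq, takeoff):
    """Build an AT*REF command string (takeoff/land) from the AR.Drone's
    UDP text protocol.  Bit 9 of the argument requests takeoff; the base
    value carries always-set reference bits.  seq is a running counter."""
    base = 0x11540000
    arg = base | ((1 << 9) if takeoff else 0)
    return "AT*REF=%d,%d\r" % (seq, arg)

def send_at(sock, cmd):
    """Fire one AT command at the drone over UDP."""
    sock.sendto(cmd.encode("ascii"), (ARDRONE_IP, AT_PORT))
```

In use, a client opens a UDP socket and streams `at_ref` (and steering `AT*PCMD`) packets several times a second, incrementing `seq` each time.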


AR.Drone control finally comes to Android, lazy quadrocopter enthusiasts rejoice originally appeared on Engadget on Fri, 02 Sep 2011 16:49:00 EDT. Please see our terms for use of feeds.


Sony’s AR tool lets you put big screens in small apartments (video)

It may not be as slick as Panasonic’s dream-TV AR app, but at least Sony’s keeping up with the competition. Live from the company’s UK outfit is an online AR tool enabling you, dear reader, to visualize all sorts of boob tubes you can (and can’t) afford. After printing, affixing and photographing a marker, prospective buyers can get a better sense of which sets fit in their humble abodes. Interested in giving it a go? Mosey on past the break for PR and a video, and then hop beyond the source link to begin your adventure.

[Thanks, Matt]


Sony’s AR tool lets you put big screens in small apartments (video) originally appeared on Engadget on Thu, 25 Aug 2011 09:14:00 EDT. Please see our terms for use of feeds.

Via: Pocket Lint  |  Source: Sony UK

Fujifilm FinePix F600EXR packs photo navigation, augmented reality in a 16 MP digicam

GPS, a 16 megapixel CMOS sensor, 15x optical zoom — we’ve seen it all before. But a feature that displays places of interest on the camera’s 3-inch LCD? Well, that sounds a bit like augmented reality (AR)! The Fujifilm FinePix F600EXR’s new Landmark Navigator mode does exactly that, packing one million pre-loaded locations from around the world. Looking to find your way from Rome’s Trevi Fountain to the Spanish Steps? The compact cam will point the way, including other stops along your route. You can also add your own locations, or launch Photo Navigation, which lets you easily return to places you’ve photographed — or plot them on Google Maps once you get home. There’s also 1080p movie capture, an ISO 12,800 high-sensitivity mode (that you’ll probably never want to use), sensor-shift image stabilization, and a 24-360mm lens with an f/3.5 maximum aperture. But as you may have guessed, we’re most excited about those AR features, so jump past the break for the full scoop.
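
Fujifilm hasn't detailed how Landmark Navigator works internally, but a Photo Navigation-style "point me back to it" feature boils down to a great-circle distance lookup over a lat/lon database. A minimal sketch, assuming a simple in-memory table of landmarks:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (haversine)."""
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def nearest_landmark(here, landmarks):
    """Return (name, distance_m) of the closest pre-loaded landmark.
    here: (lat, lon); landmarks: {name: (lat, lon)}."""
    return min(((name, haversine_m(*here, lat, lon))
                for name, (lat, lon) in landmarks.items()),
               key=lambda t: t[1])
```

From the Trevi Fountain, this would report the Spanish Steps as roughly half a kilometre away, which is the sort of readout the camera's LCD overlay needs.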


Fujifilm FinePix F600EXR packs photo navigation, augmented reality in a 16 MP digicam originally appeared on Engadget on Thu, 11 Aug 2011 19:48:00 EDT. Please see our terms for use of feeds.

Permalink  |  Source: Fujifilm

Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted

Lookin’ to get your Grown Nerd on? Look no further. We just sat through 1.5 hours of high-brow technobabble here at SIGGRAPH 2011, where a gaggle of gurus with IQs far, far higher than ours explained in detail what the future of 3D face scanning would hold. Scientists from ETH Zürich, Texas A&M, Technion-Israel Institute of Technology, Carnegie Mellon University as well as a variety of folks from Microsoft Research and Disney Research labs were on hand, with each subset revealing a slightly different technique to solving an all-too-similar problem: painfully accurate 3D face tracking. Haoda Huang et al. revealed a highly technical new method that involved the combination of marker-based motion capture with 3D scanning in an effort to overcome drift, while Thabo Beeler et al. took a drastically different approach.

Those folks relied on a markerless system that used a well-lit, multi-camera system to overcome occlusion, with anchor frames acting as staples in the success of its capture abilities. J. Rafael Tena et al. developed “a method that not only translates the motions of actors into a three-dimensional face model, but also subdivides it into facial regions that enable animators to intuitively create the poses they need.” Naturally, this one’s most useful for animators and designers, but the first system detailed is obviously gunning to work on lower-cost devices — Microsoft’s Kinect was specifically mentioned, and it doesn’t take a seasoned imagination to see how in-home facial scanning could lead to far more interactive games and augmented reality sessions. The full shebang can be grokked by diving into the links below, but we’d advise you to set aside a few hours (and rest up beforehand).
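
The "translate actor motion into a three-dimensional face model" step usually means fitting blendshape weights: solving for the mix of stored face shapes that best matches a captured scan. This toy least-squares sketch (not any of the papers' actual algorithms, which are far more involved) shows the core fit:

```python
import numpy as np

def fit_blendshape_weights(neutral, deltas, captured):
    """Least-squares fit of blendshape weights w so that
    neutral + deltas @ w approximates a captured 3D face scan.
    neutral: (3N,) rest-pose vertex coordinates,
    deltas: (3N, K) per-blendshape offsets from the rest pose,
    captured: (3N,) scanned vertex positions."""
    w, *_ = np.linalg.lstsq(deltas, captured - neutral, rcond=None)
    # Blend weights are conventionally kept in [0, 1].
    return np.clip(w, 0.0, 1.0)
```

Region-based systems like the one described above run a fit of this flavor per facial region, so animators can tweak the brow without disturbing the mouth.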


Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted originally appeared on Engadget on Wed, 10 Aug 2011 21:46:00 EDT. Please see our terms for use of feeds.

Via: Physorg  |  Source: Carnegie Mellon University, Microsoft Research

Microsoft’s KinectFusion research project offers real-time 3D reconstruction, wild AR possibilities

It’s a little shocking to think about the impact that Microsoft’s Kinect camera has had on the gaming industry at large, let alone the 3D modeling industry. Here at SIGGRAPH 2011, we attended a KinectFusion research talk hosted by Microsoft, where a fascinating new look at real-time 3D reconstruction was detailed. To better appreciate what’s happening here, we’d actually encourage you to hop back and have a gander at our hands-on with PrimeSense’s raw motion sensing hardware from GDC 2010 — for those who’ve forgotten, that very hardware was finally outed as the guts behind what consumers simply know as “Kinect.” The breakthrough wasn’t in how it allowed gamers to control common software titles sans a joystick — the breakthrough was the price. The Kinect took 3D sensing to the mainstream, and moreover, allowed researchers to pick up a commodity product and go absolutely nuts. Turns out, that’s precisely what a smattering of highly intelligent blokes in the UK have done, and they’ve built a new method for reconstructing 3D scenes (read: real-life) in real-time by using a simple Xbox 360 peripheral.

The actual technobabble ran deep — not shocking given the academic nature of the conference — but the demos shown were nothing short of jaw-dropping. There’s no question that this methodology could be used to spark the next generation of gaming interaction and augmented reality, taking a user’s surroundings and making them a live part of the experience. Moreover, game design could be significantly impacted, with live scenes able to be acted out and stored in real-time rather than having to build something frame by frame within an application. According to the presenter, the tech that’s been created here can “extract surface geometry in real-time,” right down to the millimeter level. Of course, the Kinect’s camera and abilities are relatively limited when it comes to resolution; you won’t be building 1080p scenes with a $150 camera, but as CPUs and GPUs become more powerful, there’s nothing stopping this from scaling with the future. Have a peek at the links below if you’re interested in diving deeper — don’t be shocked if you can’t find the exit, though.
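
For the curious, the published KinectFusion approach fuses each incoming depth frame into a truncated signed distance function (TSDF) volume, and the surface falls out as the zero crossing. Here's a deliberately tiny sketch of that update rule for voxels along a single camera ray (the real system does this per-voxel on the GPU for a full 3D grid):

```python
import numpy as np

def tsdf_update(tsdf, weight, depth_along_ray, voxel_depths, trunc=0.03):
    """One KinectFusion-style TSDF integration step for voxels along a
    camera ray: fuse a new depth measurement into the running truncated
    signed-distance average.  tsdf/weight: per-voxel running state;
    voxel_depths: each voxel's distance from the camera along the ray."""
    sdf = depth_along_ray - voxel_depths        # positive in front of surface
    valid = sdf > -trunc                        # skip voxels far behind it
    d = np.clip(sdf, -trunc, trunc) / trunc     # truncate and normalise
    new_w = weight + valid                      # simple per-voxel counter
    fused = np.where(valid, (tsdf * weight + d) / np.maximum(new_w, 1), tsdf)
    return fused, new_w
```

Averaging truncated distances this way is what smooths noisy Kinect depth frames into the clean, millimetre-ish surfaces the demos showed.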

Microsoft’s KinectFusion research project offers real-time 3D reconstruction, wild AR possibilities originally appeared on Engadget on Tue, 09 Aug 2011 14:48:00 EDT. Please see our terms for use of feeds.

Via: Developer Fusion  |  Source: Microsoft Research [PDF]

OutRun AR project lets you game and drive at the same time, makes us drool

Cool game, or coolest game ever? That’s the question we were asking ourselves when we first came across Garnet Hertz’s augmented reality-based OutRun project — a concept car that weds Sega’s classic driving game with an electric golf cart, allowing players to navigate their way around real-life courses using only arcade consoles. Hertz, an informatics researcher at the University of California Irvine, has since brought his idea to fruition, after outfitting the system with cameras and customized software that can “look” in front of the car to automatically reproduce the route on the game cabinet’s screen. The map is displayed in the same 8-bit rendering you’d see on the original OutRun, with perspectives changing proportionally to shifts in steering. The cart maxes out at only 13 mph, though speed isn’t really the idea; Hertz and his colleagues hope their technology can be used to develop game-based therapies for disabled users, or to create similarly AR-based wheelchairs. Scoot past the break to see a video of the car in action, and let your dreams converge.

[Thanks, Stagueve]


OutRun AR project lets you game and drive at the same time, makes us drool originally appeared on Engadget on Wed, 03 Aug 2011 09:42:00 EDT. Please see our terms for use of feeds.

Via: Nowhere Else (Translated)  |  Source: OutRun

Carnegie Mellon researchers use photo-tagging to violate privacy, prove nothing social is sacred

Some people never forget a face and the same, it seems, can be said for the internet. With some off-the-shelf facial recognition software, a connection to the cloud and access to social networking data, Carnegie Mellon University researchers have proved tagging can be the everyman’s gateway to privacy violation. Using a specially-designed, AR-capable mobile app, Prof. Alessandro Acquisti and his team conducted three real-world trials of the personal info mining tech, successfully identifying pseudonymous online daters and campus-strolling college students via Facebook. In some cases, the application was even able to dredge up the students’ social security digits and personal interests — from their MySpace pages, we assume. Sure, the study’s findings could have you running for the off-the-grid hills (not to mention the plastic surgeon), but it’s probably best you just pay careful attention to that digital second life. Full PR after the break.
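
The team hasn't published its pipeline here, but the linking step in any such system reduces to nearest-neighbor matching of face descriptors against a gallery of tagged profile photos. A minimal sketch, assuming descriptors already come from an off-the-shelf recognizer:

```python
import numpy as np

def identify(query_vec, tagged):
    """Match one face descriptor against a gallery of tagged profile
    photos by cosine similarity.  tagged: {name: descriptor vector}.
    Returns (best_name, best_score); a real system would also apply a
    rejection threshold to avoid false matches."""
    q = query_vec / np.linalg.norm(query_vec)
    best_name, best_score = None, -1.0
    for name, vec in tagged.items():
        score = float(q @ (vec / np.linalg.norm(vec)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```

The unsettling part of the CMU result isn't this arithmetic, which is routine, but how well it works when the gallery is everyone's public tagged photos.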


Carnegie Mellon researchers use photo-tagging to violate privacy, prove nothing social is sacred originally appeared on Engadget on Mon, 01 Aug 2011 19:07:00 EDT. Please see our terms for use of feeds.

Via: Forbes blogs

Microsoft licenses GeoVector’s augmented reality search for local guidance (video)

After the ho-hum AR demonstration of Windows Phone Mango, Microsoft appears to be stepping up its game by licensing a mature set of technologies from GeoVector (a company previously known for its defunct World Surfer application). While the details remain elusive, Ballmer’s crew was granted a multi-year, non-exclusive right to use and abuse the pointing-based local search and augmented reality elements of GeoVector’s portfolio — surely capable of bringing Local Scout to the next level. While much of the technology relies on GPS and a compass for directional-based discovery, the licensor also holds intellectual property for object recognition (à la Google Goggles), although it’s unclear whether this element falls within the agreement. Of course, Microsoft could have turned to Nokia’s Live View AR for many of the same tools, but that would have been far too obvious. Just beyond the break, you’ll find the full PR along with an (admittedly dated) video of GeoVector’s technology.
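
GeoVector's patents aren't public detail here, but "pointing-based" search with GPS and a compass reduces to a simple geometric test: compute the bearing from your position to each nearby point of interest and keep the ones inside the cone the phone is aimed at. A sketch under that assumption:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees
    clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def pointed_at(here, heading, pois, fov=30.0):
    """Return names of POIs whose bearing lies within +/- fov/2 degrees
    of the compass heading -- the core of pointing-based local search.
    here: (lat, lon); pois: {name: (lat, lon)}."""
    hits = []
    for name, (lat, lon) in pois.items():
        # Signed angular difference, wrapped into (-180, 180].
        diff = (bearing_deg(*here, lat, lon) - heading + 180) % 360 - 180
        if abs(diff) <= fov / 2:
            hits.append(name)
    return hits
```

Point the phone east and only the POIs to your east survive the filter; object recognition (the Goggles-style part of the portfolio) would be a separate, camera-based pipeline.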


Microsoft licenses GeoVector’s augmented reality search for local guidance (video) originally appeared on Engadget on Thu, 14 Jul 2011 11:13:00 EDT. Please see our terms for use of feeds.

Via: SlashGear