Fujifilm FinePix F600EXR packs photo navigation, augmented reality in a 16 MP digicam

GPS, a 16 megapixel CMOS sensor, 15x optical zoom — we’ve seen it all before. But a feature that displays places of interest on the camera’s 3-inch LCD? Well, that sounds a bit like augmented reality (AR)! The Fujifilm FinePix F600EXR’s new Landmark Navigator mode does exactly that, packing one million pre-loaded locations from around the world. Looking to find your way from Rome’s Trevi Fountain to the Spanish Steps? The compact cam will point the way, including other stops along your route. You can also add your own locations, or launch Photo Navigation, which lets you easily return to places you’ve photographed — or plot them on Google Maps once you get home. There’s also 1080p movie capture, an ISO 12,800 high-sensitivity mode (that you’ll probably never want to use), sensor-shift image stabilization, and a 24-360mm lens with an f/3.5 maximum aperture. But as you may have guessed, we’re most excited about those AR features, so jump past the break for the full scoop.
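For the curious, the math behind that fountain-to-steps guidance is garden-variety great-circle geometry: given two geotagged points, compute the distance and compass bearing between them. Here’s a minimal Python sketch — the coordinates are approximate and this is only an illustration, not Fujifilm’s actual routine:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two geotagged points, in kilometres.
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    # Initial compass bearing from point 1 to point 2 (0 degrees = north).
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360

# Trevi Fountain -> Spanish Steps (approximate coordinates)
trevi = (41.9009, 12.4833)
steps = (41.9058, 12.4823)
print(round(haversine_km(*trevi, *steps), 2), "km")
print(round(bearing_deg(*trevi, *steps)), "deg")
```

A camera (or phone) with a GPS fix and a compass only needs these two numbers to draw an arrow on screen.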


Fujifilm FinePix F600EXR packs photo navigation, augmented reality in a 16 MP digicam originally appeared on Engadget on Thu, 11 Aug 2011 19:48:00 EDT. Please see our terms for use of feeds.

Permalink  |  Source: Fujifilm

Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted

Lookin’ to get your Grown Nerd on? Look no further. We just sat through 1.5 hours of high-brow technobabble here at SIGGRAPH 2011, where a gaggle of gurus with IQs far, far higher than ours explained in detail what the future of 3D face scanning would hold. Scientists from ETH Zürich, Texas A&M, Technion-Israel Institute of Technology and Carnegie Mellon University, as well as a variety of folks from Microsoft Research and Disney Research labs, were on hand, with each subset revealing a slightly different technique for solving an all-too-similar problem: painfully accurate 3D face tracking. Haoda Huang et al. revealed a highly technical new method that combines marker-based motion capture with 3D scanning in an effort to overcome drift, while Thabo Beeler et al. took a drastically different approach.

Those folks relied on a markerless system that used a well-lit, multi-camera system to overcome occlusion, with anchor frames acting as staples in the success of its capture abilities. J. Rafael Tena et al. developed “a method that not only translates the motions of actors into a three-dimensional face model, but also subdivides it into facial regions that enable animators to intuitively create the poses they need.” Naturally, this one’s most useful for animators and designers, but the first system detailed is obviously gunning to work on lower-cost devices — Microsoft’s Kinect was specifically mentioned, and it doesn’t take a seasoned imagination to see how in-home facial scanning could lead to far more interactive games and augmented reality sessions. The full shebang can be grokked by diving into the links below, but we’d advise you to set aside a few hours (and rest up beforehand).


Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted originally appeared on Engadget on Wed, 10 Aug 2011 21:46:00 EDT. Please see our terms for use of feeds.

Permalink  |  Via: Physorg  |  Source: Carnegie Mellon University, Microsoft Research

Microsoft’s KinectFusion research project offers real-time 3D reconstruction, wild AR possibilities

It’s a little shocking to think about the impact that Microsoft’s Kinect camera has had on the gaming industry at large, let alone the 3D modeling industry. Here at SIGGRAPH 2011, we attended a KinectFusion research talk hosted by Microsoft, where a fascinating new look at real-time 3D reconstruction was detailed. To better appreciate what’s happening here, we’d actually encourage you to hop back and have a gander at our hands-on with PrimeSense’s raw motion sensing hardware from GDC 2010 — for those who’ve forgotten, that very hardware was finally outed as the guts behind what consumers simply know as “Kinect.” The breakthrough wasn’t in how it allowed gamers to control common software titles sans a joystick — the breakthrough was the price. The Kinect took 3D sensing to the mainstream, and moreover, allowed researchers to pick up a commodity product and go absolutely nuts. Turns out, that’s precisely what a smattering of highly intelligent blokes in the UK have done, and they’ve built a new method for reconstructing 3D scenes (read: real-life) in real-time by using a simple Xbox 360 peripheral.

The actual technobabble ran deep — not shocking given the academic nature of the conference — but the demos shown were nothing short of jaw-dropping. There’s no question that this methodology could be used to spark the next generation of gaming interaction and augmented reality, taking a user’s surroundings and making them a live part of the experience. Moreover, game design could be significantly impacted, with live scenes acted out and stored in real-time rather than built frame by frame within an application. According to the presenter, the tech that’s been created here can “extract surface geometry in real-time,” right down to the millimeter level. Of course, the Kinect’s camera and abilities are relatively limited when it comes to resolution; you won’t be building 1080p scenes with a $150 camera, but as CPUs and GPUs become more powerful, there’s nothing stopping this from scaling with the future. Have a peek at the links below if you’re interested in diving deeper — don’t be shocked if you can’t find the exit, though.
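To give a flavor of how a stream of noisy depth frames becomes clean surface geometry, here’s a toy Python sketch of the truncated signed distance function (TSDF) fusion idea used in KinectFusion-style pipelines — a one-dimensional, single-ray version with made-up numbers, nothing like the actual GPU implementation:

```python
TRUNC = 0.03  # truncation band, metres

def tsdf(depth, voxel_z, trunc=TRUNC):
    # Signed distance from a voxel to the observed surface, clamped to [-1, 1].
    return max(-1.0, min(1.0, (depth - voxel_z) / trunc))

def fuse(voxels, depth):
    # Fold one depth measurement into each voxel's running (value, weight) pair.
    for v in voxels:
        d = tsdf(depth, v["z"])
        if d <= -1.0:
            continue  # voxel is far behind the surface; leave it untouched
        v["value"] = (v["value"] * v["weight"] + d) / (v["weight"] + 1)
        v["weight"] += 1

# Voxels spaced 1 cm apart along a single camera ray, 0.95 m to 1.05 m out.
voxels = [{"z": z / 100, "value": 0.0, "weight": 0} for z in range(95, 106)]
for noisy_depth in (1.00, 1.01, 0.99):  # three noisy frames of one surface
    fuse(voxels, noisy_depth)
# The surface lies where the fused values cross zero, i.e. near z = 1.00 m.
```

The running weighted average is what smooths sensor noise across frames; the real system does this for millions of voxels per frame on the GPU, then extracts the zero-crossing surface.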

Microsoft’s KinectFusion research project offers real-time 3D reconstruction, wild AR possibilities originally appeared on Engadget on Tue, 09 Aug 2011 14:48:00 EDT. Please see our terms for use of feeds.

Permalink  |  Via: Developer Fusion  |  Source: Microsoft Research [PDF]

OutRun AR project lets you game and drive at the same time, makes us drool

Cool game, or coolest game ever? That’s the question we were asking ourselves when we first came across Garnet Hertz’s augmented reality-based OutRun project — a concept car that weds Sega’s classic driving game with an electric golf cart, allowing players to navigate their way around real-life courses using only arcade controls. Hertz, an informatics researcher at the University of California, Irvine, has since brought his idea to fruition, after outfitting the system with cameras and customized software that can “look” in front of the car to automatically reproduce the route on the game cabinet’s screen. The map is displayed in the same 8-bit rendering you’d see on the original OutRun, with perspectives changing proportionally to shifts in steering. The cart maxes out at only 13 mph, though speed isn’t really the idea; Hertz and his colleagues hope their technology can be used to develop game-based therapies for disabled users, or to create similarly AR-based wheelchairs. Scoot past the break to see a video of the car in action, and let your dreams converge.

[Thanks, Stagueve]


OutRun AR project lets you game and drive at the same time, makes us drool originally appeared on Engadget on Wed, 03 Aug 2011 09:42:00 EDT. Please see our terms for use of feeds.

Permalink  |  Via: Nowhere Else (Translated)  |  Source: OutRun

Carnegie Mellon researchers use photo-tagging to violate privacy, prove nothing social is sacred

Some people never forget a face and the same, it seems, can be said for the internet. With some off-the-shelf facial recognition software, a connection to the cloud and access to social networking data, Carnegie Mellon University researchers have proved tagging can be the everyman’s gateway to privacy violation. Using a specially designed, AR-capable mobile app, Prof. Alessandro Acquisti and his team conducted three real-world trials of the personal info mining tech, successfully identifying pseudonymous online daters and campus-strolling college students via Facebook. In some cases, the application was even able to dredge up the students’ social security digits and personal interests — from their MySpace pages, we assume. Sure, the study’s findings could have you running for the off-the-grid hills (not to mention the plastic surgeon), but it’s probably best you just pay careful attention to that digital second life. Full PR after the break.


Carnegie Mellon researchers use photo-tagging to violate privacy, prove nothing social is sacred originally appeared on Engadget on Mon, 01 Aug 2011 19:07:00 EDT. Please see our terms for use of feeds.

Permalink  |  Via: Forbes blogs

Microsoft licenses GeoVector’s augmented reality search for local guidance (video)

After the ho-hum AR demonstration of Windows Phone Mango, Microsoft appears to be stepping up its game by licensing a mature set of technologies from GeoVector (a company previously known for its defunct World Surfer application). While the details remain elusive, Ballmer’s crew was granted a multi-year, non-exclusive right to use and abuse the pointing-based local search and augmented reality elements of GeoVector’s portfolio — surely capable of bringing Local Scout to the next level. While much of the technology relies on GPS and a compass for direction-based discovery, the licensor also holds intellectual property for object recognition (à la Google Goggles), although it’s unclear whether this element falls within the agreement. Of course, Microsoft could have turned to Nokia’s Live View AR for many of the same tools, but that would have been far too obvious. Just beyond the break, you’ll find the full PR along with an (admittedly dated) video of GeoVector’s technology.
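Pointing-based search of this sort boils down to comparing the device’s compass heading against the bearings of nearby points of interest, and keeping whatever falls inside the camera’s field of view. A hypothetical Python sketch — the POI names and bearings are invented for illustration:

```python
def angle_diff(a, b):
    # Smallest absolute difference between two compass bearings, in degrees.
    return abs((a - b + 180) % 360 - 180)

def pois_in_view(heading, pois, fov=60):
    # 'pois' maps name -> bearing in degrees from north (as precomputed
    # from the user's GPS fix and each POI's coordinates).
    return [name for name, bearing in pois.items()
            if angle_diff(heading, bearing) <= fov / 2]

pois = {"cafe": 10, "museum": 95, "station": 350, "park": 180}
print(pois_in_view(20, pois))  # -> ['cafe', 'station']
```

The modular-arithmetic trick in `angle_diff` handles the wrap-around at north, which is why the station at 350 degrees still shows up when you face 20 degrees.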


Microsoft licenses GeoVector’s augmented reality search for local guidance (video) originally appeared on Engadget on Thu, 14 Jul 2011 11:13:00 EDT. Please see our terms for use of feeds.

Permalink  |  Via: SlashGear

Nokia’s Live View AR app reveals what’s nearby, how to socially ostracize yourself in public

Augmented reality junkie, Ovi Maps fan and S^3 fanboy? Nokia’s got you covered with its Live View AR app. The most recent hatchling from Espoo’s Beta Labs program brings selectable POI overlays to the camera inputs of an N8, C7 or E7. The Finnish firm also highlights the release’s tight integration with Ovi Maps, with deep hooks for turn-by-turn navigation and sharing — allowing you to spam friends as to your future whereabouts via SMS. Interest piqued? A video demoing the application and an interesting way to calibrate a compass awaits you beyond the fold.


Nokia’s Live View AR app reveals what’s nearby, how to socially ostracize yourself in public originally appeared on Engadget on Wed, 13 Jul 2011 03:17:00 EDT. Please see our terms for use of feeds.

Permalink  |  Via: All About Symbian  |  Source: Nokia Beta Labs (1), (2)

Social x-ray glasses can decode emotions, make your blind dates less awkward

You may consider yourself a world-class liar, but a new pair of “social x-ray” glasses could soon expose you for the fraud you really are. Originally designed for people suffering from autism, these specs use a rice grain-sized camera to track 24 “feature points” on a person’s face — movements that convey confusion, agreement and concentration, among other feelings. Once recognized, these signals are analyzed by software, compared against a database of known expressions and then relayed to users via an attached headphone. If their date starts to feel uncomfortable, a blinking red light lets them know that it’s time to shut up. Rosalind Picard, an electrical engineer who developed the prototype with Rana el Kaliouby, acknowledged that her algorithm still needs some fine tuning, but told New Scientist that the glasses have already proved popular with autistic users, who often have difficulty deciphering others’ body language. No word yet on when these social specs could hit the market, but they’ll probably make us even more anti-social once they do.
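That matching step — comparing measured feature points against a database of known expressions — can, in its simplest form, be a nearest-neighbor lookup. A toy Python sketch with invented feature vectors and labels (the real system’s features and classifier are surely more sophisticated):

```python
import math

def nearest_expression(features, database):
    # Return the label whose stored feature vector is closest (Euclidean)
    # to the observed one.
    return min(database, key=lambda label: math.dist(features, database[label]))

# Hypothetical database: expression label -> normalised feature vector
# (e.g. brow, eye and mouth measurements).
database = {
    "agreement":     (0.9, 0.1, 0.2),
    "confusion":     (0.2, 0.8, 0.3),
    "concentration": (0.1, 0.2, 0.9),
}
observed = (0.25, 0.75, 0.35)
print(nearest_expression(observed, database))  # -> confusion
```

Swap the blinking red light in for the `print` and you have the gist of the feedback loop described above.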

Social x-ray glasses can decode emotions, make your blind dates less awkward originally appeared on Engadget on Tue, 12 Jul 2011 04:18:00 EDT. Please see our terms for use of feeds.

Permalink  |  Via: CNET  |  Source: New Scientist

Kinect app promises you’ll wear flowery skirts, and you’ll like it (video)

Don’t be shy now: which of you doesn’t love raiding your mother’s closet and trying on her paisley dresses and velour tracksuits? That’s more or less the idea behind Virtual Dressing Room, a Kinect program that taps into the clandestine thrill of sneaking into other people’s boudoirs. Unlike some other shopping hacks we’ve seen, the app goes beyond just piling on 2D pieces, using 3D models so that the items mold to your limbs, with the shadows and creases in the virtual fabric changing as you preen for the camera. That all comes courtesy of a special physics engine, while the app itself was written in C# along with Microsoft’s XNA tools. Arbuzz, the group that dreamed this up, says the project’s still a work in progress, though we can see this, too, being used to relieve those of us who are allergic to shopping malls. Until then, you’ll just have to settle for watching some other guy work a knee-length skirt.


Kinect app promises you’ll wear flowery skirts, and you’ll like it (video) originally appeared on Engadget on Fri, 08 Jul 2011 23:46:00 EDT. Please see our terms for use of feeds.

Permalink  |  Via: Hack A Day  |  Source: Arbuzz

Apple seeks to spruce up the real world with interactive augmented reality, has the patent apps to prove it

When we go somewhere new, we wish we could spend more time taking in the sights and less time looking at our phone for directions and info about our surroundings. Apple’s well aware of this conundrum, and has filed a couple of patent applications to let you ogle your environment while telling you where to go and what you’re seeing. One application covers a method for combining augmented reality (AR) information and real-time video while allowing users to interact with the images on screen — so you can shoot a vid of a city skyline with your iPhone, touch a building where you want to go, and let it show you the way there. The second patent application is for a device with an LCD display capable of creating a transparent window, where the opacity of the screen’s pixels is changed by varying the voltage levels driving them. Such a display could overlay interactive info about what you see through the window, so you can actually look at the Mona Lisa while reading up on her mysterious grin. Of course, these are just patent applications, so we probably won’t be seeing any AR-optimized iDevices anytime soon (if ever), but we can dream, right?

Apple seeks to spruce up the real world with interactive augmented reality, has the patent apps to prove it originally appeared on Engadget on Fri, 08 Jul 2011 12:08:00 EDT. Please see our terms for use of feeds.

Permalink  |  Via: Apple Insider  |  Source: USPTO (1), (2)