SETI comes back from the financial dead, gets a check from Jodie Foster

Roswell devotees, dry those tears — the search for our alien overlord frenemies is back on. Four months after going into financial “hibernation,” SETI’s Allen Telescope Array has been temporarily resuscitated thanks to an infusion of publicly raised funds from the SETIStars program, and Ms. Jodie Foster. The web campaign for those-who-believe raised over $200,000 in just 45 days, enough cash to get the Paul Allen-funded dishes scanning the skies for at least five more months. Tom Pierson, the institute’s CEO, is hoping to secure long-term funding for the project from the U.S. Air Force, which could use the array during the daytime “to track orbital objects that otherwise might pose a threat to the International Space Station and other satellites.” However Pierson manages to keep the fleet of skyward-facing ears afloat, one thing’s for sure — the truth is out there, and tracking it down is a hustle.

SETI comes back from the financial dead, gets a check from Jodie Foster originally appeared on Engadget on Thu, 11 Aug 2011 23:47:00 EDT. Please see our terms for use of feeds.

Via: Forbes | Source: Cosmic Log

Microsoft Surface-controlled robots to boldly go where rescuers have gone before (video)

Ready to get hands-on in the danger zone — from afar? That’s precisely what an enterprising team of University of Massachusetts Lowell researchers is working to achieve with a little Redmond-supplied assistance. The Robotics Lab project, dubbed the Dynamically Resizing Ergonomic and Multi-touch (DREAM) Controller, makes use of Microsoft’s Surface and Robotics Developer Studio to deploy and coordinate gesture-controlled search-and-rescue bots for potentially hazardous emergency response situations. Developed by Prof. Holly Yanco and Mark Micire, the tech’s Natural User Interface maps a virtual joystick to a user’s fingertips, delegating movement control to one hand and vision to the other — much like an Xbox controller. The project’s been under development for some time, having already aided rescue efforts during Hurricane Katrina, and with future refinements, could sufficiently lower the element of risk for first responders. Head past the break for a video demonstration of this life-saving research.
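The virtual-joystick mapping is easy to picture in miniature. Here's a toy sketch, assuming nothing but a list of (x, y) fingertip coordinates from the touch surface; the function names, radius and sample points are our illustrative inventions, not UML's code:

```python
# Toy sketch of the DREAM Controller idea: a virtual joystick is drawn
# wherever the hand lands, so the control surface "resizes" to the user.

def calibrate(touch_points):
    """Center the virtual joystick on the centroid of the initial touches."""
    n = len(touch_points)
    cx = sum(x for x, _ in touch_points) / n
    cy = sum(y for _, y in touch_points) / n
    return (cx, cy)

def deflection(center, thumb, radius=100.0):
    """Map the thumb's offset from the joystick center to axes in [-1, 1]."""
    dx = (thumb[0] - center[0]) / radius
    dy = (thumb[1] - center[1]) / radius
    # Clamp to the unit square, as a physical stick would bottom out.
    clamp = lambda v: max(-1.0, min(1.0, v))
    return (clamp(dx), clamp(dy))

# One hand drives movement, the other drives the camera -- same math twice.
center = calibrate([(400, 300), (450, 280), (500, 300), (540, 340), (410, 360)])
axes = deflection(center, (460, 416))
```

Because the center is recomputed whenever the hand touches down, the "joystick" follows the operator rather than the other way around.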


Microsoft Surface-controlled robots to boldly go where rescuers have gone before (video) originally appeared on Engadget on Thu, 11 Aug 2011 18:02:00 EDT.

Via: Microsoft Research Connections Blog | Source: UML Robotics Lab

Surround Haptics could bring force feedback to vests, coasters and gaming (video)

Haptics and gaming have gone hand in hand for centuries it seems — well before the Rumble Pak made itself an N64 staple, we vividly recall snapping up a vibration jumpsuit for our Sega Genesis. ‘Course, it was on clearance for a reason. Ali Israr et al. were on hand here at SIGGRAPH’s E-tech conference to demonstrate the next big leap in haptics, joining hands with Disney Research in order to showcase a buzzing game chair for use with Split/Second. The seat shown in the gallery (and video) below cost around $5,000 to concoct, with well over a dozen high-end coils tucked neatly into what looked to be a snazzy padding set for an otherwise uneventful seating apparatus.

We sat down with members of the research team here in Vancouver, and while the gaming demo was certainly interesting, it’s really just the tip of the proverbial iceberg. The outgoing engineers from Black Rock Studios helped the team wire stereo audio triggers to the sensors, with a left crash, right scrape and a head-on collision causing the internal coils to react accordingly. Admittedly, the demo worked well, but it didn’t exactly feel comfortable. In other words, we can’t say we’d be first in line to pick one of these up for our living room.
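To get a feel for how audio triggers might fan out across a bank of seat coils, here's a deliberately simple panning sketch; the coil layout, falloff curve and all the numbers are our illustrative assumptions, not Disney Research's implementation:

```python
# Illustrative sketch: pan an audio-triggered event across a row of seat
# coils, so a left-side crash buzzes the left coils hardest and fades
# toward the right, while a head-on hit centers the buzz.

COILS = [-1.0, -0.5, 0.0, 0.5, 1.0]  # coil positions across the seat back

def drive_levels(pan, intensity, spread=1.0):
    """Return a per-coil drive level for an event at `pan` in [-1, 1]."""
    levels = []
    for pos in COILS:
        # Linear falloff with distance from the event's pan position.
        falloff = max(0.0, 1.0 - abs(pos - pan) / spread)
        levels.append(round(intensity * falloff, 3))
    return levels

left_crash = drive_levels(pan=-1.0, intensity=1.0)
head_on = drive_levels(pan=0.0, intensity=0.8)
```

A real rig would drive more than a dozen coils with shaped waveforms rather than static levels, but the spatial-panning idea is the same.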



Surround Haptics could bring force feedback to vests, coasters and gaming (video) originally appeared on Engadget on Thu, 11 Aug 2011 15:25:00 EDT.


Sony’s Face-to-Avatar blimp soars through SIGGRAPH, melts the heart of Big Brother (video)

Telepresence, say hello to your future. Humans, say hello to the next generation of Chancellor Sutler. All jesting aside, there’s no question that Big Brother came to mind when eyeing Sony Computer Science Laboratories’ Face-to-Avatar concept at SIGGRAPH. For all intents and purposes, it’s a motorized blimp with a front-facing camera, microphone, a built-in projector and a WiFi module. It’s capable of hovering above crowds in order to showcase an image of what’s below, or displaying an image of whatever’s being streamed to its wireless apparatus. The folks we spoke to seemed to think that it was still a few years out from being in a marketable state, but we can think of a few governments who’d probably be down to buy in right now. Kidding. Ominous video (and static male figurehead) await you after the break.


Sony’s Face-to-Avatar blimp soars through SIGGRAPH, melts the heart of Big Brother (video) originally appeared on Engadget on Thu, 11 Aug 2011 13:30:00 EDT.


Researchers use children’s toy to exploit security hole in feds’ radios, eavesdrop on conversations

Researchers from the University of Pennsylvania have discovered a potentially major security flaw in the radios used by federal agents, as part of a new study that’s sure to raise some eyebrows within the intelligence community. Computer science professor Matt Blaze and his team uncovered the vulnerability after examining a set of handheld and in-car radios used by law enforcement officials in two undisclosed metropolitan areas. The devices, which operate on a wireless standard known as Project 25 (P25), suffer from a relatively simple design flaw, with indicators and switches that don’t always make it clear whether transmissions are encrypted. And, because these missives are sent in segments, a hacker could jam an entire message by blocking just one of its pieces, without expending too much power. What’s really shocking, however, is that the researchers were able to jam messages and track the location of agents using only a $30 IM Me texting device, designed for kids (pictured above). After listening in on sensitive conversations from officials at the Department of Justice and the Department of Homeland Security, Blaze and his team have called for a “substantial top-to-bottom redesign” of the P25 system and have notified the agencies in question. The FBI has yet to comment on the study, but you can read the whole thing for yourself at the link below.
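The asymmetry the researchers describe is stark enough to fit in a few lines. This toy model (our numbers, not the study's) shows why jamming one segment of a segmented message is so cheap relative to blanket jamming:

```python
# Back-of-the-envelope illustration of why segmented transmission makes
# P25 cheap to jam: the receiver needs every segment, so stomping on a
# single one sinks the whole message.

def receive(num_segments, jammed_index=None):
    """A message decodes only if every segment arrives clean."""
    return all(i != jammed_index for i in range(num_segments))

SEGMENTS = 10                  # assume a message split into 10 equal pieces

full_jam_cost = SEGMENTS       # energy units to blanket the whole message
smart_jam_cost = 1             # energy units to hit just one segment
clean = receive(SEGMENTS)      # all segments arrive: decodes fine
jammed = receive(SEGMENTS, 3)  # one blocked piece: whole message lost
```

Under these assumptions the selective jammer spends a tenth of the energy of a blanket jammer for the same denial of service, which is how a $30 toy gets into the game.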

Researchers use children’s toy to exploit security hole in feds’ radios, eavesdrop on conversations originally appeared on Engadget on Thu, 11 Aug 2011 11:40:00 EDT.

Source: The Wall Street Journal

MoleBot interactive gaming table hooks up with Kinect, puts Milton Bradley on watch (video)

Looking to spruce up that nondescript living room table? So are a smattering of folks from the Korea Advanced Institute of Science and Technology. At this week’s SIGGRAPH E-tech event, a team from the institute dropped by to showcase the deadly cute MoleBot table. At its simplest, it’s a clever tabletop game designed to entertain folks aged 3 to 103; at the other extreme, it’s a radically new way of using Microsoft’s Kinect to interact with something that could double as a place to set your supper. Improving on similar projects in the past, this shape-display method uses a two-dimensional translating cam (mole cam), 15,000 closely packed hexagonal pins that act as cam followers, and a layer of spandex between the mole cam and the pins to reduce friction.

When we dropped by, the Kinect mode was disabled in favor of using an actual joystick to move the ground below. In theory, one could hover above the table and use hand gestures to move the “mole,” shifting to and fro in order to pick up magnetic balls and eventually affix the “tail” onto the kitty. The folks we spoke with seemed to think that there’s consumer promise here, as well as potential for daycares, arcades and other locales where entertaining young ones is a priority. Have a peek at a brief demonstration vid just after the break, and yes, you can bet we’ll keep you abreast of the whole “on sale” situation.
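As a rough illustration of what driving the mole from a tracked target might look like, here's a hedged control-loop sketch (not KAIST's code) in which the cam chases a gesture target a limited distance per tick; MAX_STEP and the coordinates are invented for the example:

```python
# Hypothetical sketch: whether the target position comes from the joystick
# or from a Kinect-tracked hand, the mole cam can only slide so far per
# control tick, so it chases the target smoothly under the bed of pins.

MAX_STEP = 5.0  # mm of cam travel per control tick (assumed value)

def step_toward(cam, target):
    """Move each cam axis toward the target, clamped to MAX_STEP."""
    def axis(c, t):
        delta = max(-MAX_STEP, min(MAX_STEP, t - c))
        return c + delta
    return (axis(cam[0], target[0]), axis(cam[1], target[1]))

cam = (0.0, 0.0)
for _ in range(4):  # a few ticks chasing a gesture target
    cam = step_toward(cam, (12.0, -3.0))
```

The clamp is what would make the "mole" glide under the pins rather than teleport, regardless of how jittery the hand tracking is.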


MoleBot interactive gaming table hooks up with Kinect, puts Milton Bradley on watch (video) originally appeared on Engadget on Thu, 11 Aug 2011 08:55:00 EDT.


InteractiveTop brings tabletop gaming to SIGGRAPH, doubles as Inception token (video)

MoleBot a little too passive for you? Fret not, as a team from The University of Electro-Communications popped by this year’s installment of SIGGRAPH in order to showcase something entirely more vicious. It’s air hockey meets bumper cars, and the InteractiveTop demo was certainly one of the stranger ones we came across here in Vancouver. Put simply, it’s a virtual game of spinning tops, where players use magnet-loaded controllers to shuffle tops across a board and into an opponent’s top. There’s an aural and haptic feedback mechanism to let you know when you’ve struck, and plenty of sensors loaded throughout to keep track of collisions, force and who’s hitting whom. Pore over the links below for more technobabble, or just head past the break for an in-action video.


InteractiveTop brings tabletop gaming to SIGGRAPH, doubles as Inception token (video) originally appeared on Engadget on Thu, 11 Aug 2011 08:05:00 EDT.

Source: Project InteractiveTop

Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted

Lookin’ to get your Grown Nerd on? Look no further. We just sat through an hour and a half of high-brow technobabble here at SIGGRAPH 2011, where a gaggle of gurus with IQs far, far higher than ours explained in detail what the future of 3D face scanning would hold. Scientists from ETH Zürich, Texas A&M, Technion-Israel Institute of Technology, Carnegie Mellon University as well as a variety of folks from Microsoft Research and Disney Research labs were on hand, with each subset revealing a slightly different approach to solving an all-too-similar problem: painfully accurate 3D face tracking. Haoda Huang et al. revealed a highly technical new method that combined marker-based motion capture with 3D scanning in an effort to overcome drift, while Thabo Beeler et al. took a drastically different approach.

Those folks relied on a markerless system that used a well-lit, multi-camera setup to overcome occlusion, with anchor frames serving as fixed reference points that keep the capture from drifting over long sequences. J. Rafael Tena et al. developed “a method that not only translates the motions of actors into a three-dimensional face model, but also subdivides it into facial regions that enable animators to intuitively create the poses they need.” Naturally, this one’s most useful for animators and designers, but the first system detailed is obviously gunning to work on lower-cost devices — Microsoft’s Kinect was specifically mentioned, and it doesn’t take a seasoned imagination to see how in-home facial scanning could lead to far more interactive games and augmented reality sessions. The full shebang can be grokked by diving into the links below, but we’d advise you to set aside a few hours (and rest up beforehand).


Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted originally appeared on Engadget on Wed, 10 Aug 2011 21:46:00 EDT.

Via: Physorg | Source: Carnegie Mellon University, Microsoft Research

PocoPoco musical interface box makes solenoids fun, gives Tenori-On pause (video)

Think SIGGRAPH‘s all about far-out design concepts? Think again. A crew from the Tokyo Metropolitan University IDEEA Lab was on hand here at the show’s experimental wing showcasing a new “musical interface,” one that’s highly tactile and darn near impossible to walk away from. Upon first glance, it reminded us most of Yamaha’s Tenori-On, but the “universal input / output box” is actually far deeper and somewhat more interactive in use. A grand total of 16 solenoids are loaded in, and every one of ’em is fitted with sensors.

Users can tap any button to create a downbeat (behind the scenes, a sequencer flips to “on”), which will rise in unison with the music until you tap it once more to settle it (and in turn, eliminate said beat). You can grab hold of a peg in order to sustain a given note until you let it loose. There are a few pitch / tone buttons that serve an extra purpose — one that we’re sure you can guess by their names. Those are capable of spinning left and right, with pitch shifting and speeds increasing / decreasing with your movements. The learning curve here is practically nonexistent, and while folks at the booth had no hard information regarding an on-sale date, they confirmed to us that hawking it is most certainly on the roadmap… somewhere. Head on past the break for your daily (video) dose of cacophony.
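The tap-to-toggle behavior described above amounts to a step sequencer with physical pegs. Here's a guess at the interaction model in a few lines; the class and the dot-notation pattern are ours, not TMU's firmware:

```python
# A sketch of the interaction model: each of the 16 solenoid pegs doubles
# as a sequencer step -- tap it and the peg starts rising with the music,
# tap it again and the beat (and the peg) settles back down.

class PegSequencer:
    def __init__(self, steps=16):
        self.active = [False] * steps

    def tap(self, i):
        """A tap flips the step: a silent peg starts bouncing, or vice versa."""
        self.active[i] = not self.active[i]

    def pattern(self):
        """Render the 16 steps as a string, 'x' for a live beat."""
        return "".join("x" if on else "." for on in self.active)

seq = PegSequencer()
for i in (0, 4, 8, 12):  # tap four pegs for a basic four-on-the-floor
    seq.tap(i)
seq.tap(4)               # a second tap settles that peg again
```

The real box layers pitch / tone knobs and sustain-by-grabbing on top of this, but the zero-learning-curve core is just a toggle per peg.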


PocoPoco musical interface box makes solenoids fun, gives Tenori-On pause (video) originally appeared on Engadget on Wed, 10 Aug 2011 18:50:00 EDT.


Microsoft’s KinectFusion research project offers real-time 3D reconstruction, wild AR possibilities

It’s a little shocking to think about the impact that Microsoft’s Kinect camera has had on the gaming industry at large, let alone the 3D modeling industry. Here at SIGGRAPH 2011, we attended a KinectFusion research talk hosted by Microsoft, where a fascinating new look at real-time 3D reconstruction was detailed. To better appreciate what’s happening here, we’d actually encourage you to hop back and have a gander at our hands-on with PrimeSense’s raw motion sensing hardware from GDC 2010 — for those who’ve forgotten, that very hardware was finally outed as the guts behind what consumers simply know as “Kinect.” The breakthrough wasn’t in how it allowed gamers to control common software titles sans a joystick — the breakthrough was the price. The Kinect took 3D sensing to the mainstream, and moreover, allowed researchers to pick up a commodity product and go absolutely nuts. Turns out, that’s precisely what a smattering of highly intelligent blokes in the UK have done, and they’ve built a new method for reconstructing 3D scenes (read: real-life) in real-time by using a simple Xbox 360 peripheral.

The actual technobabble ran deep — not shocking given the academic nature of the conference — but the demos shown were nothing short of jaw-dropping. There’s no question that this methodology could be used to spark the next generation of gaming interaction and augmented reality, taking a user’s surroundings and making them a live part of the experience. Moreover, game design could be significantly impacted, with live scenes able to be acted out and stored in real-time rather than having to build something frame by frame within an application. According to the presenter, the tech that’s been created here can “extract surface geometry in real-time,” right down to the millimeter level. Of course, the Kinect’s camera and abilities are relatively limited when it comes to resolution; you won’t be building 1080p scenes with a $150 camera, but as CPUs and GPUs become more powerful, there’s nothing stopping this from scaling with the future. Have a peek at the links below if you’re interested in diving deeper — don’t be shocked if you can’t find the exit, though.
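For the curious, the published KinectFusion work fuses each depth frame into a voxel grid of truncated signed distances (TSDF), with surfaces falling where the averaged distance crosses zero. The 1D toy below (a single ray, noisy depth readings, made-up numbers) sketches why fusing many frames beats any single noisy reading:

```python
# Minimal 1D illustration of TSDF fusion along one camera ray: each voxel
# stores a running weighted average of its signed distance to the surface,
# and the surface is recovered where that average crosses zero.

TRUNC = 0.3  # truncation band in meters (assumed value)

def integrate(tsdf, weights, voxel_size, depth):
    """Fold one depth measurement along the ray into the running average."""
    for v in range(len(tsdf)):
        dist = depth - v * voxel_size      # signed distance to measured surface
        if dist < -TRUNC:
            continue                       # voxel is far behind the surface
        d = min(dist, TRUNC)               # truncate far in front of it
        tsdf[v] = (tsdf[v] * weights[v] + d) / (weights[v] + 1)
        weights[v] += 1

def zero_crossing(tsdf, voxel_size):
    """Interpolate where the fused distance flips sign: the surface."""
    for v in range(len(tsdf) - 1):
        if tsdf[v] > 0 >= tsdf[v + 1]:
            t = tsdf[v] / (tsdf[v] - tsdf[v + 1])
            return (v + t) * voxel_size
    return None

tsdf, weights = [0.0] * 40, [0] * 40
for depth in (2.03, 1.97, 2.01, 1.99):     # noisy readings of a 2 m wall
    integrate(tsdf, weights, 0.1, depth)
surface = zero_crossing(tsdf, 0.1)         # lands back at ~2.0 m
```

The real system does this over a dense 3D volume on the GPU while tracking the camera, which is where the real-time, millimeter-level claims come from.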

Microsoft’s KinectFusion research project offers real-time 3D reconstruction, wild AR possibilities originally appeared on Engadget on Tue, 09 Aug 2011 14:48:00 EDT.

Via: Developer Fusion | Source: Microsoft Research [PDF]