Seawater Antenna Squirts Radio Signals

Naval research outfit Spawar has figured out how to use a jet of seawater as an antenna. By squirting a measured length of saltwater through a “current-probe”, various antennae can be formed that will transmit and receive radio waves in the UHF, VHF and HF bands.

Why bother? Ships have limited space for mounting antennae, and these have to be carefully positioned so they don’t interfere with one another. A seawater antenna can be set up anywhere on deck with minimum effort and gear, and uses something that is in ample supply aboard a ship – gallons and gallons of the briny deep.

To turn a squirt of saltwater into an antenna, a jet is fired through a current-probe, essentially a metal donut that uses magnetic induction to couple the radio signal into the water. This is why it needs to be seawater – the dissolved salt makes the water conductive enough to carry the induced current, something fresh water can’t manage.

The frequency of the antenna is determined by the height of the water column, and several current-probes can be stacked up, each with a different-length stream of water, to broadcast simultaneously on different bands. The big advantage is portability and quick setup, as you don’t need any long metal poles.
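
To make the height-frequency relationship concrete, here’s a rough back-of-the-envelope sketch in Python. It assumes the water column behaves like a simple quarter-wave monopole and ignores any velocity-factor effects of the water itself (Spawar hasn’t published the exact relationship), so treat the numbers as ballpark only.

```python
# Ballpark stream height, assuming the water column acts as a quarter-wave
# monopole: height = wavelength / 4 = c / (4 * frequency).
C = 299_792_458  # speed of light, metres per second

def column_height_m(frequency_hz: float) -> float:
    """Approximate water-column height needed for a given frequency."""
    return C / (4 * frequency_hz)

for label, freq in [("HF, 10 MHz", 10e6), ("VHF, 100 MHz", 100e6), ("UHF, 400 MHz", 400e6)]:
    print(f"{label}: ~{column_height_m(freq):.2f} m of water")
# HF, 10 MHz: ~7.49 m; VHF, 100 MHz: ~0.75 m; UHF, 400 MHz: ~0.19 m
```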

Will we see our backyard fountains turned into antennae? Probably not, but it is possible. And if you mix your own saltwater you can fire the stream up an enclosed plastic tube, recycling the water so you can use one on dry land, although if you have to carry the tube, you may as well just carry a regular antenna. And who knows? Given all the problems of Antennagate, maybe the iPhone 5 will come with its own super-soaker-style aerial.

Sea Water Antenna System [Spawar via George Lazenby. Yes, that George Lazenby]

Tonight’s Release, Xbox Kinect: How Does It Work?

The prototype for Microsoft’s Kinect camera and microphone famously cost $30,000. At midnight tonight, the company is releasing it as a motion-capture Xbox 360 peripheral for $150.

Microsoft is projecting that it will sell five million units between now and Christmas. It’s worth taking some time to think about what’s happening here.

I’ve used Kinect to play video games without a controller, watch digital movies without a remote, and do audio-video chat from across the room. I’ve spent even more time researching the technology behind it and explaining how it works.

Kinect’s camera is powered by both hardware and software. And it does two things: generate a three-dimensional (moving) image of the objects in its field of view, and recognize (moving) human beings among those objects.

Older software programs used differences in color and texture to distinguish objects from their backgrounds. PrimeSense, the company whose tech powers Kinect, and recent Microsoft acquisition Canesta use a different model. The camera transmits invisible near-infrared light and measures its time of flight after it reflects off the objects.

Time-of-flight works like sonar: if you know how long the light takes to return, you know how far away an object is. Cast a big field, with lots of pings going back and forth at the speed of light, and you can know how far away a lot of objects are.
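
The arithmetic behind that is just distance from round-trip time. A minimal sketch (illustrative numbers of my own, not anything from the Kinect firmware):

```python
# One-way distance from a round-trip time-of-flight measurement:
# the pulse travels to the object and back, so halve the round trip.
C = 299_792_458  # speed of light, metres per second

def distance_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2

# A reflection arriving ~13.3 nanoseconds after the pulse was sent
# puts the object roughly two metres from the sensor.
print(distance_m(13.3e-9))  # ~1.99
```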

Using an infrared generator also partially solves the problem of ambient light, which can throw off recognition like a random finger on a touchscreen: the sensor really isn’t designed to register visible light, so it doesn’t get quite as many false positives.

PrimeSense and Kinect go one step further and encode information in the near-IR light. As that information is returned, some of it is deformed — which in turn can help generate a finer image of those objects’ three-dimensional texture, not just their depth.

With this tech, Kinect can distinguish objects’ depth within 1 cm and their height and width within 3 mm.

Figure: PrimeSense diagram explaining the PrimeSensor reference design.

At this point, both the Kinect’s hardware (its camera and IR light projector) and the receiver’s firmware (sometimes called “middleware”) are at work. An onboard processor runs algorithms over the incoming data to render the three-dimensional image.

The middleware also can recognize people: both distinguishing human body parts, joints, and movements and distinguishing individual human faces from one another. When you step in front of it, the camera knows who you are.

Please note: I’m keenly aware here of the standard caution against anthropomorphizing inanimate objects. But at a certain point, we have to accept that if the meaning of “to know” is its use, in the sense of familiarity, connaissance, whatever you want to call it, functionally, this camera knows who you are. It’s got your image — a kind of biometric — and can map it to a persona with very limited encounters, as naturally and nearly as accurately as a street cop looking at your mug shot and fingerprints.

Does it “know” you in the sense of embodied neurons firing, or the way your mother knows your personality or your priest your soul? Of course not. It’s a video game.

But it’s a pretty remarkable video game. You can’t quite get the fine detail of a table tennis slice, but the first iteration of the WiiMote couldn’t get that either. And all the jury-rigged foot pads and Nunchuks strapped to thighs can’t capture whole-body running or dancing like Kinect can.

That’s where the Xbox’s processor comes in: translating the movements captured by the Kinect camera into meaningful on-screen events. These are context-specific. If a river rafting game requires jumping and leaning, it’s going to look for jumping and leaning. If navigating a Netflix Watch Instantly menu requires horizontal and vertical hand-waving, that’s what will register on the screen.
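
As a purely illustrative sketch of that idea (my own toy example, not Microsoft’s or PrimeSense’s code, with made-up joint names and thresholds), a rafting game would only ever ask the tracked skeleton two questions:

```python
# Hypothetical per-frame skeleton: joint name -> (x, y, z) position in
# metres, with y pointing up and x pointing to the player's right.
Skeleton = dict[str, tuple[float, float, float]]

def is_jumping(skel: Skeleton, floor_y: float = 0.0) -> bool:
    # Crude check: both feet clearly off the floor plane.
    return (skel["left_foot"][1] > floor_y + 0.15 and
            skel["right_foot"][1] > floor_y + 0.15)

def is_leaning_left(skel: Skeleton) -> bool:
    # Crude check: head displaced well to the left of the hips.
    return skel["head"][0] < skel["hip_center"][0] - 0.2

def rafting_game_update(skel: Skeleton) -> str:
    """The rafting game only cares about jumping and leaning."""
    if is_jumping(skel):
        return "jump"
    if is_leaning_left(skel):
        return "lean left"
    return "paddle straight"
```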

It has an easier time recognizing some gestures and postures than others. As Kotaku noted this summer, recognizing human movement — at least, any movement more subtle than a hand-wave — is easier to do when someone is standing up (with all of their joints articulated) than sitting down.

So you can move your arms to navigate menus, watch TV and movies, or browse the internet. You can’t sit on the couch wiggling your thumbs and pretending you’re playing Street Fighter II. It’s not a magic trick cooked up by MI-6. It’s a camera that costs $150.


BERG/Dentsu’s Incidental Media Sees Screens’ and Paper’s Playful Future

Cheap print, networked screens and location-aware hardware could create a world where dynamic text is everywhere — as ubiquitous and natural as our current media ecosystem of street signs, alarm clocks, news tickers and train tickets.

Design futurists BERG and ad agency Dentsu London, the team behind iPad Light Painting, have released two new videos for their “Making Future Magic” campaign. This two-part series on media surfaces includes “Incidental Media” and “The Journey.”

“In contrast to a Minority Report future of aggressive messages competing for a conspicuously finite attention,” writes BERG’s Jack Schulze, “these sketches show a landscape of ignorable surfaces capitalising on their context, timing and your history to quietly play and present in the corners of our lives.”

“All surfaces have access to connectivity,” Schulze adds. “All surfaces are displays responsive to people, context, and timing. If any surface could show anything, would the loudest or the most polite win? Surfaces which show the smartest, most relevant material in any given context will be the most warmly received.”

I’m particularly taken with the use of paper ephemera in both concept videos. The shift to networked digital communication is usually identified with a shift away from paper and to the screen, when it’s actually anything but. If the identity-specific, instant-update expectations of what Schulze calls “app culture” were translated to print ephemera like coffee-shop receipts and train tickets — and I think that translation is inevitable — we start to see a new phase of print: really, a new kind of publishing.

I could spend paragraphs annotating each of the ideas and all of the tech here — none of it new, just reconfigured — but you’d be better off reading the BERG blog posts above instead.

As an American who regularly travels the postwar-era east coast regional rail system, for whom a Virgin Rail trip from London to Birmingham is already a kind of unimaginable, delightful future, this video leaves me with wonder. And not just wonder: patient reassurance that the future is already on the way.

Gallery: Let Your Children Play With Robots

Robots can help children become smarter and happier. Javier Movellan, who has spent the better part of the last three decades playing with kids and robots, is sure of it.

Movellan, an associate professor affiliated with UC San Diego’s Machine Perception Laboratory, is a psychologist and a robotics researcher. He studies children’s interactions with robots for two reasons: to better understand childhood development and to build better robots. He has found that emotion and interactivity are more important to kids than humanoid appearance or abstract intelligence. Movellan answered Wired.com’s questions about his work by e-mail.

Image: Javier Movellan with Qrio Robot at UCSD’s Early Childhood Education Center. Credit: UCSD Machine Perception Lab.


Robot Duo Make Pancakes From Scratch

Household robots could become a reality sooner than we think. But the first hurdle they’ll have to clear is proving they can make great pancakes. After all, breakfast is the most important meal of the day.

A demo posted by Willow Garage, a Palo Alto, California, robotics company, shows two robots working together to make pancakes from a mix. The robots — James and Rosie — even flipped the pancakes correctly.

As you can see in the video (the fun stuff begins at the 1:26 mark) the James robot opened and closed cupboards and drawers, removed the pancake mix from the refrigerator, and handed it to Rosie.

Rosie the robot cooks and flips the pancakes and gives them back to James. Watch for that moment of suspense when Rosie is about to flip the pancake (at the 8:35 mark) and the spontaneous applause from the onlookers when the robot gets it right.

“Behind this domestic tableau is a demonstration of the capabilities of service bots,” says Willow Garage on its blog. “This includes characteristics such as learning, probabilistic inference and action planning.”

For a robot, learning how to flip a pancake is quite a task. Earlier this year, two researchers at the Italian Institute of Technology taught a robot how to do it. The robot had to hold its hand stiffly to throw the pancake in the air and then flex the hand just enough so it could catch the pancake without having it bounce off the pan. It took that robot about 50 tries to get it right.

The latest experiment brought together two different robots: James, a $400,000 robot from Willow Garage, and Rosie, a robot from the Technical University of Munich. The two robots are among the most sophisticated and advanced humanoid robots today.

James has two stereo camera pairs in its head. The four 5-megapixel cameras are supplemented with a tilting laser range finder. Each of the robot’s forearms has an Ethernet wide-angle camera, while the grippers at the tip have three-axis accelerometers and pressure-sensor arrays on the fingertips. At the base of the robot is another laser range finder.

James, Willow Garage’s PR2, is powered by two eight-core i7 Xeon servers on board, 48 GB of memory and a battery system equivalent to 16 laptop batteries, or about two hours of running time.

Rosie has two laser scanners for mapping and navigation, one laser scanner for 3-D laser scans and four cameras, including two 2-megapixel cameras, one stereo-on-chip camera and a Swiss-Ranger SR4000 time-of-flight camera.

The advanced capabilities of the robots came in handy for the task at hand. In the demo, one of the robots even used the web to solve a cooking problem: it looked up a picture of the pancake mix that came from the fridge and then found the cooking instructions online.

James and Rosie aren’t yet ready “for haute cuisine,” say Willow Garage researchers. Nor are they likely to be in your kitchen anytime soon, unless you are ready to pay a couple hundred thousand dollars for a pancake that may not be half as good as the $5 IHOP stack.

But the experiment gives us a pretty good sense of the possibilities. And the robots are cute, besides.

Future Shock: Nokia Research Touts 5 Innovative Mobile Interfaces

A peek into Nokia’s research labs reveals some intriguing possibilities on how we will interact with our devices in the future.

Embedded chips could help phones “smell,” electronically stretchable skins could change the shape of devices and make them fit like gloves on your hand, and gestures could mean the end of hunting and pecking on mobile displays.

Some future touchscreen displays might even give you tactile feedback — using tiny electrical shocks.

So while Nokia may be a bit behind the curve in developing touchscreen interfaces, its R&D department is not standing still.

Check out the five big ideas that are currently under development at Nokia Research Center.

Photo: Andrea Vascellari/Flickr


Cyclists’ Airbag Helmet Bursts Forth from Stylish Collar

The Hövding is an airbag for your head. Mounted in a bulky collar, which can be disguised as a stylish scarf, the bag explodes open when you crash and surrounds your delicate melon with an inflated hood. I know there are some drivers out there who hate cyclists, so here’s a video of the Hövding in action, with a sneaky car-driver mowing down an innocent biker.

Hövding means “chieftain” in Swedish, and the air-helmet was designed by Swedes Anna Haupt and Terese Alstin as a university thesis project. The collar contains the bag itself, helium to inflate the airbag and sensors that tell the Hövding when to fire. The sensor unit consists of gyroscopes and accelerometers that constantly monitor movement and deploy the bag when you’re in danger. The Chieftain is charged by USB (firmware can also be updated via the same port) and you switch it on by zipping the collar shut around your neck.

With a car airbag, the time to fire is obvious – when impact is detected. But as you see in the video, there are many ways a cyclist can fall that look similar to normal, safe activities in other contexts: going over the bars and falling forward is a lot like bending down to lock a wheel, for instance. To eliminate false positives, Haupt and Alstin carried out extensive testing with both dummies and – amazingly – stunt men and women.
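
For a flavour of the problem, here’s a deliberately over-simplified trigger sketch. The thresholds, sample format and logic are all made up for illustration; the real Hövding algorithm is tuned against Haupt and Alstin’s recorded crash and non-crash data and is far more discriminating.

```python
import math

# Toy crash detector: deploy only when acceleration and rotation rate are
# both abnormal for several samples in a row, so a single jolt (a kerb,
# a pothole) doesn't fire the airbag.
ACCEL_THRESHOLD_G = 6.0      # made-up threshold, not Hövding's
GYRO_THRESHOLD_DPS = 400.0   # made-up threshold, not Hövding's
REQUIRED_IN_A_ROW = 5

def should_deploy(samples) -> bool:
    """samples: iterable of (ax, ay, az) in g plus rotation rate in deg/s."""
    streak = 0
    for ax, ay, az, spin in samples:
        accel = math.sqrt(ax * ax + ay * ay + az * az)
        if accel > ACCEL_THRESHOLD_G and abs(spin) > GYRO_THRESHOLD_DPS:
            streak += 1
            if streak >= REQUIRED_IN_A_ROW:
                return True
        else:
            streak = 0
    return False
```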

So why wear this instead of a helmet? Style is the first thing that comes to mind. You can change the covers of the collar to match your outfit, and you won’t muss your hair while you ride. I’d probably feel less safe in this active scarf than I would in a passive, always-on helmet, but the Hövding seems to be reliable, and I don’t wear a helmet anyway.

There’s one more thing that this protector will do: adapt. When you hook up the hood to a USB port, you can choose to upload your “chiefs”. The unit contains a “black box” which keeps the last ten seconds of sensor info in a buffer and saves it on impact. This information is then aggregated to improve the performance of the software.
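
That “last ten seconds” behaviour is a classic ring buffer. Here is a minimal sketch of how such a black box might work (my own illustration with an assumed sample rate, not Hövding’s firmware):

```python
from collections import deque

SAMPLE_RATE_HZ = 100   # assumed for illustration
BUFFER_SECONDS = 10

class BlackBox:
    """Keeps only the most recent ten seconds of sensor samples."""

    def __init__(self):
        self.samples = deque(maxlen=SAMPLE_RATE_HZ * BUFFER_SECONDS)

    def record(self, sample):
        # Once the buffer is full, the oldest sample drops off automatically.
        self.samples.append(sample)

    def dump_on_impact(self):
        # On deployment, freeze the buffer contents for later upload.
        return list(self.samples)
```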

Bonus: fall off a bridge on the way home and you won’t drown, however drunk you are.

How the Hövding works [Hövding via David Report]

In Rural China, Students Use Phones to Learn to Read

In many parts of the developing world, mobile phones have leapfrogged literacy, reaching places books and newspapers are rarely seen. In rural China, researchers with the Mobile & Immersive Learning for Literacy in Emerging Economies (MILLEE) Project are using those phones to teach children how to read.

Scholars from Carnegie Mellon, UC Berkeley and the Chinese Academy of Sciences worked with children in Xin’an, an underdeveloped region in Henan Province, China, using two mobile learning games inspired by traditional Chinese children’s games. MILLEE later repeated these studies with young children at a privately run school in urban Beijing. Both runs suggest that phone-based games could be a useful tool in teaching literacy.

According to Carnegie Mellon’s Matthew Kam, despite their comparatively small screens and low computing power, mobile phones could become a major educational resource as wireless carriers and mobile phone manufacturers move aggressively to extend mobile phone penetration across the globe. And if the educational benefits of mobile phones can be demonstrated convincingly, he added, consumers will have an additional motivation for getting mobile phone service, which could further spur mobile phone adoption in developing countries.

First, MILLEE researchers had to create games that would be meaningful and useful for children with little to no experience with either writing or computers. They analyzed 25 traditional Chinese children’s games to identify elements, such as cooperation between players, songs and handmade game objects, for use in the games.

They eventually developed two games: Multimedia Word and Drumming Stroke. In Multimedia Word, the app provides hints to help children recognize characters: a hint might be a pronunciation, a sketch, a photo or another multimedia object. In Drumming Stroke, children pass the mobile phone to one another to the rhythm of a phone-generated drum sound. Each player writes one stroke of a given Chinese character, following the exact stroke order.
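
The turn-taking rule in Drumming Stroke boils down to checking each player’s stroke against the character’s canonical stroke order. A toy sketch (my own illustration with placeholder stroke data, not the MILLEE code):

```python
# Placeholder stroke-order data for one character, as an ordered list.
CANONICAL_STROKES = ["horizontal", "vertical", "dot"]

def accept_stroke(turn_index: int, drawn_stroke: str) -> bool:
    """A player's turn is valid only if they draw the next expected stroke."""
    if turn_index >= len(CANONICAL_STROKES):
        return False  # the character is already complete
    return drawn_stroke == CANONICAL_STROKES[turn_index]

print(accept_stroke(0, "horizontal"))  # True: first stroke matches
print(accept_stroke(1, "dot"))         # False: "vertical" is expected next
```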

Nokia has sponsored a MILLEE project teaching English literacy to rural children in India using mobile phone-based games, beginning with 800 children in 40 villages in southern India’s Andhra Pradesh. MILLEE is also working with the University of Nairobi to explore how the games could be adapted to English literacy learning for rural children in Kenya.

Culturally inspired mobile phone games help Chinese children learn language characters [Carnegie Mellon via EurekAlert]
The MILLEE Classroom [Millee.org] Image via MILLEE.
