Robot to Walk 300 Miles From Tokyo to Kyoto

Marathons are not just for humans. A robot has decided to take on the challenge of walking 300 miles from Tokyo to Kyoto. The trek is part of a publicity campaign organized for Panasonic’s Evolta batteries.

The very cute, 7-inch-tall humanoid robot taking on this project will pull a two-wheeled cart behind it. It’s a tiny machine, but its handlers hope it will make it to the finish line.

The robot originally comes from the stable of Japanese company Robo Garage. It has been constructed using lightweight plastic, carbon fiber and titanium and weighs about 2.2 lbs. The entire machine will be powered using 12 AA batteries and operated using remote control, according to the Pink Tentacle site.

The robot will travel from sunrise to sunset, say the organizers, who will be tweeting its progress (@evoltatoukaidou) and livestreaming the event.

The robot can travel at a rate of about two to three miles an hour. So without any breakdowns or problems, the robot is expected to complete the journey in about 49 days.

It’s not the first time that this robot has undertaken adventure sports. In May 2008, it climbed a 1,740-foot rope suspended from a cliff at the Grand Canyon, and a year later it drove for a day around the Le Mans race circuit. All of this has already earned the robot a place in the Guinness World Records book.

For its current adventure, the robot has a wheel circling it so it can move over uneven surfaces. The handcart behind the robot is expected to hold extra batteries. The batteries will have to be recharged at least once every day.

Head over to the Pink Tentacle site to see photos of the very cute Evolta Panasonic robot as it gets ready to head out on its latest project.

Check out some photos of the Evolta from its earlier adventures:

The Evolta covered 14.8 miles at the Le Mans race circuit. Photo courtesy Panasonic

Evolta robot's creator looks on proudly. Photo courtesy Panasonic

The Evolta robot has already set two records. Photo courtesy Panasonic



Rebuilding Bones Stronger and Faster with Titanium Foam

The new titanium foam better imitates the structure of natural bone. Image by Fraunhofer IFAM.

I have a half-dozen titanium plates in my right forearm. They connect a bone graft taken from my left leg to the upper part of my radius and to my wrist. This system isn’t perfect, but it does the job.

When my arm snapped, the lower half of my radius shot out of my body; it couldn’t be found, let alone repaired. A full titanium rod would have been stiff, wouldn’t have bonded with the existing bone, and would have been harder to arrange muscles and tendons and nerves and blood vessels around as my arm was rebuilt. Solid metal just isn’t light, porous, or malleable like bone. Using an existing bone from my own body, with its own blood supply, was the surer path to giving my arm some functionality again. So orthopedic surgeons removed my fibula — the thin, “chicken-leg” bone next to the shin that isn’t necessary for walking or even running in humans — and carved it up to make a replacement. Titanium keeps everything together, but it’s doing hardly any of the structural work.

In many cases, though, this isn’t an option: bone grafts from either the fibula or any other site are the wrong size, shape, or density to be used to strengthen or replace a fractured or missing bone. That’s why surgeons still use titanium rods. Solid metal isn’t as good as bone, but at least it’s as strong as bone.

But what if the titanium were actually structured like bone? Instead of a rod, a foam — strong yet flexible, solid yet porous, composed of a metal alloy but otherwise as similar to bone as possible?

Fraunhofer, a German industrial and medical research firm, has actually created such a substance with their TiFoam project. The titanium foam has a complex internal structure that allows blood vessels and existing bone cells to grow into the foam, integrating them into its own matrix (and vice versa). This makes the foam particularly useful to repair damaged bones that are still partially intact, like the radius in my arm.

For constructing bone replacements or prosthetics, the titanium foam serves a slightly different function: it can be made more or less dense as the weight-bearing requirements of the substitute bone demand — meaning, for instance, that a fingertip bone doesn’t need to be as heavy per cubic inch as a femur.

Finally, titanium foam allows for stress to be placed on the repaired bone immediately. In fact, it requires it: only load-bearing stress can trigger the proper density formation of the graft and integration of the existing bone with the foam, fostering faster and more substantive healing.

On this project, Fraunhofer worked with researchers at the Technical University of Dresden and with the medical manufacturer InnoTERE, which has already announced that it will begin developing and producing TiFoam-based bone implants.



Computer Servers Could Help Detect Earthquakes

Computer servers in data centers could do more than respond to requests from millions of internet users.

IBM researchers have patented a technique using vibration sensors inside server hard drives to analyze information about earthquakes and predict tsunamis.

“Almost all hard drives have an accelerometer built into them, and all of that data is network-accessible,” says Bob Friedlander, master inventor at IBM. “If we can reach in, grab the data, clean it, network it and analyze it, we can provide very fine-grained pictures of what’s happening in an earthquake.”

The aim is to accurately predict the location and timing of catastrophic events and improve natural-disaster warning systems. The seismographs in wide use today do not provide fine-grained data about where emergency response is needed, say the researchers.

IBM’s research is not the first time scientists have tried to use the sensors in computers to detect earthquakes. Seismologists at the University of California at Riverside and Stanford University created the Quake Catcher Network in 2008. The idea was to use the accelerometers in laptops to detect movement.

But wading through mounds of data from laptops to accurately point to information that might indicate seismic activity is not easy. For instance, how do you tell if the vibrations in a laptop accelerometer are the result of seismic activity and not a big-rig truck rolling by?

That’s why IBM researchers Friedlander and James Kraemer decided to focus on using rack-mounted servers.

“When you are looking at data from a rack that’s bolted to the floor, it’s not the same as what you get from a laptop,” says Kraemer. “Laptops produce too much data and it’s liable to have a lot of noise.”

Servers in data centers can help researchers get detailed information because they know the machine’s orientation, its environmental conditions are much better controlled, and the noise generated by the device tends to be predictable.

“The servers in data centers are the best place you can have these machines for our software,” says Friedlander. “We know their location, they are on 24/7. You know what floor of the building they are on, what their orientation is. In case of an earthquake, you can calculate the shape of the motion, so it tells you about the force the structure is going to be subjected to.”

To generate reliable data, the servers have to be spread across an area. And the number of computers participating can be anywhere from 100 to a few thousand.

The servers would have to run a small piece of software that the researchers say is “incredibly light.”

The hard-drive sensor data collected from a grid of servers is transmitted via high-speed networking to a data-processing center, which can help classify the events in real time.

With the data, researchers say they can tell exactly when an earthquake started, as well as how long it lasted, its intensity, frequency of motion and direction of motion.
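The article doesn’t spell out IBM’s detection algorithm, but a standard way to flag a seismic onset in a stream of vibration samples is a short-term/long-term average (STA/LTA) trigger: a burst of shaking makes the recent average energy jump well above the long-running background. Here is a minimal sketch of that idea, with invented sample values; it is not IBM’s patented method.

```python
# Sketch of an STA/LTA (short-term / long-term average) trigger, a
# classic way to flag a seismic onset in accelerometer samples.
# Illustrative only -- not IBM's patented technique.

def sta_lta_trigger(samples, sta_len=5, lta_len=50, threshold=4.0):
    """Return indices where short-term average energy jumps well
    above the long-term background average."""
    triggers = []
    energy = [s * s for s in samples]  # energy, so sign doesn't matter
    for i in range(lta_len, len(energy)):
        sta = sum(energy[i - sta_len:i]) / sta_len
        lta = sum(energy[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Quiet background noise, then a sudden strong shake at sample 60.
quiet = [0.01 * (-1) ** i for i in range(60)]
shake = [1.0, -1.2, 1.1, -0.9, 1.3, -1.1, 0.8, -1.0]
print(sta_lta_trigger(quiet + shake))  # first trigger just after onset
```

Using energy rather than raw amplitude makes the trigger indifferent to the direction of motion; a real deployment would tune the window lengths and threshold to the drive’s sampling rate and the data center’s background vibration.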

IBM researchers hope companies with big data centers will participate in the project. “It would give them an advantage,” says Friedlander. “It would tell them about their company, their machines, and help their people.”

Over the next few months, IBM hopes to start a pilot project using its own data centers and to invite other companies to join in.


Photo: Seismograph records a 2007 earthquake in Japan. (Macten/Flickr)


Rumor: Apple Purchased Face ID Firm ‘Polar Rose’

Your next iPhone’s interface could get more in your face with Apple’s acquisition of a face-recognition company, according to reports.

Apple has acquired Polar Rose, a Swedish augmented-reality firm, according to multiple independent news outlets. TechCrunch claims that Polar Rose sold for $22 million.

Apple did not immediately respond to a request for comment.

Wired.com earlier reported on a conceptual smartphone app co-developed by Polar Rose called Recognizr, an augmented reality application designed to identify a person just by taking a photo of them.

Demonstrated in the video below, the conceptual Recognizr app uses recognition software to create a 3-D model of a person’s mug and transmit it to a server, which matches it against images stored in a database and shoots back the subject’s name along with links to his or her social networking profiles.

The acquisition of Polar Rose follows a recent patent application filed by Apple for a security feature that would let the iPhone listen to your heartbeat or scan your face to identify its rightful owner.



Make Clothes Out of a Can With Spray-On Fabric

Photo: Caroline Prew/Imperial College London.

(Editor’s note: A link in this story that is not safe for work is marked NSFW.)

Tight-fitting T-shirts and hipster jeans would get even snugger if you could just spray them on.

That idea just got a little less far-fetched. A liquid mixture developed by Imperial College London and a company called Fabrican lets you spray clothes directly onto your body, using aerosol technology.

After the spray dries, it creates a thin layer of fabric that can be peeled off, washed and reworn.

“When I first began this project I really wanted to make a futuristic, seamless, quick and comfortable material,” Manel Torres, a Spanish fashion designer and academic visitor at Imperial College, said in a statement. Torres worked with Paul Luckham, a professor of particle technology at Imperial College, to create the material.

“In my quest to produce this kind of fabric, I ended up returning to the principles of the earliest textiles such as felt, which were also produced by taking fibers and finding a way of binding them together without having to weave or stitch them,” says Torres.

Clothes designed using the spray-on fabric will be shown at the Science in Style fashion show next week at Imperial College.

Spray-painting the body has been around for a while, and you can even get spray-on latex body paint (NSFW). And who can forget the amazing spray-on hair, a staple of Ronco infomercials in the 1980s? But these are illusions, tricks to deceive the eye. The spray-on fabric, in contrast, is lightweight and can be stored in your closet with other clothes.

The spray-on fabric consists of short fibers that are combined with polymers to bind them together and a solvent that delivers the fabric in liquid form. The solvent evaporates when the spray touches the surface.

The fabric is formed by cross-linking fibers, which cling to one another to create the garment, says Fabrican.

The spray-on fabric is pretty versatile. It can be created in many colors and can use different types of fibers, ranging from natural to synthetic, says the company.

The spray can be applied using a high-pressure spray gun or an aerosol can. The texture of the fabric changes according to the type of material — such as wool, linen or acrylic — and how the spray is layered on the body.

Fabrican says the technology is not just for fashion: it could have innovative uses in medicine, such as layering bandages onto the skin without disturbing the wound.

The technology is still in the prototype stage, and some kinks need to be worked out, such as the strong smell of solvent around the fabric. The researchers estimate that it will be at least a few years before it is ready for commercial use.

Another challenge is to find a way to use the spray to create clothes that aren’t very snug. After all, with all the obesity in America, the sprayed-on look for clothes might not work for everyone.

Check out the video below showing how to create a spray-on scarf.


[Via The Daily Mail]


How Context-Aware Computing Will Make Gadgets Smarter

Small always-on handheld devices equipped with low-power sensors could signal a new class of “context-aware” gadgets that are more like personal companions.

Such devices would anticipate your moods, be aware of your feelings and make suggestions based on them, says Intel.

“Context-aware computing is poised to fundamentally change how we interact with our devices,” Justin Rattner, Intel’s chief technology officer, told attendees at the company’s developer conference.

“Future devices will learn about you, your day, where you are and where you are going to know what you want,” he added. “They will know your likes and dislikes.”

Context-aware computing is different from the simple sensor-based applications seen on smartphones today. For instance, consumers today go to an app like Yelp and search for restaurants nearby or by cuisine and price. A context-aware device would have a similar feature that would know what restaurants you have picked in the past, how you liked the food and then make suggestions for restaurants nearby based on those preferences. Additionally, it would be integrated into maps and other programs on the device.
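The kind of preference-aware ranking described above can be sketched in a few lines. The restaurants, past ratings and scoring weights here are all invented for illustration; a real device would fold in many more context signals.

```python
# Toy sketch of context-aware restaurant ranking: combine a user's past
# ratings by cuisine with current distance. All data here is invented.

past_ratings = {"thai": [5, 4], "pizza": [2], "sushi": [5, 5, 3]}

nearby = [
    {"name": "Thai Garden", "cuisine": "thai", "miles": 0.4},
    {"name": "Slice House", "cuisine": "pizza", "miles": 0.2},
    {"name": "Sushi Go", "cuisine": "sushi", "miles": 1.5},
]

def score(place):
    ratings = past_ratings.get(place["cuisine"], [3])  # unknown: neutral
    avg = sum(ratings) / len(ratings)
    return avg - place["miles"]  # better-liked and closer ranks higher

ranked = sorted(nearby, key=score, reverse=True)
print([p["name"] for p in ranked])
```

The shape is the point: instead of making the user search, the device scores candidates against learned preferences, and the same scoring could plug into the maps and other programs on the device.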

Researchers have been working for more than two decades on making computers more in tune with their users. That means computers would sense and react to the environment around them. Done right, such devices would be so in sync with their owners that they would feel like natural extensions of them.

“The most profound technologies are those that disappear,” Mark Weiser, chief scientist at Xerox PARC and father of the term “ubiquitous computing,” wrote in 1991. “They are those that weave themselves into the fabric of everyday life.”

Making this possible on PCs has proved to be challenging, says Rattner. But the rise of smartphones and GPS-powered personal devices could change that.

“We now have the infrastructure needed to make context-aware computing possible,” says Rattner.

The next step is smarter sensors, say Intel researchers. Today, while smartphones come equipped with accelerometers and digital compasses, the data gathered from these sensors is used only for extremely basic applications.

“Accelerometers now are used to flip the UI,” says Lama Nachman, a researcher at Intel. “But you can go beyond that and start sensing human gait and user behavior.”

For instance, sensors attached to a TV remote control can collect data on how the remote is held by different users and build profiles based on that. Such a remote, of which Intel showed a prototype at the conference, could identify who’s holding the remote and offer recommendations for TV shows based on that.

Overall, context-aware devices will have to use a combination of “hard-sensing,” or raw physical data about a user (such as where you are), and “soft-sensing” information about the user, such as preferences and social networks, to anticipate needs and make recommendations. This creates the cognitive framework for managing context.

On the hardware side, context-aware computing will call for extremely energy-efficient sensors and devices. Devices will also have to change their behavior, says Rattner.

“We can’t let devices go to sleep and wake them up when we need them,” he says. “We will need to keep the sensory aspects on them up and running at all times and do it at minimum power.”

So far, context-aware computing hasn’t found commercial success, says Intel. But as phones get smarter and tablets become popular, the company hopes users will have a device where apps disappear and become part of the gadget’s intelligence.


Photo: Intel CTO Justin Rattner holds up a prototype sensor that could help enable context-aware computing in devices. (Priya Ganapati/Wired.com)


Pyramids, Nanowires Show Two Futures for Artificial Skin


Video: Stanford University News Service

Making artificial limbs that can perform gross motor functions is relatively easy. Fine motor actions are harder, and wiring the limbs into the nervous system is harder still. But researchers at Berkeley and Stanford are crossing the real frontier: making artificial skin that can touch and feel.

Research teams at Berkeley and Stanford recently announced breakthroughs in producing highly touch-sensitive artificial skin. In both cases, an extremely thin layer of plastic or rubber is bonded to electronic elements arranged in micropatterns, so the skin can retain flexibility and elasticity while still transmitting a strong signal. The papers appear in a forthcoming issue of the journal Nature Materials.

At Berkeley, the team used germanium-silicon nanowires, which they compare to microscopic “hairs” on the filmy plastic skin. The Stanford team paired electrodes in a pyramid pattern, which communicate through a thin rubber film (total thickness of the artificial skin, including the rubber layer and both electrodes: less than one millimeter). They also created a flexible transistor, again to retain elasticity.

The density and sensitivity of the electrical transmitters allows the skin to detect and transmit extremely precise patterns and delicate pressure — essential for activities such as typing, handling coins, cracking an egg, loading and unloading dishes, or anything that requires a gentle touch rather than sheer mechanical force.
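At the software level, reading such a sensor array reduces to scanning a grid of pressure values and separating real presses from background noise. A toy sketch with invented values follows; real e-skins report analog changes in current or capacitance per element rather than tidy integers.

```python
# Sketch: reading a pressure-sensor grid like those in these e-skins
# and picking out where, and how hard, it is being touched.
# The grid values are invented for illustration.

grid = [
    [0, 0, 1, 0],
    [0, 9, 2, 0],
    [0, 3, 0, 0],
    [0, 0, 0, 7],
]

def touches(grid, threshold=2):
    """Return (row, col, pressure) for cells pressed above threshold."""
    return [(r, c, v)
            for r, row in enumerate(grid)
            for c, v in enumerate(row)
            if v > threshold]

print(touches(grid))  # real presses stand out from low-level noise
```

Raising the threshold trades sensitivity for noise rejection, which is exactly the balance the density and precision of these new sensor arrays improves.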

The sensors could also be used in nonprosthetic applications. Benjamin Tee, a Stanford graduate student, notes that an automobile’s steering wheel could be fitted with pressure-sensitive sensors that could detect whether or not a drunk or sleeping driver’s hands had slipped from the wheel.

It’s difficult to tell at this point which team’s approach might be better suited to particular applications. The Berkeley team touts its skin’s low energy use, the Stanford team its skin’s extreme sensitivity.

There’s also a sobering link between the two projects. Both Berkeley’s and Stanford’s research was indirectly supported by the Department of Defense — Berkeley’s by Darpa, and Stanford’s by the Office of Naval Research. The past decade has seen tremendous advances in artificial-limb technology, due in no small part to the number of veterans returning from Iraq or Afghanistan after losing arms or legs, or with major burns.

This in turn is partly a function of the previous decade’s advances in body armor, which have saved lives at the cost of limbs. Let’s hope that as these wars finally end, the drive to improve the lives of everyone with limb differences continues.


Photo: An optical image of a fully fabricated e-skin device with nanowire active-matrix circuitry. Each dark square represents a single pixel. (Ali Javey and Kuniharu Takei)


Sources:

  • “Engineers make artificial skin out of nanowires,” Berkeley News
  • “Stanford researchers’ new high-sensitivity electronic skin can feel a fly’s footsteps,” Stanford Report



Text-Free Computers Find Work for India’s Unlettered

Much to newspapers’ chagrin, these days everyone advertises and looks for work online. But how do you find work if you can’t read? Here, the new generation of touchscreen computers is light-years ahead of newsprint.

That’s the premise of Indian jobs site Babajob.com, with help from Microsoft Research’s ethnographic UI expert Indrani Medhi.

Besides the informal labor market, Medhi has also deployed and studied the use of text-free interfaces in mapping, mobile banking, and disseminating health information. Since many parts of the developing world are adopting mobile phones without books or traditional PCs, the implications of widespread text-free mobile computing applications are tremendous.

Medhi’s research is not just technological but anthropological, as the “ethnographic UI” phrase implies. Speech, for instance, is preferred over multimedia and video by her study subjects. The presence or absence of computing devices in the home has class implications. Medhi writes that her team is “also trying to understand characteristics of the cognitive styles of those with little formal education and their implications for UI design for this population.” Hindi, for instance, is read from left to right, like English. It’s natural for us to arrange pictures from left to right to show chronology or causality. It’s not necessarily intuitive to a nonreader.

The demo video above of Babajob’s text-free interface is in Hindi, without subtitles, but it’s not hard to make out what’s happening. (If you want to skip to the site in action, go to 2:50.) A middle-class couple is looking for domestic help. Meanwhile, one woman convinces another (who can’t read) that she can use a computer to find work. At the end, they find each other. Such a simple, happy story is easy to understand without letters or language.



Your Lost Gadgets Will Find Each Other

Graphic by Christine Daniloff, via MIT News Office

Sometimes when one of my remotes is missing, I interrogate the others: “Where’s your friend? I know you know something!” In the future, with wireless positioning systems, a version of that method might actually almost work.

Researchers at MIT’s Wireless Communications and Network Sciences Group think networks of devices that communicate their positions to each other will work better than all of the devices transmitting to a single receiver. The latter is how GPS works, and if you’ve used it, you know it isn’t always very precise. In the lab, MIT’s robots can spot a wireless transmitter to within a millimeter.

This seems almost intuitive: the more “eyes” you have on an object, the easier it is to triangulate — the robot version of “the wisdom of crowds.” But the key conceptual breakthrough here isn’t actually the number of transmitters or their network arrangement, but what they’re transmitting. MIT News’s Larry Hardesty writes:

Among [the research group’s] insights is that networks of wireless devices can improve the precision of their location estimates if they share information about their imprecision. Traditionally, a device broadcasting information about its location would simply offer up its best guess. But if, instead, it sent a probability distribution — a range of possible positions and their likelihood — the entire network would perform better as a whole. The problem is that sending the probability distribution requires more power and causes more interference than simply sending a guess, so it degrades the network’s performance. [The] group is currently working to understand the trade-off between broadcasting full-blown distributions and broadcasting sparser information about distributions.
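The payoff of sharing imprecision can be illustrated with one-dimensional inverse-variance weighting, the textbook way to fuse Gaussian estimates: a confident neighbor pulls the combined estimate harder than an unsure one. This is only an illustration of the principle, not the MIT group’s algorithm.

```python
# Sketch: fusing neighbors' position estimates by inverse-variance
# weighting. Each neighbor reports (estimate, variance) rather than a
# bare guess; confident reports pull the fused estimate harder.
# Illustrative only -- not the MIT group's actual algorithm.

def fuse(estimates):
    """estimates: list of (position, variance) pairs (1-D for clarity)."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * pos for (pos, _), w in zip(estimates, weights))
    fused /= sum(weights)
    fused_var = 1.0 / sum(weights)  # combined uncertainty shrinks
    return fused, fused_var

# One confident neighbor (variance 1) and one noisy one (variance 9).
pos, var = fuse([(10.0, 1.0), (20.0, 9.0)])
print(round(pos, 2), round(var, 2))  # pulled toward the confident report
```

Note that the fused variance comes out smaller than either input’s, which is exactly why sharing distributions, when the network can afford the extra transmission, beats sharing bare guesses.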

Much of this research is still theoretical, or has only been deployed in lab settings. But Princeton’s H. Vincent Poor is optimistic about the MIT group’s approach: “I don’t see any major obstacles for transferring their basic research to practical applications. In fact, their research was motivated by the real-world need for high-accuracy location-awareness.” Like precisely which cushion my remote control is underneath.

Warning: Very Dry Flash Video Of Robots Finding Things Follows



ARM’s New Chip Leaves Everyone Else in the Dust, Again

ARM Cortex A-15 MPCore image via ARM

Almost all high-profile mobile devices use a version of ARM’s microprocessor. Samsung, Texas Instruments and Qualcomm compete to get their chips into different devices, and Apple now makes its own, but all of them license the underlying tech from ARM. Now ARM has announced its next-generation Cortex chip, the A-15, and it’s a doozy.

The new chip was announced at a press conference last night in San Francisco. Eric Schorn, ARM’s vice president of marketing, said, “Today is the biggest thing that has happened to ARM, period.” The chips, which will support up to four processing cores, should appear in consumer devices sometime in 2012.

The big breakthrough for the Cortex A-15 is virtualization. For instance, Samsung’s new Orion chip, which is based on ARM’s Cortex A-9, can send different video images to multiple screens. The A-15 can actually support different operating systems or virtual appliances on those screens. So when VMware Fusion finally hits your iPad, it might really have something to work with.

Hardware virtualization has traditionally been the hallmark of chips designed to power servers, which frequently have to support different environments; with this chip, ARM is bringing a little bit of the server’s versatility to the smartphone, and (it hopes), some of the power-conserving elements of smartphone chips to servers.

Finally, there are the markets everywhere in between: tablets, laptops and home media servers, among others. Om Malik calls the A-15 “a tiny chip with superpowers.” That might not be far off.
