Sure, it looks just about like every other Arduino board found at Maker Faire, but this one’s special. How so? It’s Google-branded, and not only that, but Google-endorsed. Shortly after the search giant introduced its Android Open Accessory standard and ADK reference hardware, a smattering of companies were already demonstrating wares created around it. Remote-control robots? Check. Nexus S-controlled gardens? Check. A laughably large Labyrinth? Double check. It’s already clear that the sky’s the limit with this thing, and we’re as eager as anyone to see ’em start floating out to more developers. Have a look in the gallery for close-ups of the guts, and peek past the break for a video of the aforementioned Xoom-dictated Labyrinth.
Although we usually prefer our computers to be perfect, logical, and psychologically fit, sometimes there’s more to be learned from a schizophrenic one. A University of Texas experiment has doomed a computer with dementia praecox, saddling the silicon soul with symptoms that normally only afflict humans. By telling the machine’s neural network to treat everything it learned as extremely important, the team hopes to aid clinical research in understanding the schizophrenic brain — following a popular theory that suggests afflicted patients lose the ability to forget or ignore frivolous information, causing them to make illogical connections and paranoid jumps in reason. Sure enough, the machine lost it, and started spinning wild, delusional stories, eventually claiming responsibility for a terrorist attack. Yikes. We aren’t hastening the robot apocalypse if we’re programming machines to go mad intentionally, right?
The Pneuborn-7II is what a 7-month-old infant would look like if it were a robot.
Sometimes I don’t know what’s worse: robots with a face, or robots without one.
Especially when it’s crawling towards you crying “ma-ma!” as it rises up to standing height on its spindly metallic legs.
Researchers at Japan’s Hosoda Laboratory developed Pneuborn-7II and Pneuborn-13, a pair of musculoskeletal infant robots. Their names come from the fact that they use pneumatic muscles as actuators (and in case you were wondering, they don’t actually say “ma-ma!”, as far as I know).
Pneuborn-7II is the size of a 7-month-old child, weighing in at 11.9 lbs and measuring 31 inches tall. It was developed to “study the relationship between motor development and embodiment.” Pneuborn-7II is completely autonomous, and has 19 pneumatic muscles, including a spine with three pitch and yaw joints. An algorithm based on central pattern generators (CPGs) was optimized so that the robot can crawl without actual artificial intelligence or advanced sensors.
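For readers who haven’t run into the term, a central pattern generator is essentially a set of coupled oscillators that spits out rhythmic muscle commands with no sensing or planning in the loop. Here’s a minimal sketch of the idea in Python; the joint names, frequency, amplitude and phase offsets are all invented for illustration and have nothing to do with the Hosoda Laboratory’s actual controller.

```python
import math

# Toy central pattern generator: each "muscle" gets a rhythmic command from a
# shared clock, offset in phase so the limbs move in a crawling sequence.
# Joint names, frequency and phase offsets are illustrative only.
CRAWL_GAIT = {
    "left_shoulder":  0.00,   # phase offset, as a fraction of one cycle
    "right_hip":      0.25,
    "right_shoulder": 0.50,
    "left_hip":       0.75,
}
FREQ_HZ = 1.0          # one crawl cycle per second
AMPLITUDE = 0.4        # peak activation on a 0-to-1 scale

def muscle_commands(t: float) -> dict:
    """Return an open-loop activation for each joint at time t (seconds)."""
    return {
        joint: AMPLITUDE * (1 + math.sin(2 * math.pi * (FREQ_HZ * t + phase))) / 2
        for joint, phase in CRAWL_GAIT.items()
    }

if __name__ == "__main__":
    for step in range(5):
        t = step * 0.25
        print(t, {j: round(a, 2) for j, a in muscle_commands(t).items()})
```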
Pneuborn-13 is Pneuborn-7II’s 13-month-old older brother, designed so researchers can study the effect of bipedal walking on the musculoskeletal structure. It’s 29.5 inches tall and weighs a scant 8.5 lbs. Pneuborn-13 is also autonomous, but has only 18 pneumatic muscles, primarily concentrated around the ankle, knee and hip joints. It lacks a spinal column, but can still manage to get into a standing position and perform walking motions.
Unfortunately, videos haven’t been posted of the duo in action yet, but we can expect them to be uploaded sometime soon. In the meantime, you can use your imagination to picture how these Pneuborns move.
A master’s student at Georgia Tech has created a system that allows a group of robots to move into formations without communicating with the other robots they are forming shapes with. The robots have no predefined memory and no prior knowledge of their location.
In the video above, the 15 Khepera robots make independent decisions based on the same information and by trial and error move their way into a formation that has been assigned to them. They move to spell out the word “GRITS”, standing for Georgia Robotics and Intelligent Systems.
Ted MacDonald, who devised the multi-robot system, explained to Wired.co.uk that most other systems tend to split robot formation into two parts. The first is figuring out where each robot should go and the second is actually making the formation happen. He wanted to find a way to do both of these things at the same time, i.e. be moving and working out where to go simultaneously.
He told Wired.co.uk: “Imagine if a group of 10 people were asked to form the shape of a box, but weren’t allowed to speak. People would look around to see if there were any lines forming and try and find holes for them to move into.”
His robotic formation used a 3D motion-tracking camera above the robots, which could assess the position of each of the robots and then broadcast that information to all of the robots over Wi-Fi — so they each had access to the same information. They then use an Iterative Closest Point (ICP) algorithm which assesses the difference between where they are positioned currently and the formation they need to adopt.
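The article doesn’t include MacDonald’s code, but the loop it describes is easy to sketch: fit the rigid transform that best lays the target formation over the robots’ current positions, let each robot claim the nearest free slot, and nudge everyone toward their slot. The Python below is a simplified 2D illustration of that idea (greedy matching, global knowledge of positions), not the actual GRITS implementation.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rotation and translation mapping src points onto dst (2D Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp_formation_step(robots, formation, gain=0.1):
    """One iteration: align the formation to the robots, match greedily, nudge each robot."""
    robots = np.asarray(robots, dtype=float)
    R, t = best_fit_transform(formation, robots)
    targets = formation @ R.T + t         # formation points expressed in the robots' frame
    unclaimed = list(range(len(targets)))
    next_positions = robots.copy()
    for i, p in enumerate(robots):        # each robot takes the nearest free slot
        j = min(unclaimed, key=lambda k: np.linalg.norm(targets[k] - p))
        unclaimed.remove(j)
        next_positions[i] = p + gain * (targets[j] - p)
    return next_positions
```

A real system would likely use a smarter assignment than the greedy loop (the Hungarian algorithm, for instance), but the overall structure is the same.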
The robots can be placed completely randomly to start with. Each robot then makes an initial guess as to where it thinks the formation might be (perhaps noticing an early pattern emerging that resembles part of the letter). All of the robots follow the same rules and eventually converge to a solution.
The robots can be made to spell out any words in real time by following the same rules. MacDonald is now working on a variation in which a leader robot (which does not follow the algorithm) is automatically assigned a role in the formation. This means that a single robot could be controlled remotely (by a human) to lead a phalanx of other robots in a formation, which could change en route.
Clearly, in real-world situations you do not always have a motion-tracking camera, but the same effect could be achieved by the robots having GPS devices relaying their positions to the others.
MacDonald is also keen to find out what happens if the robots cannot see every other robot in the group. So far it seems that they can still get into formation if they start off close to the right shape, but he hasn’t yet been able to prove mathematically when it works and when it doesn’t.
Potential applications for the technology include moving a group of robotic vehicles from point A to point B without experiencing congestion problems (which could have uses for the military). Because of the sophisticated 3D motion tracking, the same system could be used to move aerial robots into 3D formations.
In this week’s Gadget Lab podcast, the crew toys around with phones, knives and robots.
Senior editor Dylan Tweney shares his experience visiting SRI, where he saw an awesome wall-climbing robot. Watch out, Spider-Man.
Then we recap the latest iOS software update for the iPhone, iPad and iPod Touch. It addresses the “bugs” that made the devices store too much location data.
Staff writer Mike Isaac joins the show to talk about LG’s latest dual-core smartphone offering, the mighty G2X. Verdict: It’s a powerhouse.
We wrap up the podcast with the Bear Grylls knife and machete set. They’re sharp enough to chop vegetables and bears.
We’ve been told time and time again to fear our mechanical friends, so imagine our relief when we heard that some Swiss scientists had a batch of bots that displayed altruism. What’s more, these little two-wheeled foragers weren’t programmed to share, they evolved the trait. Researchers at EPFL infused Alice microbots with digital “genes” that mutated over time as well as color sensors that allow them to navigate their environment. The robots were tasked with collecting “food” and given the option to keep it for themselves or split it amongst their silicon-brained relatives. The more they decided to give to others with similar genetic makeup the more those virtual genes were passed on to future generations — including the one for altruism. The experiment is an example of Hamilton’s Rule, an evolutionary model for how the seemingly counter-intuitive trait of selflessness could arise through natural selection. Don’t let your guard down just yet, though — the robots are only sharing with each other for now.
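For reference, Hamilton’s Rule, which gets name-checked but never stated above, is a one-line inequality: an altruistic act is favored by selection when the benefit to the recipient, discounted by how closely related the recipient is to the donor, outweighs the cost to the donor.

```latex
% Hamilton's Rule: altruism is selected for when
%   r B > C
% r = genetic relatedness between donor and recipient (0 to 1)
% B = fitness benefit to the recipient
% C = fitness cost to the donor
rB > C
```

That maps neatly onto the experiment: the foragers shared more readily with robots whose digital genes resembled their own.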
I loved that little OCD cleaner bot MO in WALL-E. I also love dogs. But somehow, combining the two just ends up…weird.
I’m talking about the “Puppy Robotic Vacuum Cleaner,” whose name is actually a bit misleading. Puppy Robotic is composed of several parts: A “mother dog” base — complete with “docking tits” (seriously!) — and four mini-Roomba style “puppy” cleaners with “docking mouths” (how else would they suckle power from their mother unit?).
Each puppy has a rolling brush and suction hole for cleaning, and a display screen that relays whether it’s in cleaning mode (a smiley face), entertainment mode (a music note), or feeding mode (what appears to be a nipple). The antenna is appropriately implemented as a tail.
From the images, it also looks like you can hook a remote control on your actual dog’s collar so that the cleaner pups trail it, sweeping up the animal’s muddy footprints as it trots along.
There’s a lot of animal-inspired robotics out there: robot smartbirds, baby robo dinos and sneaky robot snakes, to name a few. The Puppy Robotic Vacuum Cleaner appears to be the first to utilize a “docking tit” though, as far as I’m aware.
The Puppy Robotic Vacuum Cleaner is thankfully only a concept.
A wall-climbing robot at SRI International sticks to vertical surfaces using electroadhesive film. Photo: Dylan Tweney/Wired.com
MENLO PARK, California — Scientists at SRI International have figured out how to make a plastic film that can stick to walls when you apply a small electric current — then peel off effortlessly when you turn the current off.
Why? They’re not entirely sure yet, but it’s pretty cool technology.
A recent SRI project aims to use the film to stick extension ladders to walls, so they don’t fall over when you’re climbing up them.
I saw a demonstration of the technology recently at SRI’s labs here, not far from Stanford University, which spawned the think tank in 1946 and spun it off as an independent nonprofit in 1970. The organization has been home to an impressive range of breakthroughs, from Douglas Engelbart’s pioneering work on mouse-driven graphical user interfaces to surgical robots, and has spawned a number of commercially successful spinoffs.
The key to on-demand stickiness is a special polymer film with a very low power (but high voltage) circuit printed on it. Applying 7,500 volts at 50-100 microamperes of current makes the polymer sticky enough to support small loads. Turn the current off, and the stickiness, called electroadhesion, dissipates within a few seconds.
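For perspective, the back-of-the-envelope power draw at those figures is well under a watt (my arithmetic, not an SRI spec):

```python
# Rough power draw of the electroadhesive film at the figures quoted above.
VOLTS = 7_500
CURRENT_AMPS = (50e-6, 100e-6)   # 50-100 microamperes

for amps in CURRENT_AMPS:
    print(f"{VOLTS} V x {amps * 1e6:.0f} uA = {VOLTS * amps:.2f} W")
# -> 0.38 W to 0.75 W: very high voltage, but almost no power.
```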
Wrap that film around a couple of rollers, tank tread-style, and you’ve got a wall-climbing robot.
The robot shown in the video below has a footprint of about 1.5 by 2 feet, which gives it enough stickiness to lift itself (the robot weighs about 4 pounds) plus a 4-pound payload. It’s controlled by wireless signals from a game controller, though the controls are pretty limited: It can go forward (up) or backward (down). And it’s quite sticky, even on uneven surfaces like a painted cinder-block wall.
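That implies a pretty gentle grip per unit of film area. A rough check using the footprint and load figures above (my estimate, not SRI’s numbers):

```python
# Rough load per unit of film footprint for the climbing robot described above.
area_sqft = 1.5 * 2.0      # tread footprint, square feet
load_lbs = 4 + 4           # robot weight plus payload, pounds

load_per_sqft = load_lbs / area_sqft
print(f"{load_per_sqft:.1f} lb/sq ft")        # ~2.7 lb per square foot
print(f"{load_per_sqft * 47.9:.0f} Pa")       # ~130 Pa, if you prefer SI units
```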
The SRI scientist who developed this technology in 2008, Harsha Prahlad, sees it as potentially useful for wall-climbing surveillance robots, or robots that can climb buildings, bridges, or other structures to inspect them for damage in places that humans can’t easily reach.
Other applications include pick-and-place systems in warehouses: A robot arm with an electroadhesive “pad” could use it to pick up objects, then set them on conveyor belts or in boxes.
It would also make a slick wall-hanging system for photo frames or even tablets like the iPad: Turn on the adhesive pad, stick it to the wall and walk away, with no wall-disfiguring nails or screws required. With a small solar panel, you can get enough energy from ambient light to power the electroadhesive film all day, Prahlad says.
See below for a video of the wall-climbing robot in action.
Note: The two “tails” sticking down from the bottom of the robot are there to keep the robot from peeling off the wall. By acting as an angled brace, they counter the moment that tries to pitch the top of the robot away from the surface, so the film carries the load mostly in shear along the wall, which electroadhesion resists far better than a straight peel. Prahlad says that geckos’ tails function in a similar way: “If you cut off the tail from a gecko it can no longer climb.”
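A crude moment balance shows why a longer brace helps; the numbers below are invented purely for illustration. Gravity acts at the robot’s center of mass, which sits a little off the wall, and that offset creates a moment trying to peel the top of the tread away. The longer the lever arm between the peeling edge and the tail’s contact point, the smaller the peel force the film has to supply.

```python
# Toy moment balance for a wall-climber (illustrative numbers, not SRI's measurements).
weight_lbs = 8.0         # robot plus payload
com_offset_in = 2.0      # how far the center of mass sits off the wall

def peel_force(lever_arm_in: float) -> float:
    """Peel force at the top edge needed to balance the pitch-back moment."""
    return weight_lbs * com_offset_in / lever_arm_in

print(peel_force(12.0))   # tread alone, ~1 ft lever arm: ~1.3 lb of peel
print(peel_force(24.0))   # tread plus tails, ~2 ft lever arm: ~0.7 lb of peel
```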
With an 80 percent success rate, there’s a pretty good chance that Justin here is better at playing catch than you are. This old German Aerospace Center (DLR)-designed robot, which we first saw in 2009, learned a new trick — he can track thrown objects as they approach, calculate their flight path, and snap his cold, soulless hands around them before they hit the ground. Better yet, he can catch two objects at the same time. For his encore, Rollin’ Justin uses his tactile finger sensors to prepare you a cup of coffee, just so you know there’s no hard feelings once he’s done schooling you at three flies up. The ‘bot can be controlled via iPad and acts totally grateful when you get him a tie for Christmas, even though it’s not what he really wanted. Video after the break.
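DLR hasn’t shared the catching code here, but the core recipe described above (watch the ball for a few frames, fit a gravity-only arc, and send the hand to where the arc will be) is easy to sketch. Everything below, from the sample format to the pure-ballistic, drag-free model, is an assumption for illustration rather than Justin’s actual tracker.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2 (z axis points up)

def fit_ballistic(times, positions):
    """Fit initial position and velocity of a gravity-only arc to (t, xyz) samples."""
    t = np.asarray(times, dtype=float)
    p = np.array(positions, dtype=float)
    p[:, 2] += 0.5 * G * t**2              # remove the known gravity term from z
    A = np.column_stack([np.ones_like(t), t])
    coeffs, *_ = np.linalg.lstsq(A, p, rcond=None)
    return coeffs[0], coeffs[1]            # initial position p0, initial velocity v0

def predict(p0, v0, t):
    """Where the ball will be t seconds after the first sample."""
    pos = p0 + v0 * t
    pos[2] -= 0.5 * G * t**2
    return pos
```

In practice you’d re-fit as each new camera frame arrives and steer the hand toward the predicted intercept point.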
Update: Johannes sent us another video of him catching two balls with one hand! It’s after the break.
There are really no words to describe this photo, except to point out the obvious: It has a robot. And a bike. And a lady in white tights with really big hair.
In other words, this photo sums up all that is awesome and good and wonderful and yes, a bit juvenile about what we write about here on this blog. If there was ever an official photo of Gadget Lab, this would be it.