We can handle the imaginary terror of UFOs and nightmarish flying mammals. But robots that can jump like a human and then glide like a colugo? Now you’re just filling Mr. Spielberg with even more sci-fi, end-of-days fodder. Carnegie Mellon researchers Matthew Woodward and Metin Sitti have crafted a prototype jumping and gliding bot at the university’s NanoRobotics Lab that springs into action using a pair of human knee-like joints. The automated hijinks don’t end there either, as the duo’s invention then spreads its legs to catch some air and glide on back to terra firma. The project isn’t just some bit of engineering whimsy; the team plans to adapt this tech for use in “unstructured terrain” — i.e. non-level, wargadget territory. For now, this lord of the leaping gliders can reach comfortable human-sized heights of up to six feet. Give it some time, however, and we’re sure this lil’ android’ll give Superman a bound for his money. Click on past the break for a real world demo.
Does the Uncanny Valley extend to re-creations of our four-legged friends? We’ll find out soon enough if Yasunori Yamada and his University of Tokyo engineering team manage to get their PIGORASS quadruped bot beyond its first unsteady hops and into a full-on gallop. Developed as a means of analyzing animals’ musculoskeletal systems for use in biologically inspired robots, the team’s cyborg critter gets its locomotion on via a combo of CPU-controlled pressure sensors and potentiometers. It may move like a bunny (for now), but each limb’s been designed to function independently in an attempt to simulate a simplified neural system. Given a bit more time and tweaking (not to mention a fine, faux fur coating), we’re pretty sure this wee bitty beastie’ll scamper its way into the homes of tomorrow. Check out the lil’ fella in the video after the break.
Google’s Android platform is shooting for the moon.
NASA sent two Android-powered Nexus S smartphones into space with the last manned space shuttle, Atlantis, on the STS-135 mission. The pair of smartphones was used to investigate how humans and robots can work together in space more efficiently.
On the mission, the phones were used to help control SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellites), small robotic satellites originally developed at the Massachusetts Institute of Technology. The SPHERES handle chores like recording video and capturing sensor data, errands that once required astronauts, and they carry their own power, computing, propulsion and navigation systems. Built-in expansion ports allow a variety of additional sensors and devices, like cameras, to be attached.
Another group of researchers, from Great Britain, hopes to send a smartphone-powered satellite into low Earth orbit before the year’s end. This experiment differs from NASA’s, however, in that it’s primarily testing how well the guts of a smartphone can stand up to the extreme conditions of space. And last year, a pair of Nexus Ones was sent 30,000 feet into the air as the payload of a small rocket. One was destroyed when its parachute failed, but the other safely glided to Earth, capturing two and a half hours of video footage.
Why Android over iOS, or another smartphone platform? NASA thought an Android device would be a good fit since it’s open source. Google’s engineers even wrote a sensor-logging app that NASA ended up using on the mission (and it can be downloaded from the Android Market, if you’re interested).
Check out the video below to see the Nexus S and the SPHERES in action.
Barobo’s iMobot modular robotics system launched earlier this year, and the folks behind it started shipping the first kits last month. Turns out, they were on hand here at NEXT Aarhus with a bona fide unit, and we couldn’t resist a quick demo. The actual kit is being hawked primarily to universities — we’re told that each $2,000 robot can be programmed to do just about anything, and if you stock up on a couple, you can produce full-on humanoids, a camera-toting rescue snake or something else that’ll undoubtedly take over the world in just a few centuries. Each robotic piece is equipped with WiFi and Bluetooth, and aside from mounting points used for connecting family members, there are a couple of sensor ports that allow rangefinders and proximity modules to be stacked on as well.
The real show, however, happened when Elmo’s long-lost cousin made an appearance. We’re told that the creature is strictly a prototype using miniaturized versions of the robotic pieces that are on sale now. Those minis aren’t up for order per se, but cutting the right check might land you with more than a smile. As you’ll see in the video past the break, the software program written for the bear allowed it to “learn” movements that were dictated by the human holding it, and once the latest cheer was burned into its cotton-filled brain, a simple button press played things out in fantastical fashion. Have a look. You won’t be disappointed.
MABEL the running robot has been training hard, grabbing the title of “fastest bipedal robot with knees.” Like any great sports star, it’s been plagued by many dream-crushing obstacles and injuries, but this time it’s done it: running at a speed of 6.8 miles per hour on a track. Jessy Grizzle, professor at the University of Michigan’s Department of Electrical Engineering and Computer Science, attributes this bot’s success to its human-like weight distribution — a heavier torso and flexible legs with springs similar to tendons for movement “like a real runner.” This bipedal technology, which can mimic a human’s ability to run and climb over obstacles, may be used to help the disabled walk again, in rescue situations or as the basis of future vehicles that don’t require roads or wheels to drive. If MABEL doesn’t make the SWAT team this year, it can most certainly snag a spot as an extra in the next Transformers movie. Check out the PR and video of this modern day robo-Flo-Jo after the break.
If you fancy yourself a Stanford (wo)man, but lack the requisite dollars to actually attend, now’s your chance to collect those collegiate bragging rights. Starting October 10th, you can join Professor Sebastian Thrun and Google’s Director of Research, Peter Norvig, in a free, online version of the school’s Introduction to Artificial Intelligence course. The class covers “knowledge representation, inference, machine learning, planning and game playing, information retrieval, and computer vision and robotics,” and ambitiously aims to be the largest online AI course ever taught. If you’re feeling the ole red and white, you can register at the source link below, but if you’re looking for the official Stanford stamp of approval, we’re afraid you’re barking up the wrong tree — non-students will receive a certificate of completion from the instructors only. Still interested? Check out the video introduction after the break and hit the source for more details.
Foxconn chairman Terry Gou addresses journalists at a product-testing facility during a media tour of the factory. Photo courtesy of Thomas Lee
Foxconn has a reputation as the maker of our much-beloved iDevices. It also has a reputation for inhumane living and working conditions for employees at its Shenzhen-based plants.
Now the company plans to bring in robots to handle more of the work: one million of them, in fact, hopefully all in place within the next three years. The robots will take over mundane jobs such as welding, spraying and assembling, which humans currently do. Today, Foxconn uses 10,000 robots to supplement its 1.2 million human workers in its production process.
Foxconn CEO Terry Gou said in a statement Friday that he wanted to shift the company’s employees “higher up the value chain, beyond basic manufacturing work.” This would enable the Shenzhen factory to improve its overall working conditions, and create increasingly sophisticated products, he said. IDG News was first to report the news.
The worker conditions in China’s Foxconn industrial compound have come under scrutiny in the past few years, following the suicides of 17 workers and other suicide attempts. Workers have described the conditions as much like working in a “prison” or a “cage.”
Foxconn’s horror stories are symptomatic of a larger problem in China’s components industry, where factory employees reportedly endure harsh working conditions comparable to a sweatshop. Hourly wages of less than a dollar, illegal overtime hours and firings without notice are common among most gadget factories, according to a six-month investigation by GlobalPost.
Workers, whose overtime (according to Chinese labor laws) should not exceed 36 hours per month, averaged between 50 and 80 overtime hours each month. Besides the grueling hours, workers who made a mistake were often humiliated rather than simply reprimanded. Foxconn is not the only factory whose workers endure such conditions, but due to its connection with Apple, it is probably the most notable. The company says it now has a 24-hour hotline in place, nets surrounding many buildings and a new policy that caps the work week at 60 hours.
Manufacturing robots and humans typically do not work side-by-side in industrial facilities due to the possibility of injury or death to human workers. Current manufacturing robots are unable to sense the whereabouts of humans wandering nearby, but researchers are working to fix that problem.
Will increasing the number of robots in Foxconn’s factories (by a factor of 100) help solve the company’s worker woes?
If the company does in fact shift workers from assembly line manufacturing positions to higher level roles, perhaps workers would be happier — as long as those roles involved increased responsibility and a more varied daily schedule. But would those workers be skilled enough for more advanced positions? Will the company actually spend time and capital training workers in these new or different roles?
It would certainly be easier for Foxconn to just lay the affected workers off: Then money is saved, any overcrowding-related issues are resolved, and working conditions could theoretically improve for the remaining workers. Historically, robots tend to just replace human workers in factory settings rather than complement their duties. They are more efficient than their human counterparts, and don’t require costly things like food, lodgings, or even a paycheck (maybe just some routine maintenance and a bit of supervision).
Hopefully Foxconn can find a solution that doesn’t involve laying off thousands, or hundreds of thousands, of its workers.
Some robots like to help around the house, others fulfill your Pixar fantasies, but this one’s just training to boogie. Part of a summer-long research project, DARwIn-OP is taking a master class in Dance Dance Revolution from its amateur roboticist Geppetto. Perched atop a homestyle DDR pad, the Batman-like doppelganger bot does more of a slow shuffle step than full-on Running Man thanks to a slight bout of vertigo — hence the balance bar. Once that minor kink gets straightened out, expect to see this dancefloor maniac add visual input to its repertoire — letting televised arrows be its coordinated dance-off guide. Jump past the break for a video demo of the open platform automaton in action.
What do you get when you cross a DJ with a “Canadian roboticist”? An almost true-to-fiction Wall-E, that’s what. In this rendition of garbage-bot gone cute, amateur robotics enthusiast DJ Sures (yes, he makes music) hollowed out a U-Command Wall-E toy and fixed him up with some servo guts. The voice-activated, semi-autonomous modjob has a built-in eye camera that recognizes motion, colors and faces, coming the closest we’ve seen to replicating the CG romantic. The whole AA-battery-powered affair runs on the EZ-B Robot Controller software shown off by Sures in the video below. And unlike other past re-creations, this little guy knows how to get down without the need for sped-up video tricks. Clearly, the Pixar-bred bot’s become the unofficial icon of the homebrew robotics community, so where’s his official counterpart? You listening, Disney? Get cracking.
What better way to combine your nerdy loves of computer programming and Star Wars than with a robot that can actually battle with a lightsaber?
This is “JediBot,” a Microsoft Kinect–controlled robot that can wield a foam sword (lightsaber, if you will) and duel a human combatant for command of the empire. Or something like that.
“We’ve all seen the Star Wars movies; they’re a lot of fun, and the sword fights are one of the most entertaining parts of it. So it seemed like it’d be cool to actually sword fight like that against a computerized opponent, like a Star Wars video game,” graduate student Ken Oslund says in the video above.
The world of dynamic robotics and AI has been immensely aided by the affordable, hackable Microsoft Kinect. The Kinect packs multiple cameras and infrared light sensors, which make recognizing, analyzing and interacting with a three-dimensional moving object — namely, a human — much simpler than in the past. Microsoft recently released the SDK for the Kinect, so we should be seeing increasingly useful and creative applications of the device. The KUKA robotic arm in the video above is traditionally used in assembly-line manufacturing, but you may remember it from a Microsoft Halo: Reach light-sculpture video last year.
According to the course overview (.pdf) for the “Experimental Robotics” course, the purpose of the laboratory-based class is “to provide hands-on experience with robotic manipulation.” Although the other groups in the class used a PUMA 560 industrial manipulator, the JediBot design team, composed of four graduate students including Tim Jenkins and Ken Oslund, got to use a more recently developed KUKA robotic arm. This final project for the course, which they got to choose themselves, was completed in a mere three weeks.
“The class is really open-ended,” Jenkins said. “The professor likes to have dynamic projects that involve action.”
The group knew they wanted to do something with computer vision so a person could interact with their robot. Given the resources available, the group decided to use a Microsoft Kinect for that task rather than a standard camera. The Kinect was used to detect the position of the JediBot opponent’s green sword-saber.
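If you’re wondering what that detection step boils down to, here’s a minimal sketch in C++ with OpenCV: threshold the color frame for green, take the centroid of the resulting blob, then read the depth map to back-project that pixel into 3D. The function name, color thresholds and camera intrinsics are our own illustrative placeholders, not the Stanford team’s code, and it assumes the Kinect’s depth map has already been registered to its color image.

```cpp
#include <opencv2/opencv.hpp>

// Minimal sketch: find the opponent's green foam saber in a Kinect color frame
// and back-project its centroid into 3D using the registered depth map.
// Thresholds, intrinsics (fx, fy, cx, cy) and the function name are illustrative.
bool findGreenSaber(const cv::Mat& bgr, const cv::Mat& depthMeters,
                    float fx, float fy, float cx, float cy, cv::Point3f& saber)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    // Keep only pixels whose hue falls in a rough "green" band.
    cv::inRange(hsv, cv::Scalar(40, 80, 80), cv::Scalar(80, 255, 255), mask);

    // Use the centroid of the green pixels as the saber's image position.
    cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
    if (m.m00 < 500.0) return false;                 // not enough green in view
    int u = static_cast<int>(m.m10 / m.m00);
    int v = static_cast<int>(m.m01 / m.m00);

    float z = depthMeters.at<float>(v, u);           // depth at that pixel, in meters
    if (z <= 0.0f) return false;                     // no valid depth reading
    // Pinhole back-projection from pixel (u, v) and depth z to camera coordinates.
    saber = cv::Point3f((u - cx) * z / fx, (v - cy) * z / fy, z);
    return true;
}
```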
The robot strikes using a set of predefined attack motions. When it detects a hit, that is, when its foam lightsaber comes in contact with its opponent’s and puts torque on the robotic arm’s joints, it recoils and moves on to the next motion, switching from move to move every one or two seconds.
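In code terms, that attack mode is little more than a loop over canned moves with a torque check, roughly like the C++ sketch below. Here playMotion(), maxJointTorque(), recoil() and the torque threshold are hypothetical stand-ins for whatever interface the team had to the KUKA arm, so treat this as an illustration of the idea rather than their implementation.

```cpp
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>

// Illustrative only: a bare-bones version of the attack loop described above.
// The three helpers are stubs standing in for the real robot interface.
enum class AttackMove { HighSlash, SideSlash, Thrust, COUNT };
constexpr double TORQUE_HIT_THRESHOLD = 5.0;   // Nm, assumed value

void playMotion(AttackMove m) { std::cout << "attack " << static_cast<int>(m) << "\n"; }
double maxJointTorque()       { return 0.0; }  // stub: read joint torques here
void recoil()                 { std::cout << "recoil\n"; }

int main()
{
    using clock = std::chrono::steady_clock;
    while (true) {
        // Pick one of the predefined attack motions at random.
        playMotion(static_cast<AttackMove>(std::rand() % static_cast<int>(AttackMove::COUNT)));

        // Hold each move for one to two seconds, bailing out early on a hit.
        auto deadline = clock::now() + std::chrono::milliseconds(1000 + std::rand() % 1000);
        while (clock::now() < deadline) {
            if (maxJointTorque() > TORQUE_HIT_THRESHOLD) {
                recoil();          // swords clashed: pull back
                break;             // and move on to the next attack
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
        }
    }
}
```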
“The defense mechanics were the most challenging, but people ended up enjoying the attack mode most. It was actually kind of a gimmick and only took a few hours to code up,” Jenkins said.
The project utilized a secret weapon not apparent in the video: a special set of C/C++ libraries developed by Stanford visiting entrepreneur and researcher Torsten Kroeger. Normally, the robot would need to plot out the entire trajectory of its motions from start to finish — preplanned motion. Kroeger’s Reflexxes Motion Libraries let you make the robot react to events, like collisions and new data from the Kinect, by simply updating the target position and velocity; the libraries compute a new trajectory on the fly in less than a millisecond.
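To give a flavor of what that usage pattern looks like, here’s a bare-bones control loop in C++ modeled on the publicly documented Type II Reflexxes examples. The seven degrees of freedom, the one-millisecond cycle time and the velocity and acceleration limits are assumed values, and the commented-out readKinectTarget() and sendToArm() hooks are hypothetical application code, so consider this a sketch of how the library is typically driven rather than JediBot’s actual source.

```cpp
#include <ReflexxesAPI.h>
#include <RMLPositionFlags.h>
#include <RMLPositionInputParameters.h>
#include <RMLPositionOutputParameters.h>

static const unsigned int DOFS = 7;       // assumed: a 7-joint KUKA-style arm
static const double CYCLE_TIME = 0.001;   // seconds per control cycle

int main()
{
    ReflexxesAPI rml(DOFS, CYCLE_TIME);
    RMLPositionInputParameters  ip(DOFS);
    RMLPositionOutputParameters op(DOFS);
    RMLPositionFlags flags;

    for (unsigned int i = 0; i < DOFS; ++i) {
        ip.SelectionVector->VecData[i]       = true;   // control every joint
        ip.MaxVelocityVector->VecData[i]     = 1.0;    // rad/s, assumed limit
        ip.MaxAccelerationVector->VecData[i] = 2.0;    // rad/s^2, assumed limit
    }

    while (true) {
        // Whenever the Kinect reports a new saber position, just overwrite the
        // target; Reflexxes recomputes a smooth trajectory within one cycle.
        // readKinectTarget(ip.TargetPositionVector, ip.TargetVelocityVector);

        int result = rml.RMLPosition(ip, &op, flags);
        if (result < 0) break;                         // negative values are errors

        // sendToArm(op.NewPositionVector);            // command the arm (stubbed out)

        // Feed the new state back in as the current state for the next cycle.
        *ip.CurrentPositionVector     = *op.NewPositionVector;
        *ip.CurrentVelocityVector     = *op.NewVelocityVector;
        *ip.CurrentAccelerationVector = *op.NewAccelerationVector;
    }
    return 0;
}
```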
This allows JediBot to respond to sensor events in real time, and that’s really the key to making robots more interactive.
Imagine a waiterbot with the reflexes to catch a falling drink before it hits the ground, or a karate robot you can spar against for practice before a big tournament.
I doubt anyone will be buying their own KUKA robotic arm and building a sword-fighting robot like JediBot at home, but innovations like this, pairing interactive controllers with tools like the Reflexxes Motion Libraries for real-time physical responses, could lead to robots that interact with us far better in daily life.