Why the Terminator Uprising (Probably) Won’t Ever Happen

When I interviewed Wired for War author PW Singer last March, he told me that the preconditions for a successful Terminator-type uprising are not in place. As computer development accelerates, however, those preconditions become far more plausible.

So, what are the preconditions, according to Singer?

1. The AI or robot has to have some sense of self-preservation and ambition, to want power or fear the loss of power.

2. The robots have to have eliminated any dependence on humans.

3. Humans have to have omitted failsafe controls, so there’s no ability to turn robots or AI off.

4. The robots need to gain these advantages in a way that takes humans by surprise.

At the moment, says Singer, these conditions do not exist. “In the Terminator movies, Skynet gets super intelligence, figures the humans are going to eventually shut it down, thinks, ‘I better strike first.'” However, in today’s army, “we’re building robots specifically to go off and get killed.” He adds, “No one is building them to have a survival instinct—they’re actually building them to have the exact opposite.”

As far as human dependence goes, robots may do more and more of humanity’s dirty work, but they still need the meatbags to handle their dirty laundry. “The Global Hawk drone may be able to take off on its own, fly on its own, but it still needs someone to put that gasoline in there.” Still, it’s not hard to see how this precondition could eventually be overcome.

The failsafe discussion is surprisingly two-sided. “It seems rather odd that people who grew up watching Terminator in the movie theaters wouldn’t think, ‘Hmm, maybe we should have a turn-off switch on there.'” But on the other hand, “brilliant AI could just figure a way around it.” Besides, “we don’t want to make the failsafe all that easy, because we don’t want a robot that comes up to Bin Laden that he can just shut off by reaching around the back and hitting the switch.”

We of course assume that robots will never gain the element of surprise. “You don’t get super-intelligent robots without first having semi-super-intelligent robots, and so on. At each one of these stages, someone would push back.” The scary thing is, Singer does acknowledge that the exponential growth of super-smart machines may indeed catch us by surprise eventually. “By the end it’s happening too quickly for people to see.”

No matter which preconditions we deliberately prevent, there is a point on every futurist’s timeline where computers become “smarter” than humans in terms of sheer brain capability, and no matter what happens up to that point, the game then changes completely. “In the Terminator movies, Skynet both tricks and coerces people into doing its bidding.” How do we stop that from happening?

“Some people say, ‘Let’s just not work on these systems. If there are so many things coming out of this that are potentially dangerous, why don’t we just stop?'” says Singer. “We could do that, as long as we also stop war, capitalism and the human instinct for science and invention.” [More from my interview with PW Singer]

Machines Behaving Deadly: A week exploring the sometimes difficult relationship between man and technology.

Philosopher ponders the implications of robot warfare, life with a degree in philosophy

H+, our favorite transhumanist magazine, has just published a chat with Peter Asaro, the author of a paper titled “How Just Could a Robot War Be?” In this interview (co-authored by our old friend R.U. Sirius) the gentleman from Rutgers explores the philosophical implications of things like robot civil war, robots and just war theory, and the possibilities of installing some sort of “moral agency” in the killer machines that our military increasingly relies on. But that ain’t all — the big thinkers also discuss the benefits of programming automatons to disobey (certain) orders, drop science on a certain Immanuel Kant, and more. We know you’ve been dying to explore the categorical imperative as it relates to the robot apocalypse — so hit that read link to get the party started!


Hack: Nintendo DS Controls Open Source Robot

Surveyor makes open source robot controllers that have quite a fan following among do-it-yourself drone enthusiasts. The company’s core product is the SRV-1, a programmable mobile robot controller that is open source, wireless and video-enabled.

Peter Dove, a UK-based developer, has created a nifty way to control the Surveyor SRV-1 module’s camera and base using a Nintendo DS game console. Dove used the built-in WiFi link to remotely control the motorized robot base and to view pictures and video transmitted by the robot. And as the video shows, the DS’s buttons work perfectly for the purpose.

The project is similar to the way some enthusiasts have used the Google Android G1 phone to control a robotic blimp.

For binaries, source code and details on how Dove brings the DS and the Surveyor module together, check out his blog.
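At its core, the hack is just a thin WiFi client talking to the robot’s command socket. Here is a minimal Python sketch of what such a client looks like; the IP address, port number and command bytes are placeholders for illustration only, not the documented Surveyor protocol, so check Dove’s code and the SRV-1 docs for the real command set.

```python
# Minimal sketch of a WiFi client for an SRV-1-style robot.
# The address, port and command bytes below are hypothetical placeholders,
# not the real Surveyor command set.
import socket

ROBOT_ADDR = ("192.168.1.100", 10001)  # hypothetical robot IP and command port

def send_command(sock: socket.socket, cmd: bytes) -> None:
    """Send a raw command to the robot over the open TCP connection."""
    sock.sendall(cmd)

def main() -> None:
    with socket.create_connection(ROBOT_ADDR, timeout=2.0) as sock:
        send_command(sock, b"FWD")    # hypothetical: drive forward
        send_command(sock, b"STOP")   # hypothetical: stop the motors
        send_command(sock, b"GRAB")   # hypothetical: request a camera frame
        frame = sock.recv(65536)      # read (part of) the returned image data
        print(f"received {len(frame)} bytes of image data")

if __name__ == "__main__":
    main()
```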


Five Reasons Why Humanoid Robots Will Someday Fight Our Wars

Robots are officially on the battlefield—UAVs like the Predator and Reaper patrol the skies while militarized bomb-disposal robots like the Talon detonate explosives on the ground. But where are the humanoids? Roboticist and author Daniel H. Wilson makes the case for a humanoid robot army.

A humanoid robot is a general-purpose robot that looks a lot like a person, complete with a head, torso, arms and legs. The “total package” humanoid can walk bipedally, like a person, and use its hands to dexterously manipulate objects in the world.

Current prototypes like the Honda ASIMO can deliver tea and politely shake hands with their human masters, but if some great sci-fi movies are to be believed, humanoid robots are supposed to be terrors on the battlefield—walking titanium endoskeletons crunching over human skulls and mowing down pesky humans with massive handheld Gatling guns.

Will we ever really see a humanoid robot army? I think so, and here are my top five reasons why.

1. There is a one-to-one mapping between the human and the humanoid body.
Robots aren’t yet smart enough to play without supervision. That’s why human soldiers control unmanned aerial vehicles from thousands of miles away by twiddling joysticks. It isn’t easy, but flying a plane through empty space is child’s play compared to maneuvering a ground-based robot through rubble and wreckage. And what if you need to do something more complicated than just stepping over a curb, like defusing a bomb?

It’s called telepresence. With telepresence, a person feels as though they are the robot by controlling the robot’s body and seeing through its eyes. Human-shaped robots are infinitely easier to manipulate because there is a one-to-one mapping between man and machine. Instead of shoving around a non-intuitive joystick, slide your hands into gloves that map your fingers to robot fingers thousands of miles away. Now put your human expertise to work, without putting your human butt in danger.
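To make that one-to-one mapping concrete, here is a rough Python sketch of the idea: each measured human joint angle is passed almost directly through to the corresponding robot joint, clamped to the robot’s limits. The joint names and limits are hypothetical placeholders, not any particular glove or robot API.

```python
# Sketch of one-to-one telepresence mapping: each human joint angle is
# passed (nearly) straight through to the matching robot joint.
# The joint names and limits are hypothetical, not a real hardware API.
from typing import Dict, Tuple

JOINTS = ["thumb", "index", "middle", "ring", "pinky"]

def map_pose(human_angles: Dict[str, float],
             joint_limits: Dict[str, Tuple[float, float]]) -> Dict[str, float]:
    """Clamp each measured human joint angle into the robot's joint limits."""
    commands = {}
    for joint in JOINTS:
        lo, hi = joint_limits[joint]
        commands[joint] = max(lo, min(hi, human_angles[joint]))
    return commands

# Example: the operator curls the index finger to 45 degrees and the robot's
# index joint (limited to 0-90 degrees) receives the same 45-degree command.
limits = {j: (0.0, 90.0) for j in JOINTS}
pose = {"thumb": 10.0, "index": 45.0, "middle": 5.0, "ring": 0.0, "pinky": 0.0}
print(map_pose(pose, limits))
```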

2. Humanoid robots take advantage of human environments and equipment.
Nothing beats a tank for crossing the desert, but what about crossing a living room? Every human city is designed for a very specific type of animal: Homo sapiens. We humans come in a very specific range of sizes and weights, and our environments tend to have specific temperature, vibration and noise limits—all of which simplify the problem of designing a robot. Humanoids are naturally suited to navigating environments designed for humans; they can walk through doorways, climb steps, and see over counters and furniture.

Along with our cities, most military supplies are designed for use by humans. That means a humanoid robot can wear human body armor, boots and camouflage. In addition, it can fire standard-issue weapons and ammunition, removing the need for specially designed weaponry. Humanoids could also potentially pilot human vehicles. Rather than creating an autonomous vehicle from scratch, just put a humanoid robot in the driver’s seat of a standard vehicle. And when a robot squad is on the go and under fire, it always helps to be able to scavenge enemy weapons and improvise. The infrastructure is there, and humanoid robots exploit it.

3. Humanoid robots are easier to train.
War is largely improvised, and that means learning new tricks on the fly. So, how do you teach a robot comrade to defuse a new type of coffee-can landmine? Without a degree in engineering, you probably don’t. But given a humanoid robot, intuitive training approaches are available to regular soldiers. An easy but tedious method is to physically push the robot’s limbs through the proper series of movements. Alternatively, take direct control through teleoperation and then perform the activity yourself. The robot then just needs to remember how you did it.

Ideally, however, a robot can be trained just like a person—by watching. Robots that learn by demonstration can be quickly trained by ordinary people who don’t speak robot-ese or do any programming; after all, that’s how we learn from each other. The trainer simply performs the task (e.g., a flying scissor kick) and the robot watches and intuits how to do it. Humanoids are much better at learning by demonstration, thanks to that one-to-one mapping between their bodies and yours.
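The push-the-limbs-through-the-motion method above boils down to recording a timed joint trajectory and playing it back later. Here is a bare-bones Python sketch of that record-and-replay idea; read_joint_angles() and command_joint_angles() are hypothetical hardware hooks, not a real robot API.

```python
# Record-and-replay sketch of kinesthetic teaching: while the trainer moves
# the compliant arm, we log timestamped joint angles; later we step the robot
# back through the same trajectory at the same pace.
# read_joint_angles() and command_joint_angles() are hypothetical hooks.
import time
from typing import Callable, List, Sequence, Tuple

Trajectory = List[Tuple[float, Sequence[float]]]  # (seconds since start, joint angles)

def record(read_joint_angles: Callable[[], Sequence[float]],
           duration_s: float, rate_hz: float = 50.0) -> Trajectory:
    """Sample the robot's joints while a human guides them through a motion."""
    traj: Trajectory = []
    start = time.time()
    while (elapsed := time.time() - start) < duration_s:
        traj.append((elapsed, read_joint_angles()))
        time.sleep(1.0 / rate_hz)
    return traj

def replay(traj: Trajectory,
           command_joint_angles: Callable[[Sequence[float]], None]) -> None:
    """Drive the robot back through the recorded trajectory."""
    start = time.time()
    for t, angles in traj:
        time.sleep(max(0.0, t - (time.time() - start)))  # wait for the original timestamp
        command_joint_angles(angles)
```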

4. Teamwork is easier between humans and humanoids.
It is doubtful that robot armies will operate completely autonomously in the near future. Human-robot teams will likely be the norm, as they are today. Therefore, it’s important to make sure that human and robot allies can work together without stepping on each other’s toes. And that means they’ve got to have good communication.

Human combat teams communicate and cooperate using language and gestures, and by paying attention to each other’s facial expressions and emotions. Robot warriors that recognize human body language will be able to make fast decisions in loud, hazardous environments. Perhaps even more important, a human soldier should be able to understand what a robot is thinking naturally, by reading its body language instead of looking up an error code in an instruction manual. Using the highly familiar human form-factor creates a natural communication channel that allows humanoids to cooperate with humans in chaotic environments where split-second decisions are the norm.

5. The locals could potentially interact with humanoid robots.
War is becoming less about conventional fighting on a mass scale and more about cultural awareness. Last month, President Obama unveiled plans to send hundreds of “social scientists” along with soldiers to Iraq, to counsel the military on local customs. Relative to the faceless robots currently in use, a humanoid robot provides the opportunity for some kind of natural human interaction with non-combatants. Instead of an impersonal unmanned ground vehicle crashing through walls or an unmanned aerial vehicle dropping bombs from afar, humanoid robots (armed or unarmed) could patrol areas wearing local garb, speaking the local language, and obeying local customs. How P.C.—or just freaky—is that?

On the other hand, humanoid robots can be horribly terrifying.
Mind games are a part of every battle. During World War II, aviators painted snarling teeth on the noses of their fighter planes. Nowadays (and back then), bombs have funny messages written on them, like “Boom shacka lacka,” and “You want fries with that?”

Now imagine the enemy reaction on Robot D-Day, when thousands of super-powered humanoid robots march out of the crashing surf, bullets plinking harmlessly from their razor-sharp gilded breast-plates as death metal blares from their metal mouth speaker grilles.

Terrified yet? Well calm down, sissy; humanoid robots aren’t on the battlefield, yet. But they might be soon, thanks to their natural ability to communicate and cooperate with humans, the ease with which they can operate in our environments and use our tools, and the terrible fear that blossoms in the heart of man upon laying eyes on the great and horrifying visage of the humanoid robot war machine.

Machines Behaving Deadly: A week exploring the sometimes difficult relationship between man and technology. Guest writer Daniel H. Wilson earned a PhD in Robotics from Carnegie Mellon University. He is the author of How to Survive a Robot Uprising and its sequel How To Build a Robot Army. To learn more about him, visit www.danielhwilson.com.

Giz Explains: How Electrocution Really Kills You

Humans are fragile. Our bodies are easily mutilated by our own creations: Crushed, mulched, zipped. But physical force is weak and inefficient compared to good old electrocution, which, according to MythBusters’ Adam Savage, doesn’t kill you the way you think it does.

If you learned about how electrocution kills you from cartoons or Ernest P. Worrell—you get fried as your body flashes like fireworks and everybody can see your bones—well, you got learned wrong. Electricity doesn’t actually fry you—frying requires way more juice than it takes to kill you, which is a frighteningly minuscule amount.

But before we get to the scary part, let’s get through the technical part, so we’re all on the same page of scariness. You’ve got a few major units when it comes to electricity: Volts measure voltage, amperes (amps) describe the current, watts measure power and ohms refer to resistance. A pretty good analogy from HowStuffWorks relates the basic differences between them, plumbing style: Voltage is like water pressure, current (amps) is like the flow rate, and resistance (ohms) is like the size of the pipe. Increasing the voltage results in a greater current (more amps), assuming a constant resistance, since increasing the pressure logically increases flow [Update: Clarified this sentence]. Power (wattage) is simply the voltage multiplied by the current (amps). One amp is equal to about 6.242 × 10^18 electrons per second moving through a point. And a single watt is equivalent to one joule of energy per second, but that doesn’t matter so much for our purposes.
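Put as formulas rather than plumbing, that’s Ohm’s law, I = V / R, and the power relation P = V × I. Here is a quick Python sanity check of those relationships, using arbitrary example numbers:

```python
# The plumbing analogy in numbers: for a fixed resistance, doubling the
# voltage ("pressure") doubles the current ("flow"), and power is volts
# times amps. All values here are arbitrary examples.

def current_amps(volts: float, resistance_ohms: float) -> float:
    return volts / resistance_ohms   # Ohm's law: I = V / R

def power_watts(volts: float, resistance_ohms: float) -> float:
    return volts * current_amps(volts, resistance_ohms)   # P = V * I

for v in (10.0, 20.0):   # double the voltage...
    print(f"{v} V across 1000 ohms -> "
          f"{current_amps(v, 1000.0):.3f} A, {power_watts(v, 1000.0):.2f} W")
# -> 10.0 V gives 0.010 A and 0.10 W; 20.0 V gives 0.020 A and 0.40 W,
#    so the current doubles along with the voltage.
```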

Alright, now let’s get real. And who’s more real, and has had more opportunity to get electrocuted, than Adam Savage from MythBusters? So we called and asked him just how much electricity you need to kill a human. His reply? “I’m about to freak you out.”

Seven milliamps. For three seconds. That’s all it takes. Electricity kills you by interrupting your heart rhythm. If 7 milliamps reaches your heart continuously for three seconds, “your heart goes arrhythmic,” he explained. Then everything else starts shutting down. “You could quite easily kill someone with a 9-volt or AAA battery directly to the heart.”

The reason electricity isn’t able to murder millions of people a day with ultra-tiny shocks is that our bodies have built-in resistance against electricity, so it doesn’t shoot straight to our heart. The skin’s resistance is about 5,000 to 15,000 ohms. Adam said that “it’s super difficult to quantify” precisely how much juice you need to break through, since there are all kinds of variables in play, like the clothes you’re wearing. Not to mention, “how do you quantify that someone’s actually died?”
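Plugging Adam’s numbers into Ohm’s law shows why intact skin matters so much. Here is a back-of-the-envelope Python illustration using the 5,000 to 15,000 ohm skin-resistance range and the roughly 7 milliamp figure from above; it’s a rough sketch, not a safety calculation, since real shocks depend on contact area, moisture, current path and more.

```python
# Back-of-the-envelope check: current = volts / skin resistance, compared to
# the roughly 7 mA figure cited above. Illustration only, not a safety model.

DANGER_MA = 7.0
SKIN_OHMS = (5_000.0, 15_000.0)   # dry-skin resistance range quoted above

for volts in (9.0, 120.0, 240.0):          # battery, US outlet, European outlet
    for ohms in SKIN_OHMS:
        milliamps = volts / ohms * 1000.0
        verdict = "above" if milliamps >= DANGER_MA else "below"
        print(f"{volts:6.0f} V across {ohms:6.0f} ohms -> "
              f"{milliamps:5.1f} mA ({verdict} the 7 mA mark)")
# A 9 V battery across dry skin stays well below 7 mA, which is why it only
# becomes deadly, as Adam notes, when applied directly to the heart.
```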

But if it’s any consolation, Adam says that the kind of static shock that actually stings your skin is about 20,000 volts—high voltage, just a really tiny amperage.

So the trick is getting the proper amount of power to cut through our skin and clothes and rubber-soled shoes to zap our heart. There’s a reliable way to do that: Lightning. With lightning, Adam said, “all bets are off.” A lightning bolt can hit over a billion volts. Air’s resistance, he explained, is about 10,000 volts per centimeter—so for electricity to move just 10cm through air requires 100,000 volts.

Machines could generate lightning artificially—this dude Charles Steinmetz built a lightning machine back in 1916 that generated over 10,000 amps and 100,000 volts. The reason some people survive lightning strikes is that they luck out with the path the current takes through their body: they might get scorched if it travels along the outside of the body, say if they’re wet, but if the heart goes untouched, they can come out alive.

That’s obviously wildly impractical—the sophistication and energy required for lightning-shooting machines would be more easily put toward acquiring nukes, a la every apocalyptic movie ever. Besides, there are far simpler machines that do a similar job when it comes to electrocuting people. Simple skin-penetrating Tasers already kill people occasionally. However, according to Adam, Tasers are designed with the 3-second-kill problem in mind—most pulse at much shorter intervals to avoid being fatal.

Still, we likely have little to fear from extinction by electrocution. With the exception of the admittedly clumsy electric chair, no one’s ever systematically killed people with electricity. Machines, if they were to develop a murderous intent, would most likely use all of the other ways humans have designed to kill each other.

Huge thanks to Adam Savage from MythBusters for helping us—or the machines?—out!

Still something you wanna know? Send any questions about why I’ll never recover from Terminator: Salvation, electrifying puns or the pancake apocalypse to tips@gizmodo.com, with “Giz Explains” in the subject line.

Lawnmowers, Killer Bees and Fire: Five Tales of Mowing Madness

Who knew a machine with razor-sharp blades spinning at 200 RPM that you’re supposed to sit on top of might cause injury or death? Here are gruesome tales of mowing mishaps—from this past month alone!

Lawnmowers, with their spinning, ground-level blades, are most dangerous to small animals, young children, and feet. Recently, one Mowing Menace trapped a 4-year-old girl’s foot under its blades of doom, causing enough damage to require amputation. In fact, she was one of the 77,000 people who go to the hospital every year as victims of mowing-related violence.

Clearly, in the epic battle of Man vs. Machine, mowers don’t intend to play fair.

A mower in Oregon flipped its rider down an embankment and into a ditch before rolling itself onto some blackberry bushes above the trapped man. The lawn mower’s heat actually set the blackberry bushes on fire, and when they gave way, the mower itself tumbled 15-20 feet to rest on top of its owner, trapping him in the ditch. Though the victim wasn’t severely burned, the crushing weight of his mower caused enough unspecified injuries to necessitate a helicopter airlift to a nearby hospital.

Another one, at a park in Indiana, was being peacefully driven around the perimeter of a lake when it snagged a wire, flipped and slowly dragged its helpless rider into the water like a conniving, hungry alligator. Though the tractor technically did not devour 59-year-old John McComas, it did pin him in the shallows of the lake, rendering him unable to move. Thankfully, he managed to keep his head above water, shouted for help, and was rescued soon enough to escape with only mild injuries.

A lawnmower in Florida apparently took offense to its owner doing a little repair work on it, and so shot a spark onto the owner’s nearby boat. The spark ignited gas fumes and the boat promptly burst into flames, sending up huge plumes of smoke and the risk of serious fire in the “tinderbox conditions” of that stretch of the Atlantic coastline. The town’s fire commissioner, Fred Link, explained with laughable naivete, “It was accidentally started.” Sure, Fred, that’s what they want you to think.

Lawnmowers don’t just act alone, though. They are capable of teaming up with other terrors to dish out even more devastation. In Texas, the mere sound of a lawn mower was enough to enrage a nearby swarm of killer Africanized bees. That’s right, Africanized bees, the ones the hysterical news media alerted you to back in 1999. The killer bees, responding to the mower’s calls, attacked nearby residents, stinging two bystanders and two firemen. None were seriously injured, and another fireman said he “barely managed to avoid being stung,” a quote he probably wishes had not appeared in his local paper. The bees were exterminated, but the mower lived to fight another day.

But just like in Battlestar Galactica, some of these appliances have decided to side with humans—defending them instead of terrorizing them. In Croatia, an innocent man was mowing his lawn when suddenly his mower detonated a live hand grenade, sacrificing itself in the process. The man escaped uninjured, but was left wondering what a live grenade was doing in his garden.

Fear the Pant Zipper

My childhood was active enough. I was as fearless as any toddler. I frolicked in the mud and climbed on now-banned metal jungle gyms. I was rambunctious. Then I met the pajamas with the feet.

At first I remember loving the idea. Those PJs were unassuming, but warm. Comforting. The itchy brown fabric was completely tolerable because it offered me spacesuit-like cocoon protection against those cold New England winters.

The gloriously padded feet sported rubber bottoms that provided me with just the right amount of grip for taking hairpin turns at the bottom of the foyer stairs and into the family room. Indeed, where socks would have sent me tumbling into the family’s ancient grandfather clock near the front door, these pajamas caught firmly, and allowed me to perform running maneuvers around the house that were the envy of my less fortunate and less pajama-fied best friends. I trusted that clothing absolutely and completely. In hindsight, such naivety was probably my downfall.

You see, I was young. The contraption on the front of these pajamas was alien to me. The zipper. I didn’t “get” it or how it managed to take two separate pieces of fabric and join them together. So, my mom had to help me get dressed.

At first, the arrangement was uneventful. Mom would hold up the pajamas with the feet like a NASA technician, and I would jump into them, eager to get it all over with so I could bolt down the stairs and orbit around the house at close to 10 mph. But before that, I would have to turn 180 degrees so mom could lock me in by pulling up the zipper. This is how things went for the first few months of winter. Jump in, turn, zip up, run away. Safe and sound.

But then one day, as I vaulted into those welcoming PJs with the feet, something was different. Perhaps mom had a bad day at the office. Or maybe it was the fact that it gets dark at 4 p.m. in Massachusetts during the winter, and she was depressed. I have no idea. Whatever it was, it had distracted mom to the point where she wasn’t taking into account all the variables in the task she was about to perform.

Son in PJs yet? Check.
Turn to face me? Check.
Grasp zipper? Check.
Execute zipper pull? Go for launch.

Missing in that checklist, of course, was any mention of my penis or its location at the time.

Now, before we get to the part that sends roughly 60% of Gizmodo’s audience into a pathetic fetal position, a brief aside. Many of you might think calling a mere zipper a “machine” is as big a stretch as any, but to that I say you’ve definitely never experienced what I, Ben Stiller’s character in There’s Something About Mary, or millions of other unfortunate men have experienced throughout history since the invention of the blasted zipper. Or you’re lying about having a penis.

Whatever your story is, I deliver this aside about “the machines” because, believe me, I’d take a run-in with a T-600—flayed skin and personality disorder and all—over another run-in with that zipper any day of the week. Those teeth. That unforgiving gnashing sound as the mechanism slowly grated its way northward toward my junk. The muffled, organic yank the zipper made as it bit into my flesh. The Pinch. The—

Oh, I’m terribly sorry. I seem to have fallen out of my chair.

Anyway, to this day, some 20+ years later, I still subconsciously think of this story when I put on a pair of jeans, or do up a pair of slacks. Button flies are a godsend, in my opinion, and I was forever a changed man after that day. A little more tentative; a little more cautious. Especially with you know what.

Everything works fine now, I assure you, but those footed pajamas went into the garbage that day so fast the plastic bag melted against the can. My mood at the time was the antithesis of that final scene from Terminator 2. Whereas John Connor wept, my relief at seeing that damned invention heading into oblivion was palpable.

The big bag of ice felt pretty damn good too.

Machines Behaving Deadly: A week exploring the sometimes difficult relationship between man and technology.

35 Robots That Are Even More Deadly Than Normal Robots

For this week’s Photoshop Contest, I asked you to design some super-deadly robots for our Machines Behaving Deadly week. Here’s hoping none of these terrorbots ever get made.

First Place — Adam Page
Second Place — Mark Fletcher
Third Place — Joey Del Real

Terminator Salvation Review: Better than T3 (But Not By Much)

In the future, if you’re walking around and encounter a Terminator, do not run.

Shout its model name at the top of your lungs: “Teee EIGHT HUNDRED!!!” or “MOTO-TERMINATOR!!” Then run. That way the kiddies back in 2009 can Google for the proper toy.

The Terminator franchise has always been inherently ridiculous. We’re talking about killer robots that travel through time—without guns or clothes, of course—to not only destroy John Connor, leader of the Resistance, but take out his mom. (Destroying his mom’s mom, mom’s mom’s mom or anything along these genealogical lines would have been easier, but a bit too far-fetched.)

And that’s exactly my point. Our favorite, ridiculous franchises regularly walk precariously across that deep valley of ludicrousness, but where Star Trek took its chances on the tightrope, Terminator Salvation double-flips over the chasm on a motorcycle.

We’re talking 20-story robots that can creep up behind you without so much as a peep and supporting characters who nonchalantly demonstrate super heroic bodily feats without anyone ever asking “WTF?”

There are two storylines going on here. One is that of John Connor, aka Batman. Seriously, he sounds just like Batman. Actually, he sounds like Batman for only the first few scenes of the film. Later, in scenes that, according to storyboards I saw during my set visit, were added after renegotiating with Bale for a bigger part, he sounds, you know, somewhat well-adjusted. It’s too bad that much of Bale’s own subplot, a yarn in which Connor painstakingly develops a frequency that deactivates Skynet killbots, ends in an unfulfilling resolution.

The other story is of Marcus. NOW THIS PART WILL BE A SPOILER IF YOU HAVEN’T WATCHED THE COMMERCIALS. BUT BECAUSE I ASSUME YOU WATCH COMMERCIALS, I’M NOT GOING TO FEEL TOO BAD FOR SAYING IT.

Marcus is a Terminator. Oh my God!

The problem with the movie is that too much of the story is of Marcus. The other problem of the movie is that too much of the story is of Marcus hopping from unexciting chase scene to unexciting chase scene. It’s a two-hour video game linking a series of sequences that have little reason for existence other than McG’s action-packed directing style.

And not action-packed like Charlie’s Angels. It’s a lot more like the far less charming, far less self-aware Charlie’s Angels: Full Throttle.

Sure, the sacred tome of Terminator 2 could also be regarded as a montage of chase scenes, but each chase scene forced you to hold your breath. In Terminator Salvation, a giant, Transformers-esque robot chases after a tow truck full of people. Then it deploys motorcycle Terminators. There are several cuts. Then the tow truck spins in such a way that its winch strikes one of the Terminators like a wrecking ball. On a bridge. There is also jet involvement.

Remember in T2, when the good old semi chased that kid on a motorbike? Man that was great.

The thing is, only…two-thirds of Terminator Salvation is this depressing. When the Marcus and Connor storylines finally converge in a mad dash to blow Skynet away, the film homes in on what made the original movie and T2 great: the good old-fashioned Terminators, not new merchandising opportunities or high-octane thrill rides.

In this last act, we see Connor properly grown up, exploiting his full potential as a soldier/hacker who strikes the ideal equilibrium of the previously mentioned ludicrousness. We see that Marcus, while not a character we particularly care about, has a particularly interesting and justified existence. (Incidentally, Sam Worthington doesn’t play the role poorly. It’s the script and editing that let him down.) And there’s a cameo that’s probably worth the price of the ticket alone. Scratch that, it is worth the price of the ticket alone.

Somewhere, deep inside, Terminator Salvation may be a good film. But it’s so unabashedly Hollywood, such a construct of too many artistic styles, storylines, chase scenes, contracts and heavy-handed metaphors—not to mention terrible script writing—that it may have simply forgotten how to be good. Quite simply, it’s just too busy being a movie to be entertaining.

T3 was a lousy film, but at least its fatalistic ending stuck with you. At the end of Terminator Salvation, I left the theater gagging on the world’s most expensive Hallmark card, questioning why I was supposed to give a damn in the first place.

For more on Terminator Salvation, read about our set visit.

Asimov’s Laws of Robotics Are Total BS

When people talk about robots and ethics, they always seem to bring up Isaac Asimov’s “Three Laws of Robotics.” But there are three major problems with these laws and their use in our real world.

The Laws
Asimov’s laws initially consisted of three guidelines for machines:
• Law One – “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
• Law Two – “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.”
• Law Three – “A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.”
• Asimov later added the “Zeroth Law,” above all the others – “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

The Debunk
The first problem is that the laws are fiction! They are a plot device that Asimov made up to help drive his stories. Even more, his tales almost always revolved around robots following these great-sounding, logical ethical codes and still going astray, and the unintended consequences that result. An advertisement for the 2004 movie adaptation of Asimov’s famous book I, Robot (starring the Fresh Prince and Tom Brady’s baby mama) put it best: “Rules were made to be broken.”

For example, in one of Asimov’s stories, robots are made to follow the laws, but they are given a particular definition of “human.” Prefiguring what now goes on in real-world ethnic cleansing campaigns, the robots only recognize people of a certain group as “human.” They follow the laws, but still carry out genocide.

The second problem is that no technology can yet replicate Asimov’s laws inside a machine. As Rodney Brooks of the company iRobot—named after the Asimov book, they are the people who brought you the Packbot military robot and the Roomba robot vacuum cleaner—puts it, “People ask me about whether our robots follow Asimov’s laws. There is a simple reason [they don’t]: I can’t build Asimov’s laws in them.”

Roboticist Daniel Wilson [and “Machines Behaving Deadly” contributor here at Gizmodo] was a bit more florid. “Asimov’s rules are neat, but they are also bullshit. For example, they are in English. How the heck do you program that?”
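To see Wilson’s point concretely, here is what a naive, hypothetical attempt to “program” the First Law might look like in Python. The if-statement is trivial; the catch is that every predicate it depends on, recognizing a human, predicting harm, judging inaction, stands in for an unsolved perception and prediction problem.

```python
# A deliberately naive sketch of "programming" Asimov's First Law.
# The control flow is easy; every function below is a placeholder for an
# unsolved perception, prediction or judgment problem.

def is_human(entity) -> bool:
    raise NotImplementedError("reliably recognizing 'a human being' is unsolved")

def would_cause_harm(action, entity) -> bool:
    raise NotImplementedError("predicting 'harm' requires modeling the world")

def allows_harm_through_inaction(action, entity) -> bool:
    raise NotImplementedError("what counts as culpable inaction?")

def first_law_permits(action, nearby_entities) -> bool:
    """Return True only if the action harms no human, directly or by inaction."""
    for entity in nearby_entities:
        if is_human(entity) and (would_cause_harm(action, entity)
                                 or allows_harm_through_inaction(action, entity)):
            return False
    return True
```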

The most important reason Asimov’s laws aren’t being applied yet is how robots are actually being used in our real world. You don’t arm a Reaper drone with a Hellfire missile or put a machine gun on a MAARS (Modular Advanced Armed Robotic System) in order to avoid causing humans harm. Causing harm is the very point!

The same goes for building a robot that takes orders from any human. Do I really want Osama Bin Laden to be able to order my robot about? And finally, the fact that robots can be sent out on dangerous missions to be “killed” is often the very rationale for using them. Giving them a sense of “existence” and a survival instinct would work against that rationale, and it would open up potential scenarios from another science fiction series, the Terminator movies. The point here is that much of the funding for robotics research comes from the military, which is paying for robots that follow the very opposite of Asimov’s laws. It explicitly wants robots that can kill, won’t take orders from just any human, and don’t care about their own existence.

A Question of Ethics
The bigger issue, though, when it comes to robots and ethics is not whether we can use something like Asimov’s laws to make machines that are moral (which may be an inherent contradiction, given that morality wraps together both intent and action, not mere programming).

Rather, we need to start wrestling with the ethics of the people behind the machines. Where is the code of ethics in the robotics field for what gets built and what doesn’t? What would a young roboticist turn to? Who gets to use these sophisticated systems and who doesn’t? Is a Predator drone a technology that should be limited to the military? Well, too late: the Department of Homeland Security is already flying six Predator drones on border security. Likewise, many local police departments are exploring the purchase of their own drones to park over high-crime neighborhoods. I may think that makes sense, right up until the drone is watching my neighborhood. And what about me? Is it within my Second Amendment rights to have a robot that bears arms?

These all sound a bit like the sort of questions that would only be posed at science fiction conventions. But that is my point. When we talk about robots now, we are no longer talking about “mere science fiction,” as one Pentagon analyst described these technologies. They are very much a part of our real world.

Machines Behaving Deadly: A week exploring the sometimes difficult relationship between man and technology. Guest writer PW Singer is the author of Wired for War: The Robotics Revolution and Conflict in the 21st Century.