Fuelling the Modern Man

Robotics experts admit: “It’s too late to stop killer robots.”

Don’t start planning for The Matrix just yet — scientists think they have a solution.

Josh Butler
Published 10th November 2016

The topic of killer robots is usually reserved for the domain of science fiction. Whether it’s James Cameron’s The Terminator or the androids Rick Deckard hunts in Blade Runner, we don’t really think of autonomous killing machines as part of our everyday lives.


Well, hold that thought.

After increasing calls for a preemptive ban on the research and manufacture of fully autonomous killing robots, scientists at the University at Buffalo have boldly claimed that a ban wouldn’t solve the problem at all – because the robots already exist.

They claim the required technology has already been developed – pointing to the Pentagon’s $18 million budget for research into autonomous technology – and that the more pressing concern is now how the robots are programmed.

Instead of banning autonomous robots and hoping for the best, the team argues in its latest published paper that the real issue is not the robots themselves, but the ethics programmed into them.

A soldier looks at a MAARS (Modular Advanced Armed Robotic System) robot at the Marine West Military Expo at Camp Pendleton, California, February 1, 2012. The expo is one of the year’s largest displays of new military equipment. REUTERS/Mike Blake

“Previously, humans have had the agency on the battlefield to pull the trigger, but what happens when this agency is given to a robot and because of its complexity we can’t even trace why particular decisions are made in particular situations?”

“Consider how both software and ethical systems operate on certain rules,” it reads.

“The distinctions between combatant and non-combatant, human and machine, life and death are not drawn by a robot.”


Essentially, instead of focusing on what is technologically possible, the research team argues that the key to ensuring autonomous killing robots do not plunge us into The Matrix is a strict code of ethics by which they are governed.

It would be the same sort of code programmed into autonomous cars so they know when to stop, turn, yield or proceed.
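As a toy illustration (my own sketch, not taken from the researchers’ paper), the kind of explicit, auditable rules the article has in mind might look something like this — every decision traceable to a named rule, rather than buried in an opaque system:

```python
def decide_action(signal: str, obstacle_ahead: bool) -> str:
    """Return a driving action from explicit, auditable rules.

    A hypothetical rules-based controller: each outcome can be
    traced back to the specific rule that produced it.
    """
    if obstacle_ahead:
        return "stop"      # safety rule always takes priority
    if signal == "red":
        return "stop"
    if signal == "yellow":
        return "yield"
    if signal == "green":
        return "proceed"
    return "stop"          # unknown input: default to the safest action

print(decide_action("green", obstacle_ahead=False))  # proceed
print(decide_action("green", obstacle_ahead=True))   # stop
```

The point of such a design is traceability: when the machine acts, a human can point to exactly which rule fired and why — the property the researchers say is lost when lethal decisions are delegated to systems too complex to audit.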


These fears seem to echo the eminent scientist Stephen Hawking’s warning that humanity’s gravest threat may be posed by machines of its own creation.

“Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.”

“And in the future, AI could develop a will of its own – a will that is in conflict with ours.”