Robotics technology is evolving so quickly that new robots are being put to use before their potential consequences can be understood. Automated stock traders, robotic surgeons and machines like Boston Dynamics' BigDog have been welcomed into society, while the ethics of robots doing human work have yet to be fleshed out. Should anything go wrong with these robots, there could be serious consequences for which no one has planned. For some robot technologies in particular, the stakes may be too high to risk.
Recognizing the gravity of the situation, the global Stop Killer Robots campaign has a plan. The campaign, led by interest groups, academic scholars and Nobel Peace Prize laureates, is set to launch on April 23, 2013. It will be brought before the House of Commons in the U.K. in hopes of securing a global ban on fully autonomous weapons in combat.
Some of the groups behind the campaign have had similar success in the past with banning land mines. Five years after its founding in 1992, the International Campaign to Ban Landmines saw its greatest achievement when, in 1997, an international treaty banned land mines from production and use. Stop Killer Robots is hoping for a similar outcome for autonomous weapons.
The worries driving the campaign are not entirely new. Most robotics technology, as it tries to take the place of humans, raises a plethora of ethical and moral concerns that have yet to be addressed on a global scale. Who is responsible for the actions of a robot? If a robot makes a mistake, who is to blame? We certainly cannot blame the robot itself. Blaming the company that manufactured the robot, or in some cases the government that deployed it, seems more appropriate, but is still tricky. For moral crimes committed by a robot, the course of action remains unclear.
With current technology, these concerns are warranted. Autonomous robots are far from perfect in their detection abilities. Even with the most advanced equipment, robots cannot yet interpret subtle movements and have no sense of intent. Dr. Noel Sharkey, a professor and robotics expert, claims that robots can hardly "distinguish between a human being and a car," let alone tell the difference between "a child holding up a sweet and an adult pointing a gun." Clearly, autonomous robots are still capable of making mistakes for which we must be prepared.
Stop Killer Robots isn't the first group to voice concern over autonomous robots in combat. In November 2012, Human Rights Watch (HRW) published a fifty-page report detailing its findings on the legality and challenges of autonomous robots, as well as its recommendations for the future. Citing conflicts with international humanitarian law, the unaccountability of unmanned robots, and threats to civilians, Human Rights Watch recommended that all states firmly "prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument." It also recommended that roboticists establish and enforce a professional code governing the "research and development of autonomous weapons […] in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered."
It is this last recommendation by HRW that matters most. Technology in general isn't to be feared. The use of robotics in combat can benefit the safety of soldiers and civilians when applied carefully and appropriately. When proper research and consideration are absent from the start, however, the high stakes of autonomous robots become very concerning. Whether banning all autonomous robots from combat worldwide is a realistic goal for the Stop Killer Robots campaign remains to be seen. What is obvious is that governments and manufacturers need to work together on more careful research and analysis before dangerous robots hit the field.