The Public “Conscience” and Killer Robots
At my side event at the UN CCW Experts Meeting on Autonomous Weapons last month, I presented public opinion data showing strong US opposition to deploying such weapons. Since the panel focused specifically on “morality and ethics,” and since my remarks concerned measuring the public “conscience” per se rather than public opinion in general, I re-examined my coding of open-ended survey comments to determine whether popular arguments for or against autonomous weapons systems (AWS) rested on humanitarian principles or on interest-based reasoning. At the Monkey Cage this week, I describe the results:
While both camps prioritize “saving lives,” humanitarian thinking per se is largely absent from explanations of support for autonomous weapons. Rather, proponents of such weapons consistently invoke national self-interest: the need to protect “our troops” from harm or “our national security” from robot arms races – arguments also made by analysts and lawyers advocating such weapons. Only a small proportion of AWS proponents surveyed qualify their support with concern for foreign civilians. And there is almost no sense among the U.S. public that autonomous weapons might actually be a viable means of reducing war crimes against foreign civilians – though this is a moral argument made by some AWS proponents and, according to Zack Beauchamp, perhaps the most important question in the debate. Most arguments in favor of AWS by American voters are interest-based, resting on the hope of saving American lives (though, notably, active-duty personnel in the survey did not share this thinking).
Read the whole thing here.