Who’s to Blame (Part 5): A Deeper Look at Predicting the Actions of Autonomous Weapons
Image source: Dilbert comic strip by Scott Adams, March 6, 2011
An autonomous weapon system (AWS) is designed and manufactured in a collaborative project between American and Indian defense contractors. It is sold to numerous countries around the world. This model of AWS is successfully deployed in conflicts in Latin America, the Caucasus, and Polynesia without violating the laws of war. An American Lt. General then orders that 50 of these units be deployed during a conflict in the Persian Gulf for use in ongoing urban combat in several cities. One of those units had previously seen action in urban combat in the Caucasus and in desert combat during the same Persian Gulf conflict, all without incident. A Major decides to deploy that AWS unit to assist a platoon engaged in block-to-block urban combat in Sana’a. Once the AWS unit is on the ground, a Lieutenant is responsible for telling the AWS where to go. The Lt. General, the Major, and the Lieutenant all had previous experience using this model of AWS and had given similar orders to such units in prior combat situations without incident.
The Lieutenant has lost several men to enemy snipers over the past several weeks. He orders the AWS to accompany one of the squads under his command and preemptively strike any enemy sniper nests it detects; once again, this is an order he had given to other AWS units without incident. This time, the AWS unit misidentifies a nearby civilian house as containing a sniper nest because houses with similar features had frequently been used as sniper nests in the Caucasus conflict. It launches an RPG at the house. There are no snipers inside, but there are 10 civilians, all of whom are killed by the RPG. Human soldiers who had been fighting in the area would have known that this particular house was unlikely to contain a sniper nest: the glare from the sun off a nearby glass building reduces visibility on that side of the street at the times of day when American soldiers typically patrol the area. The human soldiers knew this well from prior combat in the area, but it was a variable the AWS had not been programmed to take into consideration.
In my most recent post for FLI on autonomous weapons, I noted that it would be difficult for humans to predict the actions of autonomous weapon systems (AWSs) programmed with machine learning capabilities. If the military commanders responsible for deploying AWSs were unable to reliably foresee how an AWS would operate on the battlefield, it would be difficult to hold those commanders responsible when an AWS violated the law of armed conflict (LOAC). And in the absence of command responsibility, it is not clear whether any human could be held responsible under the existing LOAC framework.
A side comment from a lawyer on Reddit made me realize that my reference to “foreseeability” requires a bit more explanation. “Foreseeability” is one of those terms that makes lawyers’ ears perk up when they hear it because it’s a concept that every American law student encounters when learning the principles of negligence in their first-year class on Tort Law.