Who’s to Blame (Part 4): Who’s to Blame if an Autonomous Weapon Breaks the Law?



The previous entry in this series examined why it would be very difficult to ensure that autonomous weapon systems (AWSs) consistently comply with the laws of war.  So what would happen if an attack by an AWS resulted in the needless death of civilians or otherwise constituted a violation of the laws of war?  Who would be held legally responsible?

In that regard, AWSs’ ability to operate free of human direction, monitoring, and control would raise legal concerns not shared by drones and other earlier generations of military technology.  It is not clear who, if anyone, could be held accountable when an AWS attack results in illegal harm to civilians or their property.  This “accountability gap” was the focus of a 2015 Human Rights Watch report, which concluded that there was no plausible way to close the gap and therefore called for a complete ban on fully autonomous weapons.

Although some commentators have taken issue with this prescription, the diagnosis seems correct: it simply is not clear who could be held responsible if an AWS commits an illegal act.  The accountability gap exists because AWSs would incorporate AI technology that collects information and determines courses of action based on the conditions in which the weapons operate.  It is unlikely that even the most careful human programmers could predict the nearly infinite range of on-the-ground circumstances that an AWS might face.  It would therefore be difficult for an AWS’s designers, to say nothing of its military operators, to foresee how the weapon would react in the fluid, fast-changing world of combat operations.  That inability to foresee an AWS’s actions would complicate the assignment of legal responsibility.
