On subjectivity, AI systems, and legal decision-making

Source: Dilbert


The latest entry in my series of posts on autonomous weapon systems (AWSs) suggested that it would be exceedingly difficult to ensure that AWSs complied with the laws of war.  A key reason for this difficulty is that the laws of war depend heavily on subjective determinations.  One might easily expand this point and argue that AI systems cannot–or should not–make any decisions that require interpreting or applying law because such legal determinations are inherently subjective.

Ever the former judicial clerk, I can’t resist pausing for a moment to define my terms.  “Subjective” can have subtly different meanings depending on the context.  Here, I’m using the term to mean something that is a matter of opinion rather than a matter of fact.  In law, I would say that identifying what words are used in the Second Amendment is an objective matter; discerning what those words mean is a subjective matter.  All nine justices who decided DC v. Heller (and indeed, anyone with access to an accurate copy of the Bill of Rights) agreed that the Second Amendment reads: “A well regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.”  They disagreed quite sharply about what those words mean and how they relate to each other.  (Legal experts even disagree on what the commas in the Second Amendment mean).

Given that definition of “subjective,” here are some observations.

» Read more

Who’s to Blame (Part 3): Could Autonomous Weapon Systems Navigate the Law of Armed Conflict?

“Robots won’t commit war crimes. We just have to program them to follow the laws of war.” This is a rather common response to the concerns surrounding autonomous weapons, and it has even been advanced as a reason that robot soldiers might be less prone to war crimes than human soldiers. But designing such autonomous weapon systems (AWSs) is far easier said than done. True, if we could design and program AWSs that always obeyed the international law of armed conflict (LOAC), then the issues raised in the previous segment of this series — which suggested the need for human direction, monitoring, and control of AWSs — would be completely unfounded. But even if such programming prowess is possible, it seems unlikely to be achieved anytime soon. Instead, we need to be prepared for powerful AWSs that may not recognize where the line blurs between what is legal and reasonable during combat and what is not.

While the basic LOAC principles seem straightforward at first glance, their application in any given military situation depends heavily on the specific circumstances in which combat takes place. And the difference between legal and illegal acts can be blurry and subjective. It therefore would be difficult to reduce the laws and principles of armed conflict into a definite, programmable form that could be encoded into an AWS and from which the AWS could consistently make battlefield decisions that comply with the laws of war.
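
To make that difficulty concrete, consider a deliberately naive sketch (in Python, with invented names, numbers, and thresholds, not drawn from any real system) of what “encoding” the proportionality principle might look like. The code runs, but it merely relocates the subjectivity: someone still has to decide how to quantify civilian harm and military advantage, and what counts as “excessive.”

```python
# Hypothetical illustration only: a naive attempt to "encode" proportionality.
# Every name, number, and threshold here is invented; the point is that the
# decisive inputs are judgments, not measurable facts.

from dataclasses import dataclass

@dataclass
class EngagementAssessment:
    expected_civilian_harm: float          # who supplies this number, and how?
    anticipated_military_advantage: float  # inherently a matter of opinion

def strike_is_proportionate(assessment: EngagementAssessment,
                            excessiveness_threshold: float = 1.0) -> bool:
    """Return True if expected civilian harm is not 'excessive' relative to
    the anticipated military advantage.

    The arithmetic looks objective, but the quantification of each input and
    the choice of threshold smuggle in exactly the subjective determinations
    that the law leaves to human judgment.
    """
    if assessment.anticipated_military_advantage <= 0:
        return False
    ratio = (assessment.expected_civilian_harm
             / assessment.anticipated_military_advantage)
    return ratio <= excessiveness_threshold
```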

» Read more

No, NHTSA did not declare that AIs are legal drivers

Source: NDTV Car and Bike

A slew of breathless stories have been published over the past couple of days saying that the National Highway Traffic Safety Administration (NHTSA*) has declared that “Google’s driverless cars are now legally the same as a human driver,” or that “A.I. in Autonomous Cars Can Legally Count as the Driver.”  These stories typically go on to say that NHTSA’s decision marks “a major step toward ultimately winning approval for autonomous vehicles on the roads” or something along those lines.  CNN went even further, saying that NHTSA “gave its OK to the idea of a self-driving car without a steering wheel and so forth, that cannot be controlled by a human driver.”

Unfortunately, these news stories badly misstate–or at the very least overstate–what NHTSA has actually said.  First, the letter written by NHTSA’s chief counsel that served as the main source for these news stories does not say that an AI system can be a “driver” under the NHTSA rule that defines that term.  It merely assumes that an AI system can be a legal driver for purposes of interpreting whether Google’s self-driving car would comply with several NHTSA rules and regulations governing the features and components of motor vehicles.  In the legal world, that assumption is very, very different from a ruling that the Google car’s AI system actually qualifies as a legal driver.

NHTSA indicated that it would initiate its formal rulemaking process to consider whether it should update the definition of “driver” in light of the “changing circumstances” presented by self-driving cars.  But federal agency rulemaking is a long and complicated process, and the letter makes clear that dozens of rule changes would have to be made before a car like Google’s could comply with NHTSA standards.  Far from marking a significant step toward filling our roads with robocars, the letter underscores just how many legal and regulatory hurdles will have to be cleared before autonomous vehicles can freely operate on American roads.

The NHTSA letter does not say that AIs are legal drivers

The basis for the recent barrage of news stories is a letter that NHTSA’s Chief Counsel sent to the head of Google’s Self-Driving Car Project in response to a long series of questions regarding whether the Google car would, as designed, comply with NHTSA regulations that refer in some way to the “driver” or “operator” of a motor vehicle.  NHTSA is the federal agency tasked with creating and enforcing design and manufacturing standards for cars, most notably with respect to safety features.  Unsurprisingly, most of the current standards–many of which have been around for decades–operate under the assumption that a human driver located in the front-left seat of the vehicle will be steering the car, applying the brakes, turning on the headlights, and so forth.  Many NHTSA vehicle standards therefore require that the vehicle’s major control mechanisms be physically accessible from the front-left seat.

» Read more

Who’s to Blame (Part 2): What is an “autonomous” weapon?

Source: Peanuts by Charles Schulz, via @GoComics

Before turning in greater detail to the legal challenges that autonomous weapon systems (AWSs) will present, it is essential to define what “autonomous” means in the weapons context.  It is, after all, the presence of “autonomy” that will distinguish AWSs from earlier weapon technologies.

Most dictionary definitions of “autonomy” focus on the presence of free will or freedom of action.  These are affirmative definitions, stating what autonomy is.  Some dictionary definitions approach autonomy from a different angle, defining it not by the presence of freedom of action, but rather by the absence of external constraints on that freedom (e.g., “the state of existing or acting separately from others”).  This latter approach is more useful in the context of weapon systems, since the existing literature on AWSs seems to use the term “autonomous” to refer to a weapon system’s ability to operate free from human influence and involvement.

Existing AWS commentaries seem to focus on three general methods by which humans can govern an AWS’s actions.  This essay will refer to those methods as direction, monitoring, and control.  A weapon system’s “autonomy” therefore refers to the degree to which the weapon system operates free from human direction, monitoring, and/or control.

Human direction, in this context, refers to the extent to which humans specify the parameters of a weapon system’s operation, from the initial design and programming of the system all the way to battlefield orders regarding the selection of targets and the timing and method of attack.  Monitoring refers to the degree to which humans actively observe and collect information on a weapon system’s operations, whether through a live source such as a video feed or through regular reviews of data regarding those operations.  And control is the degree to which humans can intervene in real time to change what a weapon system is currently doing, such as by actively controlling the system’s physical movement and combat functions or by shutting the machine down if the system malfunctions.
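
For readers who find it helpful, the three dimensions can be pictured as a simple profile. The following sketch is purely illustrative (the field names and the 0-to-1 scale are my own invention, not drawn from the AWS literature): a system counts as more “autonomous” along whichever dimension human governance scores lower.

```python
# Illustrative sketch only: modeling "autonomy" as the absence of human
# direction, monitoring, and control. The 0.0-1.0 scale and field names are
# hypothetical, not taken from any actual weapon system or standard.

from dataclasses import dataclass

@dataclass
class HumanGovernance:
    direction: float   # 1.0 = fully human-specified parameters and orders, 0.0 = none
    monitoring: float  # 1.0 = continuous human observation (e.g., live video), 0.0 = none
    control: float     # 1.0 = full real-time override or shutdown, 0.0 = none

    def autonomy(self) -> dict[str, float]:
        """Autonomy along each dimension is the degree of freedom from it."""
        return {
            "direction": 1.0 - self.direction,
            "monitoring": 1.0 - self.monitoring,
            "control": 1.0 - self.control,
        }

# Example: a system launched under detailed human targeting orders but with
# no live feed and no kill switch is highly autonomous in monitoring and control.
loitering_system = HumanGovernance(direction=0.9, monitoring=0.2, control=0.1)
print(loitering_system.autonomy())
```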

» Read more
