Analysis of the USDOT’s Regulatory Review for Self-Driving Cars (Part 1): References to “drivers” in the federal regulations

Editor’s Note: Apologies for the unannounced gap between posts.  I have been on parental leave for the past two weeks bonding with my newborn daughter.  In lieu of the traditional cartoon, I will be spamming you today with a photo of Julia (see bottom of post).  Now, back to AI.


The U.S. Department of Transportation recently released a report “identifying potential barriers and challenges for the certification of automated vehicles” under the current Federal Motor Vehicle Safety Standards (FMVSS).  Identifying such barriers is essential to the development and deployment of autonomous vehicles because the manufacturer of a new motor vehicle must certify that it complies with the FMVSS.

The FMVSS require American cars and trucks to include numerous operational and safety features, ranging from brake pedals to warning lights to airbags.  They also specify test procedures for assessing new vehicles’ safety and their compliance with the standards.

The new USDOT report consists of two components: (1) a review of the FMVSS “to identify which standards include an implicit or explicit reference to a human driver,” which the report’s authors call a “driver reference scan”; and (2) a review that evaluates the FMVSS against “13 different automated vehicle concepts, ranging from limited levels of automation . . . to highly automated, driverless concepts with innovative vehicle designs,” termed an “automated vehicle concepts scan.”  This post will address the driver reference scan, which dovetails nicely with my previous post on automated vehicles.

As noted in that post, the FMVSS define a “driver” as “the occupant of a motor vehicle seated immediately behind the steering control system.”  It is clear both from this definition and from other regulations that “driver” refers to a human driver.  (And again, as explained in my previous post, the NHTSA’s recent letter to Google did not change this regulation or redefine “driver” under the FMVSS, media reports to the contrary notwithstanding.)  Any FMVSS reference to a “driver” thus presents a regulatory compliance challenge for makers of truly self-driving cars, since such vehicles may not have a human driver, or, in some cases, even a human occupant.

» Read more

Who’s to Blame (Part 5): A Deeper Look at Predicting the Actions of Autonomous Weapons

Source: Dilbert by Scott Adams (March 6, 2011)


An autonomous weapon system (AWS) is designed and manufactured in a collaborative project between American and Indian defense contractors. It is sold to numerous countries around the world. This model of AWS is successfully deployed in conflicts in Latin America, the Caucasus, and Polynesia without violating the laws of war. An American Lt. General then orders that 50 of these units be deployed during a conflict in the Persian Gulf for use in ongoing urban combat in several cities. One of those units had previously seen action in urban combat in the Caucasus and desert combat during the same Persian Gulf conflict, all without incident. A Major makes the decision to deploy that AWS unit to assist a platoon engaged in block-to-block urban combat in Sana’a. Once the AWS unit is on the ground, a Lieutenant is responsible for telling the AWS where to go. The Lt. General, the Major, and the Lieutenant all had previous experience using this model of AWS and had given similar orders to such units in prior combat situations without incident.

The Lieutenant has lost several men to enemy snipers over the past several weeks. He orders the AWS to accompany one of the squads under his command and preemptively strike any enemy sniper nests it detects (again, an order he had given to other AWS units before without incident). This time, the AWS unit misidentifies a nearby civilian house as containing a sniper nest because houses with similar features had frequently been used as sniper nests in the Caucasus conflict. It launches an RPG at the house. There are no snipers inside, but there are 10 civilians, all of whom are killed by the RPG. Human soldiers who had been fighting in the area would have known that that particular house was unlikely to contain a sniper nest: the glare from the sun off a nearby glass building reduces visibility on that side of the street at the times of day when American soldiers typically patrol the area. The human soldiers knew this well from prior combat in the area, but it was a variable that the AWS had not been programmed to take into consideration.

In my most recent post for FLI on autonomous weapons, I noted that it would be difficult for humans to predict the actions of autonomous weapon systems (AWSs) programmed with machine learning capabilities.  If the military commanders responsible for deploying AWSs were unable to reliably foresee how those systems would operate on the battlefield, it would be difficult to hold the commanders responsible when an AWS violated the law of armed conflict (LOAC).  And in the absence of command responsibility, it is not clear whether any human could be held responsible under the existing LOAC framework.

A side comment from a lawyer on Reddit made me realize that my reference to “foreseeability” requires a bit more explanation.  “Foreseeability” is one of those terms that make lawyers’ ears perk up when they hear it, because it’s a concept that every American law student encounters when learning the principles of negligence in their first-year class on Tort Law.

» Read more

On subjectivity, AI systems, and legal decision-making

Source: Dilbert

The latest entry in my series of posts on autonomous weapon systems (AWSs) suggested that it would be exceedingly difficult to ensure that AWSs complied with the laws of war.  A key reason for this difficulty is that the laws of war depend heavily on subjective determinations.  One might easily expand this point and argue that AI systems cannot–or should not–make any decisions that require interpreting or applying law because such legal determinations are inherently subjective.

Ever the former judicial clerk, I can’t resist pausing for a moment to define my terms.  “Subjective” can have subtly different meanings depending on the context.  Here, I’m using the term to mean something that is a matter of opinion rather than a matter of fact.  In law, I would say that identifying what words are used in the Second Amendment is an objective matter; discerning what those words mean is a subjective matter.  All nine justices who decided DC v. Heller (and indeed, anyone with access to an accurate copy of the Bill of Rights) agreed that the Second Amendment reads: “A well regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.”  They disagreed quite sharply about what those words mean and how they relate to each other.  (Legal experts even disagree about what the commas in the Second Amendment mean.)

Given that definition of “subjective,” here are some observations.

» Read more

Who’s to Blame (Part 2): What is an “autonomous” weapon?

Source: Peanuts by Charles Schulz, via GoComics

Before turning in greater detail to the legal challenges that autonomous weapon systems (AWSs) will present, it is essential to define what “autonomous” means in the weapons context.  It is, after all, the presence of “autonomy” that will distinguish AWSs from earlier weapon technologies.

Most dictionary definitions of “autonomy” focus on the presence of free will or freedom of action.  These are affirmative definitions, stating what autonomy is.  Some dictionary definitions approach autonomy from a different angle, defining it not by the presence of freedom of action, but rather by the absence of external constraints on that freedom (e.g., “the state of existing or acting separately from others”).  This latter approach is more useful in the context of weapon systems, since the existing literature on AWSs seems to use the term “autonomous” to refer to a weapon system’s ability to operate free from human influence and involvement.

Existing AWS commentaries seem to focus on three general methods by which humans can govern an AWS’s actions.  This essay will refer to those methods as direction, monitoring, and control.  A weapon system’s “autonomy” therefore refers to the degree to which the weapon system operates free from human direction, monitoring, and/or control.

Human direction, in this context, refers to the extent to which humans specify the parameters of a weapon system’s operation, from the initial design and programming of the system all the way to battlefield orders regarding the selection of targets and the timing and method of attack.  Monitoring refers to the degree to which humans actively observe and collect information on a weapon system’s operations, whether through a live source such as a video feed or through regular reviews of data regarding the system’s operations.  And control is the degree to which humans can intervene in real time to change what a weapon system is currently doing, such as by actively controlling the system’s physical movement and combat functions or by shutting the machine down if the system malfunctions.

» Read more