Law and AI notes: Kay Firth-Butterfield on the NHTSA letter, Moshe Vardi on AI-induced inequality, and more

Some brief Presidents Day updates on happenings in the AI law and policy world:

  • Kay Firth-Butterfield has written an excellent summary of the recent NHTSA letter regarding Google’s self-driving car on the blog for Lucid AI.  She emphasizes NHTSA’s limited role and concludes that the letter “does not change the law but provides an interpretation or an ‘agency’s view,’” and, echoing the sentiments expressed in Friday’s post, attributes the “media’s erroneous statements around the ‘approval’ of SRS as ‘driver’” to “the simple fact that NHTSA has to ‘assume’ that the SRS is a driver of the car in order to answer the letter from Google.”
  • Statements by several noted AI academics at an AAAS conference have generated renewed buzz about the potential impact of AI on the labor market.  Moshe Vardi warned that AI could exacerbate economic inequality and expressed concern that the issue is “nowhere on the radar screen” among policymakers even though it’s a presidential election year.
  • The New York Times has a short op-ed by Justin Long (not the actor) on his new AI-based dating algorithm that he says will “streamline and improve the online matching process” by automating “selection and basic introductory conversations.” (I guess swiping right requires too much effort?)  No word on who would be liable if the AI sets you up on a horrible blind date.
  • Law and AI is now on Twitter: @AiPolicy

No, NHTSA did not declare that AIs are legal drivers

Source: NDTV Car and Bike

A slew of breathless stories have been published over the past couple of days saying that the National Highway Traffic Safety Administration (NHTSA) has declared that “Google’s driverless cars are now legally the same as a human driver,” or that “A.I. in Autonomous Cars Can Legally Count as the Driver.”  These stories typically go on to say that NHTSA’s decision marks “a major step toward ultimately winning approval for autonomous vehicles on the roads” or something along those lines.  CNN went even further, saying that NHTSA “gave its OK to the idea of a self-driving car without a steering wheel and so forth, that cannot be controlled by a human driver.”

Unfortunately, these news stories badly misstate–or at the very least overstate–what NHTSA has actually said.  First, the letter written by NHTSA’s chief counsel that served as the main source for these news stories does not say that an AI system can be a “driver” under the NHTSA rule that defines that term.  It merely assumes that an AI system can be a legal driver for purposes of interpreting whether Google’s self-driving car would comply with several NHTSA rules and regulations governing the features and components of motor vehicles.  In the legal world, that assumption is very, very different from a ruling that the Google car’s AI system actually qualifies as a legal driver.

NHTSA indicated that it would initiate its formal rulemaking process to consider whether it should update the definition of “driver” in light of the “changing circumstances” presented by self-driving cars.  But federal agency rulemaking is a long and complicated process, and the letter makes clear that dozens of rule changes would have to be made before a car like Google’s could comply with NHTSA standards.  Far from marking a significant step toward filling our roads with robocars, the letter underscores just how many legal and regulatory hurdles will have to be cleared before autonomous vehicles can freely operate on American roads.

The NHTSA letter does not say that AIs are legal drivers

The basis for the recent barrage of news stories is a letter that NHTSA’s Chief Counsel sent to the head of Google’s Self-Driving Car Project in response to a long series of questions regarding whether the Google car would, as designed, comply with NHTSA regulations that refer in some way to the “driver” or “operator” of a motor vehicle.  NHTSA is the federal agency tasked with creating and enforcing design and manufacturing standards for cars, most notably with respect to safety features.  Unsurprisingly, most of the current standards–many of which have been around for decades–operate under the assumption that a human driver located in the front-left seat of the vehicle will be steering the car, applying the brakes, turning on the headlights, and so forth.  Many NHTSA vehicle standards therefore require that the vehicle’s major control mechanisms be physically accessible from the front-left seat.

» Read more

Who’s to Blame (Part 2): What is an “autonomous” weapon?

Source: Peanuts by Charles Schulz via @GoComics

Before turning in greater detail to the legal challenges that autonomous weapon systems (AWSs) will present, it is essential to define what “autonomous” means in the weapons context.  It is, after all, the presence of “autonomy” that will distinguish AWSs from earlier weapon technologies.

Most dictionary definitions of “autonomy” focus on the presence of free will or freedom of action.  These are affirmative definitions, stating what autonomy is.  Some dictionary definitions approach autonomy from a different angle, defining it not by the presence of freedom of action, but rather by the absence of external constraints on that freedom (e.g., “the state of existing or acting separately from others”).  This latter approach is more useful in the context of weapon systems, since the existing literature on AWSs seems to use the term “autonomous” to refer to a weapon system’s ability to operate free from human influence and involvement.

Existing AWS commentaries seem to focus on three general methods by which humans can govern an AWS’s actions.  This essay will refer to those methods as direction, monitoring, and control.  A weapon system’s “autonomy” therefore refers to the degree to which the weapon system operates free from human direction, monitoring, and/or control.

Human direction, in this context, refers to the extent to which humans specify the parameters of a weapon system’s operation, from the initial design and programming of the system all the way to battlefield orders regarding the selection of targets and the timing and method of attack.  Monitoring refers to the degree to which humans actively observe and collect information on a weapon system’s operations, whether through a live source such as a video feed or through regular reviews of data regarding a weapon system’s operations.  And control is the degree to which humans can intervene in real time to change what a weapon system is currently doing, such as by actively controlling the system’s physical movement and combat functions or by shutting the machine down if the system malfunctions.

» Read more

AI in the Legal Workplace: Collaboration or Competition?

Source: Dilbert


As AI systems become more widespread and versatile, they will undoubtedly have a major impact on our workforce and economy.  On a macro scale–that is, across the labor market as a whole–whether AI’s impact will be positive or negative is very much an open debate.  The same is true of the impact of AI on many specific occupations.  Roughly half of jobs in the United States are “vulnerable” to automation, according to a 2013 study.  But whether AI systems will prove “good” or “bad” for workers in a specific profession will depend in large part on whether AI serves as a complement to human workers or acts as a replacement for them.

In the legal profession, for instance, the rise of predictive coding and improved scan-and-search software has given law firms the option of automating some of the most time-consuming (and therefore expensive) aspects of identifying relevant documents during litigation, a.k.a. document review.  Document review has long been bread-and-butter work for young lawyers, especially at law firms that handle complex litigation cases, which can require sifting through and poring over thousands or even millions of pages of documents.

» Read more

Who’s to Blame (Part 1): Law, Accountability, and Autonomous Weapons (A Brief Introduction)

Editor’s Note: This is the first entry in a weekly series of posts that I am writing for the Future of Life Institute regarding the legal vacuum surrounding autonomous weapons.  This entry is cross-posted on FLI’s website.  Subsequent posts in this series cover the definition of “autonomous” in the context of weapons, the reasons why the deployment of AWSs could lead to violations of the laws of armed conflict, the accountability problem that autonomous weapons would present (including a deeper look at the problem of foreseeing what an AWS will do), and potential legal approaches to autonomous weapons.


The year is 2020 and intense fighting has once again broken out between Israel and Hamas militants based in Gaza.  In response to a series of rocket attacks, Israel rolls out a new version of its Iron Dome air defense system.  Designed in a huge collaboration involving defense companies headquartered in the United States, Israel, and India, this third generation of the Iron Dome can act with unprecedented autonomy and features cutting-edge artificial intelligence technology that allows it to analyze a tactical situation by drawing from information gathered by an array of onboard sensors and a variety of external data sources.  Unlike prior generations of the system, the Iron Dome 3.0 is designed not only to intercept and destroy incoming missiles, but also to identify and automatically launch a precise, guided-missile counterattack against the site from which the incoming missile was launched.  The day after the new system is deployed, a missile launched by the system strikes a Gaza hospital far removed from any militant activity, killing scores of Palestinian civilians.  Outrage swells within the international community, which demands that whoever is responsible for the atrocity be held accountable.  Unfortunately, no one can agree on who that is…

Much has been made in recent months and years about the risks associated with the emergence of artificial intelligence (AI) technologies and, with it, the automation of tasks that once were the exclusive province of humans.  But legal systems have not yet developed regulations governing the safe development and deployment of AI systems or clear rules governing the assignment of legal responsibility when autonomous AI systems cause harm.  Consequently, it is quite possible that many harms caused by autonomous machines will fall into a legal and regulatory vacuum.  The prospect of autonomous weapons systems (AWSs) throws these issues into especially sharp relief.  AWSs, like all military weapons, are specifically designed to cause harm to human beings—and lethal harm, at that.  But applying the laws of armed conflict to attacks initiated by machines is no simple matter.

The core principles of the laws of armed conflict are straightforward enough.  Most important to the AWS debate, attackers must distinguish between civilians and combatants, strike only when it is actually necessary to a legitimate military purpose, and refrain from an attack if the likely harm to civilians outweighs the military advantage that would be gained.  But what if the attacker is a machine?  How can a machine make the seemingly subjective determination regarding whether an attack is militarily necessary?  Can an AWS be programmed to quantify whether the anticipated harm to civilians would be “proportionate”?  Does the law permit anyone other than a human being to make that kind of determination?  Should it?

But the issue goes even deeper than simply determining whether the laws of war can be encoded into the AI components of an AWS.  Even if everyone agreed that a particular AWS attack constituted a war crime, would our sense of justice be satisfied by “punishing” that machine?  I suspect that most people would answer that question with a resounding “no.”  Human laws demand human accountability.  Unfortunately, as of right now, there are no laws at the national or international level that specifically address whether, when, or how AWSs can be deployed, much less who (if anyone) can be held legally responsible if an AWS commits an act that violates the laws of armed conflict.  This makes it difficult for those laws to have the deterrent effect that they are designed to have; if no one will be held accountable for violating the law, then no one will feel any particular need to ensure compliance with the law.  On the other hand, if there are human(s) with a clear legal responsibility to ensure that an AWS’s operations comply with the laws of war, then horrors such as the hospital bombing described in the introduction to this essay would be much less likely to occur.

So how should the legal voids surrounding autonomous weapons–and for that matter, AI in general–be filled?  Over the coming weeks and months, that question–along with the other questions raised in this essay–will be examined in greater detail on the website of the Future of Life Institute and on the Law and AI blog.  Stay tuned.
