Analysis of the USDOT’s Regulatory Review for Self-Driving Cars (Part 1): References to “drivers” in the federal regulations

Editor’s Note: Apologies for the unannounced gap between posts.  I have been on parental leave for the past two weeks bonding with my newborn daughter.  In lieu of the traditional cartoon, I will be spamming you today with a photo of Julia (see bottom of post).  Now, back to AI.


The U.S. Department of Transportation recently released a report “identifying potential barriers and challenges for the certification of automated vehicles” under the current Federal Motor Vehicle Safety Standards (FMVSS).  Identifying such barriers is essential to the development and deployment of autonomous vehicles because the manufacturer of a new motor vehicle must certify that it complies with the FMVSS.

The FMVSS require American cars and trucks to include numerous operational and safety features, ranging from brake pedals to warning lights to airbags.  They also specify test procedures designed to assess new vehicles’ safety and compliance.

The new USDOT report consists of two components: (1) a review of the FMVSS “to identify which standards include an implicit or explicit reference to a human driver,” which the report’s authors call a driver reference scan; and (2) a review that evaluates the FMVSS against “13 different automated vehicle concepts, ranging from limited levels of automation . . . to highly automated, driverless concepts with innovative vehicle designs,” termed an automated vehicle concepts scan.  This post will address the driver reference scan, which dovetails nicely with my previous post on automated vehicles.

As noted in that post, the FMVSS define a “driver” as “the occupant of a motor vehicle seated immediately behind the steering control system.”  It is clear both from this definition and from other regulations that “driver” refers to a human driver.  (And again, as explained in my previous post, NHTSA’s recent letter to Google did not change this regulation or redefine “driver” under the FMVSS, media reports to the contrary notwithstanding.)  Any FMVSS reference to a “driver” thus presents a regulatory compliance challenge for makers of truly self-driving cars, since such vehicles may not have a human driver–or, in some cases, even a human occupant.
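To make the mechanics of a driver reference scan concrete, here is a minimal, purely illustrative Python sketch.  The standard names, text snippets, and keyword patterns below are my own assumptions for illustration only; the USDOT report describes an expert review of the full standards, not an automated keyword search.

```python
import re

# Hypothetical snippets standing in for FMVSS text; the actual report
# reviewed the full standards, not toy excerpts like these.
standards = {
    "FMVSS 101 (Controls and Displays)": (
        "Each control must be located so that it is operable by the driver."
    ),
    "FMVSS 135 (Light Vehicle Brake Systems)": (
        "The service brakes shall be capable of being applied by a foot control."
    ),
    "FMVSS 205 (Glazing Materials)": (
        "Glazing materials must meet the specified transparency requirements."
    ),
}

# Explicit references name the driver or operator directly.
EXPLICIT = re.compile(r"\b(driver|operator)\b", re.IGNORECASE)
# Crude stand-ins for "implicit" references (requirements that assume a
# human hand, foot, or seating position); the report relied on human
# judgment for these.
IMPLICIT = re.compile(r"\b(foot|hand|seated|steering wheel)\b", re.IGNORECASE)

for name, text in standards.items():
    if EXPLICIT.search(text):
        print(f"{name}: explicit driver reference")
    elif IMPLICIT.search(text):
        print(f"{name}: possible implicit driver reference (flag for review)")
    else:
        print(f"{name}: no driver reference detected")
```

Even this toy version shows why the distinction matters: an explicit reference to a “driver” is easy to catch, while implicit references (a foot-operated brake, a control reachable from the driver’s seat) require judgment about what the standard assumes.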

» Read more

Will technology send us stumbling into negligence?

Two stories that broke this week illustrate the hazards of our ever-increasing reliance on technology.  The first story is about an experiment conducted at Georgia Tech in which a majority of students disregarded their common sense and followed the path indicated by a robot wearing a sign that read “EMERGENCY GUIDE ROBOT”:

A university student is holed up in a small office with a robot, completing an academic survey. Suddenly, an alarm rings and smoke fills the hall outside the door. The student is forced to make a quick choice: escape via the clearly marked exit that they entered through, or head in the direction the robot is pointing, along an unknown path and through an obscure door.

The vast majority of students–26 out of the 30 included in the experiment–went where the robot was pointing.  As it turned out, there was no exit in that direction.  The remaining four students either stayed in the room or were unable to complete the experiment.  No student, it seems, simply went out the way they came in.

Many of the students attributed their decision to disregard the correct exit to the “Emergency Guide Robot” sign, which suggested that the robot was specifically designed to tell them where to go in emergency situations.  According to the Georgia Tech researchers, these results suggest that people will “automatically trust” a robot that “is designed to do a particular task.”  The lead researcher analogized this trust “to the way in which drivers sometimes follow the odd routes mapped by their GPS devices,” saying that “[a]s long as a robot can communicate its intentions in some way, people will probably trust it in most situations.”

As if on cue, this happened the very same day that the study was released:

» Read more

No, NHTSA did not declare that AIs are legal drivers

A slew of breathless stories have been published over the past couple of days saying that the National Highway Traffic Safety Administration (NHTSA) has declared that “Google’s driverless cars are now legally the same as a human driver,” or that “A.I. in Autonomous Cars Can Legally Count as the Driver.”  These stories typically go on to say that NHTSA’s decision marks “a major step toward ultimately winning approval for autonomous vehicles on the roads” or something along those lines.  CNN went even further, saying that NHTSA “gave its OK to the idea of a self-driving car without a steering wheel and so forth, that cannot be controlled by a human driver.”

Unfortunately, these news stories badly misstate–or at the very least overstate–what NHTSA has actually said.  First, the letter written by NHTSA’s chief counsel that served as the main source for these news stories does not say that an AI system can be a “driver” under the NHTSA rule that defines that term.  It merely assumes that an AI system can be a legal driver for purposes of interpreting whether Google’s self-driving car would comply with several NHTSA rules and regulations governing the features and components of motor vehicles.  In the legal world, that assumption is very, very different from a ruling that the Google car’s AI system actually qualifies as a legal driver.

NHTSA indicated that it would initiate its formal rulemaking process to consider whether it should update the definition of “driver” in light of the “changing circumstances” presented by self-driving cars.  But federal agency rulemaking is a long and complicated process, and the letter makes clear that dozens of rule changes would have to be made before a car like Google’s could comply with NHTSA standards.  Far from marking a significant step toward filling our roads with robocars, the letter underscores just how many legal and regulatory hurdles will have to be cleared before autonomous vehicles can freely operate on American roads.

The NHTSA letter does not say that AIs are legal drivers

The basis for the recent barrage of news stories is a letter that NHTSA’s Chief Counsel sent to the head of Google’s Self-Driving Car Project in response to a long series of questions regarding whether the Google car would, as designed, comply with NHTSA regulations that refer in some way to the “driver” or “operator” of a motor vehicle.  NHTSA is the federal agency tasked with creating and enforcing design and manufacturing standards for cars, most notably with respect to safety features.  Unsurprisingly, most of the current standards–many of which have been around for decades–operate under the assumption that a human driver located in the front-left seat of the vehicle will be steering the car, applying the brakes, turning on the headlights, and so forth.  Many NHTSA vehicle standards therefore require that the vehicle’s major control mechanisms be physically accessible from the front-left seat.

» Read more