No, NHTSA did not declare that AIs are legal drivers

Source: NDTV Car and Bike

A slew of breathless stories have been published over the past couple of days saying that the National Highway Traffic Safety Administration (NHTSA*) has declared that “Google’s driverless cars are now legally the same as a human driver,” or that “A.I. in Autonomous Cars Can Legally Count as the Driver.”  These stories typically go on to say that NHTSA’s decision marks “a major step toward ultimately winning approval for autonomous vehicles on the roads” or something along those lines.  CNN went even further, saying that NHTSA “gave its OK to the idea of a self-driving car without a steering wheel and so forth, that cannot be controlled by a human driver.”

Unfortunately, these news stories badly misstate–or at the very least overstate–what NHTSA has actually said.  First, the letter written by NHTSA’s chief counsel that served as the main source for these news stories does not say that an AI system can be a “driver” under the NHTSA rule that defines that term.  It merely assumes that an AI system can be a legal driver for purposes of interpreting whether Google’s self-driving car would comply with several NHTSA rules and regulations governing the features and components of motor vehicles.  In the legal world, that assumption is very, very different from a ruling that the Google car’s AI system actually qualifies as a legal driver.

NHTSA indicated that it would initiate its formal rulemaking process to consider whether it should update the definition of “driver” in light of the “changing circumstances” presented by self-driving cars.  But federal agency rulemaking is a long and complicated process, and the letter makes clear that dozens of rule changes would have to be made before a car like Google’s could comply with NHTSA standards.  Far from marking a significant step toward filling our roads with robocars, the letter underscores just how many legal and regulatory hurdles will have to be cleared before autonomous vehicles can freely operate on American roads.

The NHTSA letter does not say that AIs are legal drivers

The basis for the recent barrage of news stories is a letter that NHTSA’s Chief Counsel sent to the head of Google’s Self-Driving Car Project in response to a long series of questions regarding whether the Google car would, as designed, comply with NHTSA regulations that refer in some way to the “driver” or “operator” of a motor vehicle.  NHTSA is the federal agency tasked with creating and enforcing design and manufacturing standards for cars, most notably with respect to safety features.  Unsurprisingly, most of the current standards–most of which have been around for decades–operate under the assumption that a human driver located in the front-left seat of the vehicle will be steering the car, applying the brakes, turning on the headlights, and so forth.  Many NHTSA vehicle standards therefore require that the vehicle’s major control mechanisms be physically accessible from the front-left seat.

The NHTSA regulations include a “Definitions” section, Section 571.3, that defines “driver” as “the occupant of a motor vehicle seated immediately behind the steering control system.”  In its initial letter to NHTSA, Google argued that it would be somewhat nonsensical to say that a human inside the vehicle is the “driver” of its car regardless of where that human is “seated,” because the Google car has been designed to give its onboard AI system complete steering control and, indeed, control over all driving operations.  NHTSA agreed, saying:

No human occupant of the SDV could meet the definition of “driver” in Section 571.3 given Google’s described motor vehicle design–even if it were possible for a human occupant to determine the location of Google’s steering control system, and sit “immediately behind” it, that human occupant would not be capable of actually driving the vehicle as described by Google.  If no human occupant of the vehicle can actually drive the vehicle, it is more reasonable to identify the “driver” as whatever (as opposed to whoever) is doing the driving.  In this instance, an item of motor vehicle equipment, the SDS [self-driving system], is actually driving the vehicle.

The various news stories covering this letter plucked the final sentence of the above quote out of context and misconstrued it as a formal ruling on whether Google’s AI system is a “driver” under NHTSA’s definition of that term.  But viewed in its full context, NHTSA plainly was not ruling that the AI system would be considered the legal driver.  Instead, it was concluding that a human could not be considered the Google car’s legal driver because it is the AI system rather than any human that actually controls the car’s operations.

Earlier in the same paragraph, NHTSA said that it would treat the AI system as the driver as “a foundational starting point for the interpretations” of NHTSA rules that Google had requested.  Here too, if the “foundational starting point” language is viewed only in isolation, this can sound like a formal ruling on how NHTSA would view a self-driving car’s AI system.  But once again, it is clear from context that NHTSA merely assumed for the sake of argument that the AI system could be considered the legal driver and then interpreted a number of its rules using that assumption.  That narrow assumption has no legal force and is not a “ruling.”

The very next paragraph underscores that NHTSA was not rewriting its own rules in this letter.  There, the letter says that “NHTSA will consider initiating rulemaking to address whether the definition of ‘driver’ in Section 571.3 should be updated in response to changing circumstances.”  In other words, the definition of driver would have to be amended through the formal rulemaking process to take into account the new reality of cars that can be controlled by AI systems.  Tellingly, not a single one of the news stories covering NHTSA’s letter quotes that sentence–nor, as far as I can tell, did any of them contact NHTSA or a lawyer with expertise in administrative law for comment.

Many laws will have to be rewritten before self-driving cars hit the roads

NHTSA also made it clear that many NHTSA rules that do not even contain the word “driver” would have to be completely rewritten in order for a self-driving car to comply with them.  For example, Google’s car design does not include “foot control” service brakes or “a hand or foot control” parking brake, both of which are required under NHTSA regulations.  Google asked NHTSA to interpret the “foot” and/or “hand” control brake requirements as simply being inapplicable and irrelevant to self-driving cars.

Source: Calvin and Hobbes Comic Strip, February 17, 2016 on GoComics.com

But NHTSA said it could not do that because the written rules clearly require that human-appendage-controlled brakes be present; NHTSA cannot just pretend those rules do not exist.  Even assuming that a reasonable interpretation of “driver” might include a self-driving car’s AI system, no reasonable interpretation of “foot control” brakes could include brakes that cannot be controlled by a foot.  So unless a human is given control of braking in Google’s car (which would create safety issues) or the Google car’s AI system grows feet, the Google car would fail to meet NHTSA standards.  The letter contains a number of other examples like this–rules that explicitly require a physical feature (such as a turn signal that can be deactivated by rotating a steering wheel) that are not part of Google’s self-driving car and that could not be added safely without returning control over driving back to a human.

And even for those rules that do not necessarily require physical features, NHTSA cautioned that it has not developed “standard[s] and testing procedures” that would allow NHTSA to confirm that a self-driving car meets NHTSA standards.  For example, NHTSA has rules regarding rear visibility premised on a human driver who uses rear-view mirrors.  There are standard procedures that NHTSA has developed to determine whether a car design provides a human driver with adequate rear visibility.  NHTSA has not, however, developed rules to test whether a self-driving car’s sensors and cameras would be adequate to ensure that the AI system can perceive objects behind it.  Until such testing procedures are established, NHTSA would have no way to confirm the Google car’s compliance with its standards.  That would leave Google’s car in a legally precarious position, dependent on getting (and keeping) exemptions from dozens of NHTSA rules.

Finally, the news stories tend to overstate the importance of NHTSA when it comes to rules governing who (or what) can be a driver.  Each state gets to determine for itself who (or what) gets to drive within its borders.  That’s why a 15-year-old can drive unsupervised in Idaho but not next door in Oregon or Washington.  Even if NHTSA were to say that Google’s design for its car complies with NHTSA standards, that would not mean that Google’s car would actually be able to operate on the road, because the states would still have to change their laws to allow the licensing of autonomous vehicles.


When laws on the books fail to keep up with the times, courts and agencies often look for creative ways to fit new square pegs into the round holes of existing law.  But as the NHTSA letter demonstrates, that can only get you so far.  Many laws and rules will have to be changed in order for self-driving cars to comply with them.  And considering the thousands of pages of rules governing drivers and motor vehicles that are on the books across the U.S., that is going to take a while.

(* The proper short form for the National Highway Traffic Safety Administration is “NHTSA” rather than “the NHTSA.”  Don’t ask me why.)