The first casualty of vehicle automation

Tesla’s Autopilot


Without a doubt, the biggest thing in the news this week from a Law and AI perspective is that a Tesla Model S driver was killed while the vehicle had Tesla’s “Autopilot” feature activated. This is the first–or at least the first widely reported–fatality caused by a vehicle that, for all practical purposes, had an AI driver in control. The big question seems to be whether the deceased driver misused the Autopilot feature when he gave it unfettered control over the vehicle.

First rolled out by Tesla last year, Autopilot is probably the most advanced suite of self-driving technologies available in a consumer automobile to date.  Autopilot was made available to drivers while it was still in its real-world testing or “beta” phase.  Making products that are in “beta” available to consumers while the kinks are still getting worked out is par for the course in the tech industry.  But in the auto industry?  Not so much.  In that world, it is a ballsy move to make a system that performs safety-critical functions available to drivers on the road while all but explicitly admitting that it has not yet been thoroughly tested.

In a press release regarding the accident, Tesla points out that in order to activate Autopilot, drivers have to explicitly acknowledge that (1) the system is in beta and (2) they must keep their hands on the steering wheel.  Moreover, the press release states that the system “makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.”
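Tesla’s description amounts to a simple escalation loop: check whether hands are on the wheel, alert the driver if they are not, and gradually slow the car if the alerts go unanswered. Purely as an illustration, here is a minimal sketch of that kind of loop in Python; the state names, thresholds, and function are my own assumptions, not Tesla’s actual implementation.

```python
# Minimal sketch of the hands-on-wheel escalation loop described in Tesla's
# press release: periodic checks, visual/audible alerts when hands are not
# detected, then a gradual slow-down until hands-on is detected again.
# All names and thresholds are illustrative assumptions, not Tesla's code.
from enum import Enum, auto


class MonitorState(Enum):
    HANDS_ON = auto()      # hands detected on the wheel; no intervention
    ALERTING = auto()      # visual and audible alerts active
    SLOWING_DOWN = auto()  # gradually reducing speed until hands return


def monitor_step(hands_detected: bool, seconds_without_hands: float,
                 alert_window: float = 5.0) -> MonitorState:
    """Advance the monitor by one check; the 5-second window is made up."""
    if hands_detected:
        return MonitorState.HANDS_ON
    if seconds_without_hands < alert_window:
        return MonitorState.ALERTING       # warn the driver first
    return MonitorState.SLOWING_DOWN       # no response: start slowing the car


# Example: a driver who ignores the alerts for 8 seconds
print(monitor_step(hands_detected=False, seconds_without_hands=8.0))
# MonitorState.SLOWING_DOWN
```

The point of laying it out this way is that the whole safeguard hinges on the driver responding within some window; it does nothing for an accident that unfolds faster than the escalation.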

That all certainly helps from a legal liability standpoint.  But accidents can happen in a split second when a driver takes his or her hands off the wheel or eyes off the road.  Alerts and cues might not be effective when an accident unfolds quickly.  Another interesting (albeit unrelated) issue is whether the fact that the system is named “Autopilot” might have given drivers the impression that the system had greater capabilities than it actually did, thus undercutting the effectiveness of the disclaimers, alerts, and cues.

I’ll stop here for now because I hesitate to give a deep legal analysis based on fairly early press reports.  My gut actually tells me that Tesla is probably on solid ground from a legal liability standpoint.  And in the aggregate, I have no doubt that Autopilot is no more dangerous than a human driver, and that greater automation will increase driver safety.  But this illustrates the risks inherent in being first to market with hyped-up new technologies.
