Could autonomous vehicles put personal injury lawyers out of business?

A couple weeks ago, CNBC published a thought-provoking commentary by Vasant Dhar on autonomous vehicles (AVs).  Much of the hype surrounding AVs has focused on their ability to prevent accidents.  This makes sense given that, as Dhar notes, “more than 90 percent of accidents result from human impairment, such as drunk driving or road rage, errant pedestrians, or just plain bad driving.”

But Dhar also points out a somewhat less obvious but equally likely consequence of the rise of AVs: because the sensors and onboard systems in AVs will collect and record massive amounts of data, AVs will greatly simplify the assignment of liability when accidents occur.  This could eliminate the raison d’être of no-fault insurance and liability, which remains the rule in many U.S. states and Canadian provinces.  It would also reduce the uncertainty that leads to costly litigation in jurisdictions where liability for a car accident depends on who is found to be at fault.  As Dhar explains:

Big data from onboard systems changes everything because we now have the ability to know the physics associated with accidents . . . .

The ever-increasing numbers of sensors on roads and vehicles move us towards a world of complete information where causes of accidents will be determined more reliably and fault easier to establish. With the detail and transparency that big data provides, no fault accidents will not be an option.

This may strike panic in the hearts of personal injury attorneys, but it would be good news for pretty much everyone else.

In the long run, increasing automation and the introduction of more sophisticated vehicle sensor systems will also have positive downstream effects. As Dhar notes, “the massive increase in data collection from vehicles . . . will happen irrespective of whether vehicles are ever fully autonomous” as manufacturers continue to load more sophisticated sensors and systems onto human-driven cars.  This data “could be used to design incentives and reward desirable driving practices in the emerging hybrid world of human and driverless vehicles. In other words, better data could induce better driving practices and lead to safer transportation with significantly lower insurance and overall costs to society.”
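For the curious, here is a toy sketch of how onboard sensor data might feed that kind of incentive scheme. To be clear, this is my own back-of-the-envelope illustration, not anything from Dhar’s commentary; the event types, weights, and 0-100 scale are all assumptions.

```python
# A toy, illustrative driving score computed from hypothetical trip telemetry.
# Not from Dhar's commentary; event types and weights are assumptions.
from dataclasses import dataclass

@dataclass
class TripTelemetry:
    miles: float
    hard_braking_events: int   # decelerations beyond some g-force threshold
    speeding_seconds: int      # time spent above the posted limit
    late_night_miles: float    # miles driven between midnight and 4 a.m.

def driving_score(trip: TripTelemetry) -> float:
    """Return a 0-100 score for a trip; higher means safer driving."""
    if trip.miles == 0:
        return 100.0
    penalties = (
        5.0 * trip.hard_braking_events / trip.miles
        + 0.2 * trip.speeding_seconds / trip.miles
        + 2.0 * trip.late_night_miles / trip.miles
    )
    return max(0.0, 100.0 - penalties)

# Example: a 20-mile trip with two hard stops and a minute of speeding.
print(driving_score(TripTelemetry(miles=20, hard_braking_events=2,
                                  speeding_seconds=60, late_night_miles=0)))
```

An insurer or a state incentive program could then tie premiums or rewards to scores along these lines, which is the behavior-shaping use of vehicle data Dhar has in mind.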

You can read Dhar’s full commentary here.


Drone Law Today podcast: Where Artificial Intelligence Goes from Here

I had the pleasure of doing a podcast interview with Steve Hogan of Drone Law Today last week, and we had a fascinating, wide-ranging discussion on the future of artificial intelligence.  The podcast episode is now available on Drone Law Today’s website.  If you haven’t heard the podcast before, check it out and subscribe–there are not a lot of regularly updated resources out there for people interested in the intersection of law and emerging technologies, and Steve’s podcast is a great one.

California takes AI marketing off Autopilot


California issued new draft regulations this week on the marketing of autonomous vehicles, and they include a not-very-subtle dig at Tesla’s much-ballyhooed “Autopilot” system.

As background, the new California regulations specify in greater detail than before what type of technology counts as “autonomous” in the context of cars. Specifically, they provide that an “autonomous vehicle” must qualify as Level 3, Level 4, or Level 5 under the Society of Automotive Engineers (SAE) framework.  That framework classifies vehicles along a 0-to-5 scale, with 0 being a standard, fully human-controlled car and 5 being a fully autonomous vehicle.

The kicker came in the very last section of the draft regulations:

§ 227.90(a) No vehicle shall be advertised as an autonomous vehicle unless it [is Level 3, Level 4, or Level 5].


(b) Terms such as “self-driving”, “automated”, “auto-pilot”, or other statements made that are likely to induce a reasonably prudent person to believe a vehicle is autonomous, as defined, constitute an advertisement that the vehicle is autonomous for the purposes of this section and Vehicle Code section 11713.

Under this draft regulation, the problem with Tesla’s Autopilot technology is that, while it is several leaps ahead of everything else on the consumer market right now, it does not make a vehicle truly autonomous.  Rather, Autopilot is a limits-pushing example of what SAE classifies as “Level 2” automation, which still depends on the human driver to actively monitor the vehicle’s surroundings.  To qualify as Level 3, a vehicle must be capable of monitoring its surroundings on its own, with the human driver needing only to be ready to take the wheel if the vehicle’s computer specifically alerts the driver to do so.
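For readers who like to see the logic laid out, here is a minimal sketch of how the draft rule operates, assuming the vehicle’s SAE level is known. The trigger terms come from subsection (b) quoted above; the function, its name, and the compliance check itself are my own illustration, not anything in the regulation.

```python
# Illustrative sketch of the draft rule's logic; not an official implementation.
# Trigger terms are from subsection (b); the Level 3 cutoff is from subsection (a).
AUTONOMY_IMPLYING_TERMS = {"self-driving", "automated", "auto-pilot", "autopilot"}

def advertisement_permitted(marketing_terms, sae_level: int) -> bool:
    """Return True if the ad copy would be allowed under the draft rule.

    Terms suggesting autonomy are only permitted when the vehicle is
    SAE Level 3, 4, or 5 (i.e., it can monitor its own surroundings).
    """
    implies_autonomy = any(
        term.lower() in AUTONOMY_IMPLYING_TERMS for term in marketing_terms
    )
    return (not implies_autonomy) or sae_level >= 3

# A Level 2 system marketed as "Autopilot" would fail the check:
print(advertisement_permitted({"Autopilot"}, sae_level=2))  # False
print(advertisement_permitted({"Autopilot"}, sae_level=3))  # True
```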

Presumably, the concern that drove this new regulation is that an ordinary Jane or Joe might hear terms like “Autopilot” and think that the vehicle is truly autonomous.  After all, the autopilots on modern airliners are generally at least Level 3 (although thankfully, regulations require pilots to treat their planes like they are at most Level 2).*  So the average person might reasonably assume that a car with a system called “Autopilot” does not require a driver to be paying full attention to what’s going on outside the car.

Now, whether that assumption was reasonable in the case of Tesla’s Autopilot is a separate question.  Tesla is going to argue that they told drivers every which way that they still needed to pay attention–and, indeed, have their hands actually on the steering wheel–even if Autopilot is active.  California, however, has decided that that’s not enough, and that the name “Autopilot” just carries too strong a suggestion for it to be used unless the vehicle is really capable of operating autonomously.

I’d say “I called it,” but really every lawyer called it this spring, when news broke about the death of a Tesla driver who was using Autopilot as, well, an autopilot.  The lesson is that with emerging technologies in safety-critical areas such as transportation, it’s wise to have a lawyer in the room when you’re developing a branding and marketing strategy.

There’s certainly plenty more to talk about on the autonomous vehicle front that I haven’t covered over the past few weeks–perhaps most notably, the NHTSA guidelines on autonomous vehicles released last month.  Stay tuned, as the pace will be picking up around here as autumn kicks into full gear.

* Side note: A great article in the Washington Post earlier this year discussed how the increasing capabilities of airline autopilots might be dulling the piloting skills of human pilots–which is a real problem when planes encounter crises the automated systems are not designed to handle.  I suspect the same thing will happen to drivers on the ground as vehicles become more automated, reducing the frequency with which human drivers need to use and hone their driving skills.

Law and AI Quick Hits: September 26-30, 2016

Credit: Charles Schulz

A short round up of recent news of interest to Law and AI.

In the Financial Times, John Thornhill writes on “the darker side of AI if left unmanaged: the impact on jobs, inequality, ethics, privacy and democratic expression.”  Thornhill takes several proverbial pages from the Stanford 100-year study on AI, but does not ultimately offer his view of what effective AI “management” might look like.

Patrick Tucker writes in Defense One that a survey funded by the Future of Life Institute found “that the U.S. military more commonly uses AI not to help but to replace human operators, and, increasingly, human decision making.”  In the process, he gives voice to the fears held by many people (well, at least by me) of how an autonomous weapons arms race might play out:

Today, the United States continues to affirm that it isn’t interested in removing the human decision-maker from “the loop” in offensive operations like drone strikes (at least not completely). That moral stand might begin to look like a strategic disadvantage against an adversary that can fire much faster, conduct more operations, hit more targets in a smaller amount of time by removing the human from loop.

Microsoft CEO Satya Nadella sat down for an interview with Dave Gershgorn of Quartz.  Among other things, Nadella discusses the lessons Microsoft learned from Tay the Racist Chatbot–namely the need to build “resiliency” into learning AI systems to protect them from threats that might cause them to “learn” bad things.  In the case of Tay, Microsoft failed to make the chatbot resilient to trolls, with results that were at once amusing and troubling.
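What might “resiliency” look like in practice? Here is one small, hypothetical illustration: screening user messages before a learning chatbot folds them into its training data. The blocklist and the repeated-phrase heuristic below are my own assumptions, not Microsoft’s actual fix.

```python
# Minimal sketch of one kind of "resiliency": filtering user messages before a
# learning chatbot adds them to its training corpus. The blocklist and the
# repeated-phrase heuristic are illustrative; this is not Microsoft's approach.
from collections import Counter

BLOCKLIST = {"slur1", "slur2"}   # placeholder for a real moderation list

def resilient_ingest(messages: list) -> list:
    """Return only messages that pass basic abuse and coordination filters."""
    counts = Counter(m.strip().lower() for m in messages)
    kept = []
    for msg in messages:
        text = msg.strip().lower()
        if any(term in text for term in BLOCKLIST):
            continue              # drop abusive content
        if counts[text] > 3:
            continue              # drop phrases spammed by coordinated trolls
        kept.append(msg)
    return kept
```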

The Partnership on AI: A step in the right direction


Well, by far the biggest AI news story to hit the papers this week was the announcement that five tech industry heavyweights–Microsoft, IBM, Amazon, Facebook, and Google–are joining forces to form a “Partnership on AI”:

The group’s goal is to create the first industry-led consortium that would also include academic and nonprofit researchers, leading the effort to essentially ensure AI’s trustworthiness: driving research toward technologies that are ethical, secure and reliable — that help rather than hurt — while also helping to diffuse fears and misperceptions about it.


“We plan to discuss, we plan to publish, we plan to also potentially sponsor some research projects that dive into specific issues,” Banavar says, “but foremost, this is a platform for open discussion across industry.”

There’s no question this is welcome news.  Each of the five companies that formed this group has been part of the “AI arms race” that has played out over the past few years, as major tech companies have invested massive amounts of money in expanding their AI research, both by acquiring other companies and by recruiting talent.  To a mostly-outside observer such as myself, it seemed for a time like the arms race was becoming an end unto itself–companies were making huge investments in AI without thinking about the long-term implications of AI development.  The Partnership is a good sign that the titans of tech are, indeed, seeing the bigger picture.

» Read more

A peek at how AI could inadvertently reinforce discriminatory policies

Source: Before It’s News

The most interesting story that came up during Law and AI’s little hiatus came from decidedly outside the usual topics covered here–the world of beauty pageants.  Well, sort of:

An online beauty contest called, run by Youth Laboratories . . . solicited 600,000 entries by saying they would be graded by artificial intelligence. The algorithm would look at wrinkles, face symmetry, amount of pimples and blemishes, race, and perceived age.

Sounds harmless enough, aside from the whole “we’re teaching computers to objectify women” aspect.  But the results of this contest carry some troubling implications.

Of the 44 winners in the pageant, 36 (or 82%) were white.  In other words, white people were disproportionately represented among the pageant’s “winners.”  This couldn’t help but remind me of discrimination law in the legal world.  The algorithm’s beauty assessments had what lawyers would recognize as a disparate impact–that is, despite the fact that the algorithm seemed objective and non-discriminatory at first glance, it ultimately favored whites at the expense of other racial groups.

The concept of disparate impact is best known in employment law and college admissions, where a company or college can be liable for discrimination if its policies have a disproportionate negative impact on protected groups, even if the people who came up with the policy had no discriminatory intent. For example, a hypothetical engineering company might select which applicants to interview for a set of open positions using a formula that awards 1 point for a college degree in engineering, 3 points for a Master’s degree, 6 points for a doctorate, and additional points for certain prestigious fellowships.  Facially, this system appears neutral in terms of race, gender, and socioeconomic status.  But in its outcomes, it may (and probably would) end up having a disparate impact if the components of the score are things that wealthy white men are disproportionately more likely to have due to their social and economic advantages.
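To make the hypothetical concrete, here is a quick sketch of that point formula along with a rough check of how selection rates compare across groups. The applicant fields, the fellowship weighting, and the comparison against 80% of the highest group’s rate (in the spirit of the EEOC’s “four-fifths” guideline) are all illustrative assumptions.

```python
# Sketch of the hypothetical point formula above, plus a rough check of
# selection rates by group. Applicant fields, the fellowship weight, and the
# 80% comparison threshold are illustrative assumptions.
from collections import defaultdict

def score(applicant: dict) -> int:
    """Facially neutral scoring formula from the hypothetical."""
    points = 0
    if applicant["engineering_degree"]:
        points += 1
    if applicant["masters"]:
        points += 3
    if applicant["doctorate"]:
        points += 6
    points += 2 * applicant["fellowships"]   # points per fellowship are assumed
    return points

def selection_rates(applicants: list, cutoff: int) -> dict:
    """Fraction of each group scoring at or above the interview cutoff."""
    totals, selected = defaultdict(int), defaultdict(int)
    for a in applicants:
        totals[a["group"]] += 1
        if score(a) >= cutoff:
            selected[a["group"]] += 1
    return {g: selected[g] / totals[g] for g in totals}

def shows_disparate_impact(rates: dict) -> bool:
    """Flag if any group's rate falls below 80% of the highest group's rate."""
    if not rates:
        return False
    best = max(rates.values())
    return any(rate < 0.8 * best for rate in rates.values())
```

Nothing in the formula mentions race or gender, yet if the inputs correlate with social and economic advantage, the selection rates can still diverge sharply. That divergence is the disparate impact.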

The easiest way to get around this problem might be to use a quota–i.e., set aside a certain proportion of the positions for applicants from underserved minority groups and then apply the ‘objective’ test to rate applicants within each group.  But such overt quotas are also illegal (according to the Supreme Court) because they constitute disparate treatment.  What about awarding “bonus points” under the objective test to people from disadvantaged groups?  Well, that would also be disparate treatment.  Certainly, nothing prevents an employer from using race as, to borrow a phrase from Equal Protection law, a subjective “plus factor” to help ensure diversity.  But you can’t assign a specific number related to the race or gender of applicants.  The bottom line is that the law likes to keep assessments very subjective when they involve sensitive personal characteristics such as race and gender.

Which brings us back to AI.  You can have an algorithm that approximates or simulates a subjective assessment, but you still have to find a way to program that assessment into the AI–which means reducing the subjective assessment to an objective, concrete form. It would be difficult, if not impossible, to program a truly subjective set of criteria into an AI system, because a subjective algorithm is almost a contradiction in terms.

Fortunately for, the contest can probably solve its particular “disparate impact” problem without having the algorithm discriminate based on race.  The reason generated a disproportionate number of white winners is that the data sets (i.e., images of people) used to build the AI’s ‘objective’ algorithm for assessing beauty consisted primarily of images of white people.

As a result, the algorithm’s accuracy drops when it encounters images of people who don’t fit the patterns in the data set used to prime it.  To fix that, the humans just need to supply a more diverse data set–and since humans are doing that bit, the process of choosing who is included in the original data set can be subjective, even if the algorithm that uses the data set cannot be.
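As a rough illustration of that fix, here is a sketch of how the humans assembling the training data might rebalance it before the algorithm ever sees it. The grouping labels and the sampling approach are my own assumptions, not the contest’s actual method.

```python
# Minimal sketch of the fix described above: sample the training images so
# that no demographic group dominates. Labels and target counts are
# illustrative assumptions, not the contest's actual method.
import random
from collections import defaultdict

def balanced_sample(images: list, per_group: int, seed: int = 0) -> list:
    """Draw up to `per_group` images from each demographic group.

    Each item is assumed to be a dict like {"path": ..., "group": ...};
    the grouping itself is a human (and unavoidably subjective) judgment.
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for img in images:
        by_group[img["group"]].append(img)
    sample = []
    for group, items in by_group.items():
        rng.shuffle(items)
        sample.extend(items[:per_group])
    return sample
```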

For various reasons, however, it would be difficult to replicate that process in the contexts of employment, college admissions, and other socially and economically vital spheres.  I’ll be exploring this topic in greater detail in a forthcoming law practice article that should be appearing this winter.  Stay tuned!

Should AI systems in mental health settings have a duty to warn?

A brief item that I could not resist commenting on.  The Atlantic posted a fascinating story last week on a machine learning program that could help make more accurate psychiatric diagnoses.  The system described in the article is a “schizophrenia screener” that analyzes primary care patients’ speech patterns for some of the tell-tale verbal ‘tics’ that can be predictors of psychosis.  For now, as the author points out, there are many obstacles to widespread deployment of such a system, because there are so many cultural, ethnic, and other differences in speech and behavior that could throw it off.  But still, the prospect of an AI system playing a role in determining whether a person has a mental disorder raises some intriguing questions.
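To give a flavor of the idea, here is a toy stand-in for that kind of screener: it approximates how “coherent” consecutive sentences in a transcript are using simple word overlap. The actual research reportedly uses far more sophisticated semantic measures; the metric below and the notion of flagging low scores for human review are illustrative assumptions on my part.

```python
# A toy illustration of the general idea; not the screener from the Atlantic
# piece. Coherence between consecutive sentences is approximated here with a
# simple word-overlap (Jaccard) measure.
import re

def _words(sentence: str) -> set:
    return set(re.findall(r"[a-z']+", sentence.lower()))

def coherence_score(transcript: str) -> float:
    """Average word overlap between consecutive sentences, on a 0-1 scale.

    Persistently low scores could flag a transcript for human review; the
    features and thresholds a real system would use are assumptions.
    """
    sentences = [s for s in re.split(r"[.?!]+", transcript) if s.strip()]
    if len(sentences) < 2:
        return 1.0
    overlaps = []
    for a, b in zip(sentences, sentences[1:]):
        wa, wb = _words(a), _words(b)
        union = wa | wb
        overlaps.append(len(wa & wb) / len(union) if union else 0.0)
    return sum(overlaps) / len(overlaps)
```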

The lawyer in me immediately thought “could the Tarasoff rule apply to AI systems?”  For those of you who are normal, well-adjusted human beings (i.e., not lawyers), Tarasoff was a case where the California Supreme Court held that a psychiatrist could be held liable if the psychiatrist knows that a patient under his or her care poses a physical danger to someone and fails to take protective measures (e.g., by calling the police or warning the potential victim(s)).

Now granted, predicting violence is probably a much more difficult task than determining whether someone has a specific mental disorder.  But it’s certainly not out of the realm of possibility that a psychiatric AI system could be designed to analyze a patient’s history, the tone and content of the patient’s speech, and so on, and come up with a probability that the patient will commit a violent act in the near future.

Let’s say that such a violence-predicting AI system is designed for use in medical and psychiatric settings.  The system is programmed to report to a psychiatrist when it determines that the probability of violence is above a certain threshold–say 40%.  The designers set up the system so that once it makes its report, its job is done; it’s ultimately up to the psychiatrist to determine whether a real threat of violence exists and, if so, what protective measures to take.

But let’s say that the AI system determines that there is a 95% probability of violence, and that studies have shown that the system does better than even experienced human psychiatrists in predicting violence. Should the system still be designed so it can do nothing except report the probability of violence to a psychiatrist, despite the risk that the psychiatrist may not take appropriate action?  Or should AI systems have a freestanding Tarasoff-like duty to warn police?
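The design question can be stated compactly. Below is a sketch of the two alternatives in the hypothetical: a report-only system versus one that also has its own Tarasoff-like escalation path above a higher threshold. The 40% figure comes from the hypothetical above; the 95% cutoff, the function names, and the callbacks are invented for illustration, not drawn from any real product.

```python
# Sketch of the design question in the hypothetical: report-only, or a
# freestanding Tarasoff-like duty above some higher threshold? Thresholds
# and callables are illustrative.
from typing import Callable

REPORT_THRESHOLD = 0.40   # from the hypothetical: flag for the psychiatrist
WARN_THRESHOLD = 0.95     # hypothetical cutoff for a direct duty to warn

def handle_risk(probability: float,
                notify_psychiatrist: Callable[[float], None],
                warn_authorities: Callable[[float], None],
                direct_duty_to_warn: bool = False) -> None:
    """Route a predicted probability of violence under the chosen design."""
    if probability >= REPORT_THRESHOLD:
        notify_psychiatrist(probability)      # report-only design stops here
    if direct_duty_to_warn and probability >= WARN_THRESHOLD:
        warn_authorities(probability)         # Tarasoff-like escalation path
```

Whether to flip `direct_duty_to_warn` on is precisely the policy choice the post is asking about: it trades the psychiatrist’s judgment against the risk that the psychiatrist fails to act.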

Given that psychiatry is one of the more subjective fields of medicine, it will be interesting to see how the integration of AI in the mental health sector plays out.  If AI systems prove to be, on average, better than humans at making psychiatric diagnoses and assessing risks of violence, would we still want a human psychiatrist to have the final say–even though it might mean worse decisions on balance?  I have a feeling we’ll have to confront that question some day.

Digital Analogues (part 5): Lessons from Animal Law, Continued

The last post in this series on “Digital Analogues”–which explores the various areas of law that courts could use as a model for liability when AI systems cause harm–examined animal liability law.  Under traditional animal liability law, the owner of a “wild” animal is strictly liable for any injury or damage caused by that animal.  For domesticated animals, however, an owner is only liable if that particular animal had shown dangerous tendencies and the owner failed to take adequate precautions.

So what lessons might animal liability law offer for AI? Well, if we believe that AI systems are inherently risky (or if we just want to be extra cautious), we could treat all AI systems like “wild” animals and hold their owners strictly liable for harms that they cause. That would certainly encourage safety precautions, but it might also stifle innovation.  Such a blanket rule would seem particularly unfair for AI systems whose functions are so narrow that they do not present much risk to anyone. It would seem somewhat silly to impose a blanket rule that treats AlphaGo as if it is just as dangerous as an autonomous weapon system.

» Read more

Digital Analogues (Part 4): Is AI a Different Kind of Animal?

Source: David Shankbone

The last two entries in this series focused on the possibility of treating AI systems like “persons” in their own right.  As with corporations, these posts suggested, legal systems could develop a doctrine of artificial “personhood” for AI, through which AI systems would be given some of the legal rights and responsibilities that human beings have.  Of course, treating AI systems like people in the eyes of the law will be a bridge too far for many people, both inside the legal world and among the public at large.  (If you doubt that, consider that corporate personhood is a concept that goes back to the Roman Empire’s legal system, and it is still highly controversial.)

In the short-to-medium term, it is far more likely that instead of focusing on what rights and responsibilities an AI system should have, legal systems will instead focus on the responsibilities of the humans who have possession or control of such systems. From that perspective, the legal treatment of animals provides an interesting model.

» Read more

IBM’s Response to the Federal Government’s Request for Information on AI


As discussed in a prior post, the White House Office of Science and Technology Policy (OSTP) published a request for information (RFI) on AI back in June.  IBM released a response that was the subject of a very positive write-up on TechCrunch.  As the TechCrunch piece correctly notes, most of IBM’s responses were very informative and interesting.  They nicely summarize many of the key topics and concerns that are brought up regularly in the conferences I’ve attended.

But IBM’s coverage of the legal and governance implications of AI was disappointing.  Perhaps IBM was just being cautious, not wanting to say anything that could invite closer government regulation or draw the attention of plaintiffs’ lawyers, but its write-up on the subject was quite vague and somewhat off-topic.

» Read more
