Should AI systems in mental health settings have a duty to warn?


A brief item that I could not resist commenting on.  The Atlantic posted a fascinating story last week on a machine learning program that could help make psychiatric diagnoses more accurate.  The system described in the story is a “schizophrenia screener” that analyzes primary care patients’ speech patterns for some of the tell-tale verbal ‘tics’ that can be a predictor of psychosis.  For now, as the author points out, widespread deployment of such a system faces real obstacles: cultural, ethnic, and other differences in speech and behavior could easily throw the system off.  But still, the prospect of an AI system playing a role in determining whether a person has a mental disorder raises some intriguing questions.
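For readers curious what “analyzing speech patterns” might look like in practice, here is a minimal sketch in Python. Everything in it is invented for illustration: the features, the weights, and the threshold are toy stand-ins, not the method described in the Atlantic piece or any real clinical screener.

```python
# Hypothetical sketch of a speech-pattern screener: score a transcript on a few
# crude linguistic features and flag it for clinician review above a threshold.
# Feature names, weights, and the threshold are all invented for illustration.
import re

def coherence_features(transcript: str) -> dict:
    """Compute a few toy proxies for the kinds of verbal 'tics' the article mentions."""
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words = transcript.lower().split()
    avg_sentence_len = len(words) / max(len(sentences), 1)
    # Lexical diversity: unique words / total words (low diversity as a crude proxy)
    diversity = len(set(words)) / max(len(words), 1)
    # Disfluency rate: how often common filler words appear
    fillers = sum(words.count(f) for f in ("um", "uh", "like"))
    return {
        "avg_sentence_len": avg_sentence_len,
        "lexical_diversity": diversity,
        "filler_rate": fillers / max(len(words), 1),
    }

def flag_for_review(transcript: str, threshold: float = 0.5) -> bool:
    """Return True if the (made-up) risk score exceeds the threshold."""
    f = coherence_features(transcript)
    # Invented linear combination standing in for a trained model.
    score = 0.6 * (1 - f["lexical_diversity"]) + 0.4 * f["filler_rate"]
    return score > threshold
```

The point of the sketch is only that such a system reduces a patient’s speech to a score and a cutoff, which is exactly why the cultural and linguistic differences mentioned above can skew its output.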

The lawyer in me immediately thought “could the Tarasoff rule apply to AI systems?”  For those of you who are normal, well-adjusted human beings (i.e., not lawyers), Tarasoff was a case where the California Supreme Court held that a psychiatrist could be held liable if the psychiatrist knows that a patient under his or her care poses a physical danger to someone and fails to take protective measures (e.g., by calling the police or warning the potential victim(s)).

Now granted, predicting violence is probably a much more difficult task than determining whether someone has a specific mental disorder.  But it’s certainly not out of the realm of possibility that a psychiatric AI system could be designed that analyzes a patient’s history, the tone and content of a patient’s speech, and so on, and comes up with a probability that the patient will commit a violent act in the near future.

Let’s say that such a violence-predicting AI system is designed for use in medical and psychiatric settings.  The system is programmed to report to a psychiatrist when it determines that the probability of violence is above a certain threshold–say 40%.  The designers set up the system so that once it makes its report, its job is done; it’s ultimately up to the psychiatrist to determine whether a real threat of violence exists and, if so, what protective measures to take.

But let’s say that the AI system determines that there is a 95% probability of violence, and that studies have shown that the system does better than even experienced human psychiatrists in predicting violence. Should the system still be designed so it can do nothing except report the probability of violence to a psychiatrist, despite the risk that the psychiatrist may not take appropriate action?  Or should AI systems have a freestanding Tarasoff-like duty to warn police?
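To make the contrast concrete, here is a minimal sketch of the two designs, with the thresholds, function names, and actions all hypothetical: the report-only system described above, and a variant with a freestanding Tarasoff-like escalation once the predicted probability is high enough.

```python
# Two hypothetical escalation policies for a violence-predicting system.
REPORT_THRESHOLD = 0.40       # notify the treating psychiatrist (the baseline design above)
DIRECT_WARN_THRESHOLD = 0.95  # hypothetical Tarasoff-style escalation threshold

def handle_prediction(p_violence: float, allow_direct_warning: bool = False) -> list:
    """Return the actions the system takes for a given predicted probability of violence."""
    actions = []
    if p_violence >= REPORT_THRESHOLD:
        actions.append("report to psychiatrist")  # psychiatrist retains final judgment
    if allow_direct_warning and p_violence >= DIRECT_WARN_THRESHOLD:
        actions.append("warn police / potential victim")  # the system acts on its own
    return actions

# Baseline design: even at 95%, the system only reports.
print(handle_prediction(0.95))                              # ['report to psychiatrist']
# Alternative design: the system itself takes protective measures.
print(handle_prediction(0.95, allow_direct_warning=True))   # both actions
```

The legal question is essentially whether the second branch should exist at all, and if so, who bears responsibility when it fires (or fails to).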

Given that psychiatry is one of the more subjective fields of medicine, it will be interesting to see how the integration of AI in the mental health sector plays out.  If AI systems prove to be, on average, better than humans at making psychiatric diagnoses and assessing risks of violence, would we still want a human psychiatrist to have the final say–even though it might mean worse decisions on balance?  I have a feeling we’ll have to confront that question some day.

Digital Analogues (part 5): Lessons from Animal Law, Continued


The last post in this series on “Digital Analogues”–which explores the various areas of law that courts could use as a model for liability when AI systems cause harm–examined animal liability law.  Under traditional animal liability law, the owner of a “wild” animal is strictly liable for any injury or damage caused by that animal.  For domesticated animals, however, an owner is only liable if that particular animal had shown dangerous tendencies and the owner failed to take adequate precautions.

So what lessons might animal liability law offer for AI? Well, if we believe that AI systems are inherently risky (or if we just want to be extra cautious), we could treat all AI systems like “wild” animals and hold their owners strictly liable for harms that they cause. That would certainly encourage safety precautions, but it might also stifle innovation.  Such a blanket rule would seem particularly unfair for AI systems whose functions are so narrow that they pose little risk to anyone; it would be somewhat silly to treat AlphaGo as if it were just as dangerous as an autonomous weapon system.

» Read more

Digital Analogues (Part 4): Is AI a Different Kind of Animal?

Source: David Shankbone


The last two entries in this series focused on the possibility of treating AI systems like “persons” in their own right.  As with corporations, these posts suggested, legal systems could develop a doctrine of artificial “personhood” for AI, through which AI systems would be given some of the legal rights and responsibilities that human beings have.  Of course, treating AI systems like people in the eyes of the law will be a bridge too far for many people, both inside the legal world and in the public at large.  (If you doubt that, consider that corporate personhood is a concept that goes back to the Roman Empire’s legal system, and it is still highly controversial.)

In the short-to-medium term, it is far more likely that instead of focusing on what rights and responsibilities an AI system should have, legal systems will instead focus on the responsibilities of the humans who have possession or control of such systems. From that perspective, the legal treatment of animals provides an interesting model.

» Read more

IBM’s Response to the Federal Government’s Request for Information on AI

Image: IBM’s Watson computing system, whose on-stage visual identity shares the graphic structure and tonality of the IBM Smarter Planet logo.


As discussed in a prior post, the White House Office of Science and Technology Policy (OSTP) published a request for information (RFI) on AI back in June.  IBM released a response that was the subject of a very positive write-up on TechCrunch.  As the TechCrunch piece correctly notes, most of IBM’s responses were very informative and interesting.  They nicely summarize many of the key topics and concerns that are brought up regularly in the conferences I’ve attended.

But IBM’s coverage of the legal and governance implications of AI was disappointing.  Perhaps the company was just being cautious because it doesn’t want to say anything that could invite closer government regulation or draw the attention of plaintiffs’ lawyers, but its write-up on the subject was quite vague and somewhat off-topic.

» Read more