Digital Analogues (part 5): Lessons from Animal Law, Continued


The last post in this series on “Digital Analogues”–which explores the various areas of law that courts could use as a model for liability when AI systems cause harm–examined animal liability law.  Under traditional animal liability law, the owner of a “wild” animal is strictly liable for any injury or damage caused by that animal.  For domesticated animals, however, an owner is only liable if that particular animal had shown dangerous tendencies and the owner failed to take adequate precautions.

So what lessons might animal liability law offer for AI? Well, if we believe that AI systems are inherently risky (or if we just want to be extra cautious), we could treat all AI systems like “wild” animals and hold their owners strictly liable for any harm they cause. That would certainly encourage safety precautions, but it might also stifle innovation.  Such a blanket rule would seem particularly unfair for AI systems whose functions are so narrow that they present little risk to anyone; it would be silly to treat AlphaGo as if it were just as dangerous as an autonomous weapon system.

(Then again, even AI systems with a seemingly innocuous purpose could pose a risk to others if humans do not take appropriate safety precautions. As Russell and Norvig have written, “even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards.”)

So maybe the better approach would be to treat different classes of AI systems as akin to different species of animals. The classification of a system as “wild” or “domesticated” could be based on its intended function–perhaps an autonomous weapon system is inherently dangerous, but a computerized tennis coach is not. Or it could be based on the operational history of AI systems with similar software and hardware–if a particular type of AI system has a long, proven track record of safe operation, it could be declared “domesticated” and its owners would be subject to relaxed standards of liability.

The most obvious difference between animals and AI systems is that animals do not have human designers and manufacturers.  In the AI context, I suspect legal systems will look to those designers and manufacturers–rather than the owners and operators of AI systems–as the main targets for liability when AI systems cause harm.

But the advent of machine learning means that the owners and operators of AI systems may have a stronger influence on how an AI system operates–just as an owner’s behavior toward and training of an animal will influence how that animal behaves.  In that sense, AI systems do have more in common with animals than they do with, say, a toaster.

In fact, AI systems already have the capacity to learn certain tasks even better than humans can, as the success of AI systems at chess and Go demonstrates.  That will likely be true of an increasing number of fields going forward–I wouldn’t be surprised if AI systems 20 years from now make more reliable medical diagnoses than trained physicians. Unlike animals, AI systems will likely be working side-by-side with humans in many fields, performing complex tasks.  In that sense, maybe a more appropriate analogue for AI is the way legal systems treat employees.  That will be the subject of the next segment in this series.
