The last post in this series on “Digital Analogues” (which explores the various areas of law that courts could use as models for liability when AI systems cause harm) examined animal liability law. Under traditional animal liability law, the owner of a “wild” animal is strictly liable for any injury or damage caused by that animal. For domesticated animals, by contrast, an owner is liable only if that particular animal had previously shown dangerous tendencies and the owner failed to take adequate precautions.
So what lessons might animal liability law offer for AI? Well, if we believe that AI systems are inherently risky (or if we simply want to be extra cautious), we could treat all AI systems like “wild” animals and hold their owners strictly liable for the harms they cause. That would certainly encourage safety precautions, but it might also stifle innovation. Such a blanket rule would seem particularly unfair for AI systems whose functions are so narrow that they pose little risk to anyone. It would seem somewhat silly to impose a rule that treats AlphaGo as if it were just as dangerous as an autonomous weapon system.