Digital Analogues (Part 5): Lessons from Animal Law, Continued


The last post in this series on “Digital Analogues”–which explores the various areas of law that courts could use as a model for liability when AI systems cause harm–examined animal liability law.  Under traditional animal liability law, the owner of a “wild” animal is strictly liable for any injury or damage caused by that animal.  For domesticated animals, however, an owner is only liable if that particular animal had shown dangerous tendencies and the owner failed to take adequate precautions.

So what lessons might animal liability law offer for AI? Well, if we believe that AI systems are inherently risky (or if we just want to be extra cautious), we could treat all AI systems like “wild” animals and hold their owners strictly liable for any harm they cause. That would certainly encourage safety precautions, but it might also stifle innovation.  Such a blanket rule would seem particularly unfair for AI systems whose functions are so narrow that they pose little risk to anyone; it would be somewhat silly to treat AlphaGo as if it were just as dangerous as an autonomous weapon system.


Digital Analogues (Part 4): Is AI a Different Kind of Animal?



The last two entries in this series focused on the possibility of treating AI systems like “persons” in their own right.  As with corporations, these posts suggested, legal systems could develop a doctrine of artificial “personhood” for AI, through which AI systems would be given some of the legal rights and responsibilities that human beings have.  Of course, treating AI systems like people in the eyes of the law will be a bridge too far for many people, both inside the legal world and in the public at large.  (If you doubt that, consider that corporate personhood goes back to the Roman Empire’s legal system, and it is still highly controversial.)

In the short-to-medium term, it is far more likely that legal systems will focus not on what rights and responsibilities an AI system should have, but rather on the responsibilities of the humans who possess or control such systems.  From that perspective, the legal treatment of animals provides an interesting model.


Digital Analogues (Part 3): If AI systems can be “persons,” what rights should they have?


The last segment in this series noted that corporations came into existence and were granted certain rights because society believed it would be economically and socially beneficial to do so.  There has, of course, been much push-back on that front.  Many people both inside and outside the legal world ask whether we have given corporations too many rights and whether we treat them a little too much like people.  So what rights and responsibilities should we grant to AI systems if we decide to treat them as legal “persons” in some sense?

More than any other post in this series, this one will offer more questions than answers.  That is in part because the concept of “corporate personhood” has proven so malleable over the years.  Even though corporations are the oldest example of artificial “persons” in the legal world, we still have not decided with any firmness what rights and responsibilities a corporation should have.  Really, I can think of only one ground rule for legal “personhood”: “personhood” in a legal sense requires, at a minimum, the right to sue and the ability to be sued.  Beyond that, the meaning of “personhood” has proven to be quite flexible.  That means that, for the most part, we should be able to decide the rights and responsibilities included within the concept of AI personhood on a right-by-right and responsibility-by-responsibility basis.


Digital Analogues (Part 2): Would corporate personhood be a good model for “AI personhood”?



This post is part of the Digital Analogues series, which examines the various types of persons or entities to which legal systems might analogize artificial intelligence (AI) systems. This post is the first of two that examine corporate personhood as a potential model for “AI personhood.”  It is cross-posted on the website of the Future of Life Institute.  Future posts will examine how AI could also be analogized to pets, wild animals, employees, children, and prisoners.


Could the legal concept of “corporate personhood” serve as a model for how legal systems treat AI?  Ever since the US Supreme Court’s Citizens United decision, corporate personhood has been a controversial topic in American political and legal discourse.  Count me in the group that thinks Citizens United was a horrible decision and that the law treats corporations a little too much like ‘real’ people.  But I think the fundamental concept of corporate personhood is still sound.  Moreover, the historical reasons that led to the creation of “corporate personhood”–namely, the desire to encourage ambitious investments and the new technologies that come with them–hold lessons for how we may eventually decide to treat AI.

An Overview of Corporate Personhood

For the uninitiated, here is a brief and oversimplified review of how and why corporations came to be treated like “persons” in the eyes of the law.  During late antiquity and the Middle Ages, a company generally had no separate legal existence apart from its owner (or, in the case of partnerships, owners).  Because a company was essentially an extension of its owners, those owners were personally liable for the company’s debts and other liabilities.  In the legal system, this meant that a plaintiff who successfully sued a company could go after all of an owner’s personal assets.

This unlimited liability exposure meant that entrepreneurs were unlikely to invest in a company unless they could exercise a great deal of control over how that company operated.  That, in turn, meant that companies rarely had more than a handful of owners, which made it very difficult to raise enough money for capital-intensive ventures.  When the rise of colonial empires and (especially) the Industrial Revolution created a need for larger companies capable of taking on more ambitious projects, the lack of separate legal existence and the unlimited liability of owners proved frustrating obstacles to economic growth.

The modern corporation was created to resolve these problems, primarily through two key features: legal personhood and limited liability.  “Personhood” means that under the law, corporations are treated like artificial persons, with a legal existence separate from their owners (shareholders).  Like natural persons (i.e., humans), corporations have the right to enter into contracts, own and dispose of assets, and file lawsuits–all in their own name.  “Limited liability” means that the owners of a corporation only stand to lose the amount of money, or capital, that they have invested in the corporation.  Plaintiffs cannot go after a corporate shareholder’s personal assets unless the shareholder engaged in unusual misconduct. Together, these features give a corporation a legal existence that is largely separate from its creators and owners.


Digital Analogues (Intro): Artificial Intelligence Systems Should Be Treated Like…

This piece was originally published on Medium in Imaginary Papers, an online publication of Arizona State University’s Center for Science and the Imagination.  It is also cross-posted on the website of the Future of Life Institute.  Full credit to Corey Pressman for the title.


Artificial intelligence (A.I.) systems are becoming increasingly ubiquitous in our economy and society, and they are being designed with an ever-increasing ability to operate free of direct human supervision. Algorithmic trading systems account for a huge and still-growing share of stock market transactions, and autonomous vehicles with A.I. “drivers” are already being tested on the roads. Because these systems operate with less human supervision and control than earlier technologies, their rising prevalence raises the question of how legal systems can ensure that victims receive compensation if (read: when) an A.I. system causes physical or economic harm during the course of its operations.

An increasingly hot topic in the still-small world of people interested in the legal issues surrounding A.I. is whether an autonomous A.I. system should be treated like a “person” in the eyes of the law. In other words, should we give A.I. systems some of the rights and responsibilities normally associated with natural persons (i.e., humans)? If so, precisely what rights should be granted to A.I. systems and what responsibilities should be imposed on them? Should human actors be assigned certain responsibilities in terms of directing and supervising the actions of autonomous systems? How should legal responsibility for an A.I. system’s behavior be allocated between the system itself and its human owner, operator, or supervisor?
