Digital Analogues (Part 4): Is AI a Different Kind of Animal?

Source: David Shankbone


The last two entries in this series focused on the possibility of treating AI systems like “persons” in their own right.  As with corporations, these posts suggested, legal systems could develop a doctrine of artificial “personhood” for AI, through which AI systems would be given some of the legal rights and responsibilities that human beings have.  Of course, treating AI systems like people in the eyes of the law will be a bridge too far for many people both inside the legal world and in the public at large.  (If you doubt that, consider that corporate personhood is a concept that goes back to the Roman Empire’s legal system, and it is still highly controversial.)

In the short-to-medium term, it is far more likely that instead of focusing on what rights and responsibilities an AI system should have, legal systems will instead focus on the responsibilities of the humans who have possession or control of such systems. From that perspective, the legal treatment of animals provides an interesting model.

» Read more

IBM’s Response to the Federal Government’s Request for Information on AI

IBM's Watson computing system's visual identity is made up of electronically generated graphic compositions in which computer algorithms define the shape, texture, and motion. The visual identity provides a peek at what a computer goes through as it responds to a Jeopardy! clue. Watson’s on-stage persona shares the graphic structure and tonality of the IBM Smarter Planet logo, a symbol of the company's effort to make the world work better.


As discussed in a prior post, the White House Office of Science and Technology Policy (OSTP) published a request for information (RFI) on AI back in June.  IBM released a response that was the subject of a very positive write-up on TechCrunch.  As the TechCrunch piece correctly notes, most of IBM’s responses were very informative and interesting.  They nicely summarize many of the key topics and concerns that are brought up regularly in the conferences I’ve attended.

But IBM’s coverage of the legal and governance implications of AI was disappointing.  Perhaps the company was just being cautious because it does not want to say anything that could invite closer government regulation or draw the attention of plaintiffs’ lawyers, but its write-up on the subject was quite vague and somewhat off-topic.

» Read more

Could we be entering an AI-powered arms race in cyberwarfare?

Soon to be obsolete?


Much has been made about the possibility of AI-powered autonomous weapons becoming a factor in conventional warfare in the coming years.  But in the sphere of cyber-warfare, AI is already starting to play a major role, as laid out in an article in this week’s Christian Science Monitor.

Many nations–most notably Russia and China–already employ armies of hackers to conduct operations in the cybersphere against other countries.  The US Department of Defense’s response might be a harbinger of things to come:

[T]he allure of machines quickly fixing vulnerabilities has led the Defense Advanced Research Projects Agency (DARPA), the Defense Department’s technology lab, to organize the first-ever hacking competition that pits automated supercomputers against each other at next month’s Black Hat cybersecurity conference in Las Vegas.

With the contest, DARPA is aiming to find new ways to quickly identify and eliminate software flaws that can be exploited by hackers, says DARPA program manager Mike Walker.

“We want to build autonomous systems that can arrive at their own insights, do their own analysis, make their own risk equity decisions of when to patch and how to manage that process,” said Walker.

One of the big concerns about deploying autonomous weapon systems (AWSs) in the physical world is that it will lead to an arms race.  Starting in the Cold War, the development of more advanced missile defense systems spurred the development of more advanced missiles, which in turn led to the development of even more advanced missile defense systems, and so on.  It is easy to see how the same dynamic would play out with AWSs: because AWSs would be able to react on far shorter timescales than human soldiers, the technology may quickly reach a point where the only effective way to counter an enemy’s offensive AWS would be to deploy a defensive AWS, kickstarting a cycle of ever-more-advanced AWS development.

The fear with AWSs is that it might make human military decisionmaking obsolete, with human commanders unable to intervene quickly enough to meaningfully affect combat operations between AWSs.

The cyberwarfare arena might be a testing ground for that “AI arms race” theory.  If state-backed hackers respond to AI-powered cybersecurity systems by developing new AI-powered hacking technologies, what happens next might prove an ominous preview of what could happen someday in the world of physical warfare.

Digital Analogues (Part 3): If AI systems can be “persons,” what rights should they have?


The last segment in this series noted that corporations came into existence and were granted certain rights because society believed it would be economically and socially beneficial to do so.  There has, of course, been much push-back on that front.  Many people both inside and outside of the legal world ask whether we have given corporations too many rights and whether we treat them a little too much like people.  So what rights and responsibilities should we grant to AI systems if we decide to treat them as legal “persons” in some sense?

Uniquely in this series, this post will provide more questions than answers.  This is in part because the concept of “corporate personhood” has proven to be so malleable over the years.  Even though corporations are the oldest example of artificial “persons” in the legal world, we still have not decided with any firmness what rights and responsibilities a corporation should have.  Really, I can think of only one ground rule for legal “personhood”: “personhood” in a legal sense requires, at a minimum, the right to sue and the ability to be sued.  Beyond that, the meaning of “personhood” has proven to be pretty flexible.  That means that for the most part, we should be able to decide the rights and responsibilities included within the concept of AI personhood on a right-by-right and responsibility-by-responsibility basis.

» Read more

On Robot-Delivered Bombs

A Northrop Grumman Remotec Andros, a bomb-disposal robot similar to the one reportedly used by police to end the Dallas standoff.


“In An Apparent First, Police Used A Robot To Kill.”  So proclaimed a Friday headline on NPR’s website, referring to the method Dallas police used to end the standoff with Micah Xavier Johnson, the Army veteran who shot 12 police officers and killed five of them on Thursday night.  Johnson had holed himself up in a garage after his attack and told police negotiators that he would kill more officers in the final standoff.  As Dallas Police Chief David Brown said at a news conference on Friday morning, “[w]e saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the subject was.  Other options would have exposed our officers to grave danger.”

The media’s coverage of this incident generally has glossed over the nature of the “robot” that delivered the lethal bomb.  The robot was not an autonomous weapon system that operated free of human control, which is what most people picture when they hear the term “killer robot.”  Rather, it was a remote-controlled bomb disposal robot (that was sent, ironically, to deliver and detonate a bomb rather than to remove or defuse one).  Such a robot operates in more or less the same manner as the unmanned aerial vehicles or “drones” that have seen increasing military and civilian use in recent years.  As with drones, there is a human somewhere who controls every significant aspect of the robot’s movements.

Legally, I don’t think the use of such a remote-controlled device to deliver lethal force presents any special challenges.  Because a human is continuously in control of the robot–albeit from a remote location–the lines of legal liability are no different than if the robot’s human operator had walked over and placed the bomb himself.  I don’t think that entering the command that detonates a robot-delivered bomb is any different from a legal standpoint than a sniper pulling the trigger on his rifle.  The accountability problems that arise with autonomous weapons simply are not present when lethal force is delivered by a remote-controlled device.

» Read more

The first casualty of vehicle automation

Tesla’s Autopilot


Without a doubt, the biggest thing in the news this week from a Law and AI perspective is that a Tesla Model S driver was killed while the vehicle had Tesla’s “Autopilot” feature activated. This is the first–or at least the first widely reported–fatality caused by a vehicle that, for all practical purposes, had an AI driver in control of the vehicle. The big question seems to be whether the deceased driver misused the Autopilot feature when he gave it unfettered control over the vehicle.

First rolled out by Tesla last year, Autopilot is probably the most advanced suite of self-driving technologies to date among automobiles available to consumers.  Autopilot was made available to drivers while it was still in its real-world testing or “beta” phase.  Making products that are in “beta” available to consumers while the kinks are still getting worked out is par for the course in the tech industry.  But in the auto industry?  Not so much.  In that world, it is a ballsy move to make a system that performs safety-critical functions available to drivers on the road while pretty much explicitly admitting that it has not yet been thoroughly tested.

» Read more

The challenge of diversity in the AI world


Let me start this post with a personal anecdote.  At one of the first AI conferences I attended, literally every single one of the 15 or so speakers who presented on the conference’s first day was a man.  Finally, about 3/4 of the way through the two-day conference, a quartet of presentations on the social and economic impact of AI included two presentations by women.  Those two women also participated in the panel discussion that immediately followed the presentations–except that “participated” might be a bit strong of a word, because the panel discussion essentially consisted of the two men on the panel arguing with each other for twenty minutes.

It gave off the uncomfortable impression (to me, at least) that even when women are seen in the AI world, it should be expected that they will immediately fade into the background once someone with a Y chromosome shows up. And the ethnic and racial diversity was scarcely better–I probably could count on one hand the number of people credentialed at the conference who were not either white or Asian.

Fast forward to this past week, when the White House’s Office of Science and Technology Policy released a request for information (RFI) on the promise and potential pitfalls of AI.  A Request for Information on AI doesn’t mean that the White House only heard about AI for the first time last week and is looking for someone to send them the link to relevant articles on Wikipedia.  Rather, a request for information issued by a governmental entity is a formal call for public comment on a particular topic that the entity wishes to examine more closely.

» Read more

Digital Analogues (Part 2): Would corporate personhood be a good model for “AI personhood”?

Source: Biotwist (via Deviant Art)


This post is part of the Digital Analogues series, which examines the various types of persons or entities to which legal systems might analogize artificial intelligence (AI) systems. This post is the first of two that examine corporate personhood as a potential model for “AI personhood.”  It is cross-posted on the website of the Future of Life Institute.  Future posts will examine how AI could also be analogized to pets, wild animals, employees, children, and prisoners.


Could the legal concept of “corporate personhood” serve as a model for how legal systems treat AI?  Ever since the US Supreme Court’s Citizens United decision, corporate personhood has been a controversial topic in American political and legal discourse.  Count me in the group that thinks that Citizens United was a horrible decision and that the law treats corporations a little too much like ‘real’ people.  But I think the fundamental concept of corporate personhood is still sound.  Moreover, the historical reasons that led to the creation of “corporate personhood”–namely, the desire to encourage ambitious investments and the new technologies that come with them–hold lessons for how we may eventually decide to treat AI.

An Overview of Corporate Personhood

For the uninitiated, here is a brief and oversimplified review of how and why corporations came to be treated like “persons” in the eyes of the law.  During late antiquity and the Middle Ages, a company generally had no separate legal existence apart from its owner (or, in the case of partnerships, owners).  Because a company was essentially an extension of its owners, owners were personally liable for companies’ debts and other liabilities.  In the legal system, this meant that a plaintiff who successfully sued a company would be able to go after all of an owner’s personal assets.

This unlimited liability exposure meant that entrepreneurs were unlikely to invest in a company unless they could have a great deal of control over how that company would operate.  That, in turn, meant that companies rarely had more than a handful of owners, which made it very difficult to raise enough money for capital-intensive ventures.  When the rise of colonial empires and (especially) the Industrial Revolution created a need for larger companies capable of taking on more ambitious projects, the fact that companies had no separate legal existence and that their owners were subject to unlimited liability proved to be frustrating obstacles to economic growth.

The modern corporation was created to resolve these problems, primarily through two key features: legal personhood and limited liability.  “Personhood” means that under the law, corporations are treated like artificial persons, with a legal existence separate from their owners (shareholders).  Like natural persons (i.e., humans), corporations have the right to enter into contracts, own and dispose of assets, and file lawsuits–all in their own name.  “Limited liability” means that the owners of a corporation only stand to lose the amount of money, or capital, that they have invested in the corporation.  Plaintiffs cannot go after a corporate shareholder’s personal assets unless the shareholder engaged in unusual misconduct. Together, these features give a corporation a legal existence that is largely separate from its creators and owners.

» Read more
