Could we be entering an AI-powered arms race in cyberwarfare?

Soon to be obsolete?


Much has been made of the possibility that AI-powered autonomous weapons will become a factor in conventional warfare in the coming years.  But in the sphere of cyberwarfare, AI is already starting to play a major role, as laid out in an article in this week’s Christian Science Monitor.

Many nations–most notably Russia and China–already employ armies of hackers to conduct operations in the cybersphere against other countries.  The US Department of Defense’s response might be a harbinger of things to come:

[T]he allure of machines quickly fixing vulnerabilities has led the Defense Advanced Research Projects Agency (DARPA), the Defense Department’s technology lab, to organize the first-ever hacking competition that pits automated supercomputers against each other at next month’s Black Hat cybersecurity conference in Las Vegas.

With the contest, DARPA is aiming to find new ways to quickly identify and eliminate software flaws that can be exploited by hackers, says DARPA program manager Mike Walker.

“We want to build autonomous systems that can arrive at their own insights, do their own analysis, make their own risk equity decisions of when to patch and how to manage that process,” said Walker.

One of the big concerns about deploying autonomous weapon systems (AWSs) in the physical world is that doing so will lead to an arms race.  Starting in the Cold War, the development of more advanced missile defense systems spurred the development of more advanced missiles, which in turn led to even more advanced missile defense systems, and so on.  It is easy to see how the same dynamic would play out with AWSs: because AWSs would be able to react on far shorter timescales than human soldiers, the technology may quickly reach a point where the only effective way to counter an enemy’s offensive AWS would be to deploy a defensive AWS, kickstarting a cycle of ever-more-advanced AWS development.

The fear is that AWSs might make human military decisionmaking obsolete, with human commanders unable to intervene quickly enough to meaningfully affect combat operations between opposing AWSs.

The cyberwarfare arena might be a testing ground for that “AI arms race” theory.  If state-backed hackers respond to AI-powered cybersecurity systems by developing new AI-powered hacking technologies, what happens next might prove an ominous preview of what could happen someday in the world of physical warfare.

Digital Analogues (Part 3): If AI systems can be “persons,” what rights should they have?


The last segment in this series noted that corporations came into existence and were granted certain rights because society believed it would be economically and socially beneficial to do so.  There has, of course, been much push-back on that front.  Many people both inside and outside the legal world ask whether we have given corporations too many rights and whether we treat them a little too much like people.  So what rights and responsibilities should we grant to AI systems if we decide to treat them as legal “persons” in some sense?

Uniquely in this series, this post will provide more questions than answers.  This is in part because the concept of “corporate personhood” has proven to be so malleable over the years.  Even though corporations are the oldest example of artificial “persons” in the legal world, we still have not decided with any firmness what rights and responsibilities a corporation should have.  Really, I can think of only one ground rule for legal “personhood”: “personhood” in a legal sense requires, at a minimum, the right to sue and the ability to be sued.  Beyond that, the meaning of “personhood” has proven to be pretty flexible.  That means that, for the most part, we should be able to decide the rights and responsibilities included within the concept of AI personhood on a right-by-right and responsibility-by-responsibility basis.

» Read more

On Robot-Delivered Bombs

A Northrop Grumman Remotec Andros, a bomb-disposal robot similar to the one reportedly used by police to end the Dallas standoff.


“In An Apparent First, Police Used A Robot To Kill.”  So proclaimed a Friday headline on NPR’s website, referring to the method Dallas police used to end the standoff with Micah Xavier Johnson, the Army veteran who shot 12 police officers and killed five of them on Thursday night.  Johnson had holed himself up in a garage after his attack and told police negotiators that he would kill more officers in the final standoff.  As Dallas Police Chief David Brown said at a news conference on Friday morning, “[w]e saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the subject was.  Other options would have exposed our officers to grave danger.”

The media’s coverage of this incident generally has glossed over the nature of the “robot” that delivered the lethal bomb.  The robot was not an autonomous weapon system that operated free of human control, which is what most people picture when they hear the term “killer robot.”  Rather, it was a remote-controlled bomb disposal robot (that was sent, ironically, to deliver and detonate a bomb rather than to remove or defuse one).  Such a robot operates in more or less the same manner as the unmanned aerial vehicles or “drones” that have seen increasing military and civilian use in recent years.  As with drones, there is a human somewhere who controls every significant aspect of the robot’s movements.

Legally, I don’t think the use of such a remote-controlled device to deliver lethal force presents any special challenges.  Because a human is continuously in control of the robot–albeit from a remote location–the lines of legal liability are no different than if the robot’s human operator had walked over and placed the bomb himself.  I don’t think that entering the command that detonates a robot-delivered bomb is any different from a legal standpoint than a sniper pulling the trigger on his rifle.  The accountability problems that arise with autonomous weapons simply are not present when lethal force is delivered by a remote-controlled device.

» Read more

The first casualty of vehicle automation

Tesla’s Autopilot


Without a doubt, the biggest thing in the news this week from a Law and AI perspective is that a Tesla Model S driver was killed while the vehicle had Tesla’s “Autopilot” feature activated. This is the first–or at least the first widely reported–fatality involving a vehicle that, for all practical purposes, was under the control of an AI driver. The big question seems to be whether the deceased driver misused the Autopilot feature by giving it unfettered control over the vehicle.

First rolled out by Tesla last year, Autopilot is probably the most advanced suite of self-driving technologies available in a consumer automobile to date.  Autopilot was made available to drivers while it was still in its real-world testing or “beta” phase.  Making products that are still in “beta” available to consumers while the kinks get worked out is par for the course in the tech industry.  But in the auto industry?  Not so much.  In that world, it is a ballsy move to make a system that performs safety-critical functions available to drivers on the road while pretty much explicitly admitting that it has not yet been thoroughly tested.

» Read more

The challenge of diversity in the AI world


Let me start this post with a personal anecdote.  At one of the first AI conferences I attended, literally every single one of the 15 or so speakers who presented on the conference’s first day was a man.  Finally, about three-quarters of the way through the two-day conference, a quartet of presentations on the social and economic impact of AI included two presentations by women.  Those two women also participated in the panel discussion that immediately followed the presentations–except that “participated” might be too strong a word, because the panel discussion essentially consisted of the two men on the panel arguing with each other for twenty minutes.

It gave off the uncomfortable impression (to me, at least) that even when women are seen in the AI world, they can be expected to fade into the background as soon as someone with a Y chromosome shows up. And the ethnic and racial diversity was scarcely better–I probably could count on one hand the number of people credentialed at the conference who were not either white or Asian.

Fast forward to this past week, when the White House’s Office of Science and Technology Policy released a request for information (RFI) on the promise and potential pitfalls of AI.  A Request for Information on AI doesn’t mean that the White House only heard about AI for the first time last week and is looking for someone to send them the link to relevant articles on Wikipedia.  Rather, a request for information issued by a governmental entity is a formal call for public comment on a particular topic that the entity wishes to examine more closely.

» Read more

Digital Analogues (Part 2): Would corporate personhood be a good model for “AI personhood”?

Source: Biotwist (via Deviant Art)


This post is part of the Digital Analogues series, which examines the various types of persons or entities to which legal systems might analogize artificial intelligence (AI) systems. It is the first of two posts that examine corporate personhood as a potential model for “AI personhood,” and it is cross-posted on the website of the Future of Life Institute.  Future posts will examine how AI could also be analogized to pets, wild animals, employees, children, and prisoners.


Could the legal concept of “corporate personhood” serve as a model for how legal systems treat AI?  Ever since the US Supreme Court’s Citizens United decision, corporate personhood has been a controversial topic in American political and legal discourse.  Count me in the group that thinks that Citizens United was a horrible decision and that the law treats corporations a little too much like ‘real’ people.  But I think the fundamental concept of corporate personhood is still sound.  Moreover, the historical reasons that led to the creation of “corporate personhood”–namely, the desire to encourage ambitious investments and the new technologies that come with them–hold lessons for how we may eventually decide to treat AI.

An Overview of Corporate Personhood

For the uninitiated, here is a brief and oversimplified review of how and why corporations came to be treated like “persons” in the eyes of the law.  During late antiquity and the Middle Ages, a company generally had no separate legal existence apart from its owner (or, in the case of partnerships, owners).  Because a company was essentially an extension of its owners, those owners were personally liable for the company’s debts and other liabilities.  In the legal system, this meant that a plaintiff who successfully sued a company could go after all of an owner’s personal assets.

This unlimited liability exposure meant that entrepreneurs were unlikely to invest in a company unless they could exercise a great deal of control over how that company operated.  That, in turn, meant that companies rarely had more than a handful of owners, which made it very difficult to raise enough money for capital-intensive ventures.  When the rise of colonial empires and (especially) the Industrial Revolution created a need for larger companies capable of taking on more ambitious projects, companies’ lack of a separate legal existence and their owners’ unlimited liability proved to be frustrating obstacles to economic growth.

The modern corporation was created to resolve these problems, primarily through two key features: legal personhood and limited liability.  “Personhood” means that under the law, corporations are treated like artificial persons, with a legal existence separate from their owners (shareholders).  Like natural persons (i.e., humans), corporations have the right to enter into contracts, own and dispose of assets, and file lawsuits–all in their own name.  “Limited liability” means that the owners of a corporation only stand to lose the amount of money, or capital, that they have invested in the corporation.  Plaintiffs cannot go after a corporate shareholder’s personal assets unless the shareholder engaged in unusual misconduct. Together, these features give a corporation a legal existence that is largely separate from its creators and owners.

» Read more

Digital Analogues (Intro): Artificial Intelligence Systems Should Be Treated Like…

This piece was originally published on Medium in Imaginary Papers, an online publication of Arizona State University’s Center for Science and the Imagination.  It is also cross-posted on the website of the Future of Life Institute.  Full credit to Corey Pressman for the title.


Artificial intelligence (A.I.) systems are becoming increasingly ubiquitous in our economy and society, and they are being designed with an ever-increasing ability to operate free of direct human supervision. Algorithmic trading systems account for a huge and still-growing share of stock market transactions, and autonomous vehicles with A.I. “drivers” are already being tested on the roads. Because these systems operate with less human supervision and control than earlier technologies, their rising prevalence raises the question of how legal systems can ensure that victims receive compensation if (read: when) an A.I. system causes physical or economic harm during the course of its operations.

An increasingly hot topic in the still-small world of people interested in the legal issues surrounding A.I. is whether an autonomous A.I. system should be treated like a “person” in the eyes of the law. In other words, should we give A.I. systems some of the rights and responsibilities normally associated with natural persons (i.e., humans)? If so, precisely what rights should be granted to A.I. systems and what responsibilities should be imposed on them? Should human actors be assigned certain responsibilities in terms of directing and supervising the actions of autonomous systems? How should legal responsibility for an A.I. system’s behavior be allocated between the system itself and its human owner, operator, or supervisor?

» Read more

Doctors and Lawyers: There’s an AI app for that (but not really)

Source: twentysomethinglawyer.wordpress.com

Over the past few weeks, a series of stories has suggested that AI is breaking into two of the world’s most venerable professions: law and medicine. A couple of weeks ago, news outlets reported that a major law firm had hired an AI-based “lawyer,” and the Daily Mail ran a story this weekend on a new health app called Check, declaring: “It’s man versus robot in the battle of the doctors: World’s first ‘artificial intelligence’ medic set to be pitted against the real thing in landmark experiment for medicine.”  As always, the media headlines make these technologies sound much more impressive than they actually are.  Both of these technologies sound like more convenient versions of existing tools that doctors, lawyers, and non-professionals alike already use on a daily basis.

» Read more

Notes from the 2016 Governance of Emerging Technologies Conference

Source: Frank Cotham/The New Yorker


This past week, I attended the fourth annual Governance of Emerging Technologies Conference at Arizona State’s Sandra Day O’Connor School of Law.  The symposium’s format included a number of sessions that ran concurrently, so I ended up having to miss several presentations that I wanted to see.  But the ones I did manage to catch were very informative.  Here are some thoughts.

The conference was a sobering reminder of why AI is not a major topic on the agenda of governments and international organizations around the world: there are a whole lot of emerging technologies posing new ethical questions and creating new sources of risk.  Nanotechnology, bioengineering, and the “Internet of Things” are all raising new issues that policymakers must analyze.  To make matters worse, governments the world over are not even acting with the necessary urgency on comparatively longstanding sources of catastrophic risk such as climate change, global financial instability, political and social instability in the Middle East, and both civil and military nuclear security.  So it shouldn’t be surprising that AI is not at the top of the agenda in Washington, Brussels, Beijing, or anywhere else outside Silicon Valley, and there is no obvious way to make AI writ large a higher policy priority in the immediate future without engaging in disingenuous scaremongering.

» Read more
