Could we be entering an AI-powered arms race in cyberwarfare?

Soon to be obsolete?


Much has been made of the possibility that AI-powered autonomous weapons will become a factor in conventional warfare in the coming years.  But in the sphere of cyberwarfare, AI is already starting to play a major role, as laid out in an article in this week’s Christian Science Monitor.

Many nations–most notably Russia and China–already employ armies of hackers to conduct operations in the cybersphere against other countries.  The US Department of Defense’s response might be a harbinger of things to come:

[T]he allure of machines quickly fixing vulnerabilities has led the Defense Advanced Research Projects Agency (DARPA), the Defense Department’s technology lab, to organize the first-ever hacking competition that pits automated supercomputers against each other at next month’s Black Hat cybersecurity conference in Las Vegas.

With the contest, DARPA is aiming to find new ways to quickly identify and eliminate software flaws that can be exploited by hackers, says DARPA program manager Mike Walker.

“We want to build autonomous systems that can arrive at their own insights, do their own analysis, make their own risk equity decisions of when to patch and how to manage that process,” said Walker.

One of the big concerns about deploying autonomous weapon systems (AWSs) in the physical world is that it will lead to an arms race.  Starting in the Cold War, the development of more advanced missile defense systems spurred the development of more advanced missiles, which in turn led to even more advanced missile defense systems, and so on.  It is easy to see how the same dynamic would play out with AWSs: because AWSs would be able to react on far shorter timescales than human soldiers, the technology could quickly reach a point where the only effective way to counter an enemy’s offensive AWS would be to deploy a defensive AWS, kickstarting a cycle of ever-more-advanced AWS development.

The fear is that AWSs might make human military decision-making obsolete, with human commanders unable to intervene quickly enough to meaningfully affect combat operations between AWSs.

The cyberwarfare arena might be a testing ground for that “AI arms race” theory.  If state-backed hackers respond to AI-powered cybersecurity systems by developing new AI-powered hacking technologies, what happens next might prove an ominous preview of what could happen someday in the world of physical warfare.

On Robot-Delivered Bombs

A Northrop Grumman Remotec Andros, a bomb-disposal robot similar to the one reportedly used by police to end the Dallas standoff.


“In An Apparent First, Police Used A Robot To Kill.”  So proclaimed a Friday headline on NPR’s website, referring to the method Dallas police used to end the standoff with Micah Xavier Johnson, the Army veteran who shot 12 police officers and killed five of them on Thursday night.  Johnson had holed himself up in a garage after his attack and told police negotiators that he would kill more officers in the final standoff.  As Dallas Police Chief David Brown said at a news conference on Friday morning, “[w]e saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the subject was.  Other options would have exposed our officers to grave danger.”

The media’s coverage of this incident generally has glossed over the nature of the “robot” that delivered the lethal bomb.  The robot was not an autonomous weapon system that operated free of human control, which is what most people picture when they hear the term “killer robot.”  Rather, it was a remote-controlled bomb disposal robot (that was sent, ironically, to deliver and detonate a bomb rather than to remove or defuse one).  Such a robot operates in more or less the same manner as the unmanned aerial vehicles or “drones” that have seen increasing military and civilian use in recent years.  As with drones, there is a human somewhere who controls every significant aspect of the robot’s movements.

Legally, I don’t think the use of such a remote-controlled device to deliver lethal force presents any special challenges.  Because a human is continuously in control of the robot–albeit from a remote location–the lines of legal liability are no different than if the robot’s human operator had walked over and placed the bomb himself.  I don’t think that entering the command that detonates a robot-delivered bomb is any different from a legal standpoint than a sniper pulling the trigger on his rifle.  The accountability problems that arise with autonomous weapons simply are not present when lethal force is delivered by a remote-controlled device.

» Read more

Who’s to Blame (Part 5): A Deeper Look at Predicting the Actions of Autonomous Weapons

Source: Dilbert by Scott Adams, 2011-03-06


An autonomous weapon system (AWS) is designed and manufactured in a collaborative project between American and Indian defense contractors. It is sold to numerous countries around the world. This model of AWS is successfully deployed in conflicts in Latin America, the Caucasus, and Polynesia without violating the laws of war. An American Lt. General then orders that 50 of these units be deployed during a conflict in the Persian Gulf for use in ongoing urban combat in several cities. One of those units had previously seen action in urban combat in the Caucasus and desert combat during the same Persian Gulf conflict, all without incident. A Major makes the decision to deploy that AWS unit to assist a platoon engaged in block-to-block urban combat in Sana’a. Once the AWS unit is on the ground, a Lieutenant is responsible for telling the AWS where to go. The Lt. General, the Major, and the Lieutenant all had previous experience using this model of AWS and had given similar orders to such units in prior combat situations without incident.

The Lieutenant has lost several men to enemy snipers over the past several weeks. He orders the AWS to accompany one of the squads under his command and preemptively strike any enemy sniper nests it detects–again, an order he had given to other AWS units before without incident. This time, the AWS unit misidentifies a nearby civilian house as containing a sniper nest, based on the fact that houses with similar features had frequently been used as sniper nests in the Caucasus conflict. It launches an RPG at the house. There are no snipers inside, but there are 10 civilians–all of whom are killed by the RPG. Human soldiers who had been fighting in the area would have known that this particular house was unlikely to contain a sniper nest, because glare from the sun off a nearby glass building reduces visibility on that side of the street at the times of day when American soldiers typically patrol the area–a fact that the human soldiers knew well from prior combat in the area, but a variable that the AWS had not been programmed to take into consideration.
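To make the failure mode in this hypothetical concrete, here is a minimal, purely illustrative sketch of a toy “sniper nest” scorer. It is not drawn from any real targeting system; every feature name, weight, and threshold is an invented assumption. The point is simply that such a system scores only the variables it was built to see, so a variable nobody anticipated–like the glare that makes this position useless to a sniper–cannot change its output.

```python
# Illustrative sketch only: a toy "sniper nest" scorer. All feature names,
# weights, and the threshold are invented for the sake of the example.

FEATURE_WEIGHTS = {
    "elevated_window": 0.40,          # hypothetical cue learned from prior Caucasus engagements
    "line_of_sight_to_street": 0.35,
    "recent_muzzle_flash": 0.25,
}
ENGAGE_THRESHOLD = 0.60               # hypothetical decision threshold


def sniper_nest_score(observed: dict) -> float:
    """Sum the weights of whichever modeled features the sensors report."""
    return sum(weight for name, weight in FEATURE_WEIGHTS.items()
               if observed.get(name, False))


# The house in the hypothetical: it resembles past sniper nests on every
# feature the system actually models, so it clears the threshold.
house = {
    "elevated_window": True,
    "line_of_sight_to_street": True,
    "recent_muzzle_flash": False,
}
print(sniper_nest_score(house) >= ENGAGE_THRESHOLD)  # True -> "engage"

# But "afternoon glare makes this position useless to a sniper" is not a
# feature the scorer was ever given, so no observation can flip the result.
```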

In my most recent post for FLI on autonomous weapons, I noted that it would be difficult for humans to predict the actions of autonomous weapon systems (AWSs) programmed with machine learning capabilities.  If the military commanders responsible for deploying AWSs were unable to reliably foresee how the AWS would operate on the battlefield, it would be difficult to hold those commanders responsible if the AWS violates the law of armed conflict (LOAC).  And in the absence of command responsibility, it is not clear whether any human could be held responsible under the existing LOAC framework.

A side comment from a lawyer on Reddit made me realize that my reference to “foreseeability” requires a bit more explanation.  “Foreseeability” is one of those terms that makes lawyers’ ears perk up when they hear it because it’s a concept that every American law student encounters when learning the principles of negligence in their first-year class on Tort Law.

» Read more

Who’s to Blame (Part 4): Who’s to Blame if an Autonomous Weapon Breaks the Law?



The previous entry in this series examined why it would be very difficult to ensure that autonomous weapon systems (AWSs) consistently comply with the laws of war.  So what would happen if an attack by an AWS resulted in the needless death of civilians or otherwise constituted a violation of the laws of war?  Who would be held legally responsible?

In that regard, AWSs’ ability to operate free of human direction, monitoring, and control would raise legal concerns not shared by drones and other earlier generations of military technology.  It is not clear who, if anyone, could be held accountable if and when AWS attacks result in illegal harm to civilians and their property.  This “accountability gap” was the focus of a 2015 Human Rights Watch report.  The HRW report ultimately concluded that there was no plausible way to resolve the accountability issue and therefore called for a complete ban on fully autonomous weapons.

Although some commentators have taken issue with this prescription, the diagnosis seems to be correct: it simply is not clear who could be held responsible if an AWS commits an illegal act.  This accountability gap exists because AWSs would incorporate AI technology that allows them to collect information and determine their own courses of action based on the conditions in which they operate.  It is unlikely that even the most careful human programmers could predict the nearly infinite range of on-the-ground circumstances that an AWS could face.  It would therefore be difficult for an AWS designer–to say nothing of its military operators–to foresee how the AWS would react in the fluid, fast-changing world of combat operations.  The inability to foresee an AWS’s actions would complicate the assignment of legal responsibility.

» Read more

On subjectivity, AI systems, and legal decision-making

Source: Dilbert


The latest entry in my series of posts on autonomous weapon systems (AWSs) suggested that it would be exceedingly difficult to ensure that AWSs complied with the laws of war.  A key reason for this difficulty is that the laws of war depend heavily on subjective determinations.  One might easily expand this point and argue that AI systems cannot–or should not–make any decisions that require interpreting or applying law because such legal determinations are inherently subjective.

Ever the former judicial clerk, I can’t resist pausing for a moment to define my terms.  “Subjective” can have subtly different meanings depending on the context.  Here, I’m using the term to mean something that is a matter of opinion rather than a matter of fact.  In law, I would say that identifying what words are used in the Second Amendment is an objective matter; discerning what those words mean is a subjective matter.  All nine justices who decided DC v. Heller (and indeed, anyone with access to an accurate copy of the Bill of Rights) agreed that the Second Amendment reads: “A well regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.”  They disagreed quite sharply about what those words mean and how they relate to each other.  (Legal experts even disagree on what the commas in the Second Amendment mean).

Given that definition of “subjective,” here are some observations.

» Read more

Who’s to Blame (Part 3): Could Autonomous Weapon Systems Navigate the Law of Armed Conflict?

“Robots won’t commit war crimes. We just have to program them to follow the laws of war.” This is a rather common response to the concerns surrounding autonomous weapons, and it has even been advanced as a reason that robot soldiers might be less prone to war crimes than human soldiers. But designing such autonomous weapon systems (AWSs) is far easier said than done. True, if we could design and program AWSs that always obeyed the international law of armed conflict (LOAC), then the concerns raised in the previous segment of this series–which suggested the need for human direction, monitoring, and control of AWSs–would be unfounded. But even if such programming prowess is possible, it seems unlikely to be achieved anytime soon. Instead, we need to be prepared for powerful AWSs that may not recognize where the blurry line between legal and illegal conduct in combat lies.

While the basic LOAC principles seem straightforward at first glance, their application in any given military situation depends heavily on the specific circumstances in which combat takes place, and the difference between legal and illegal acts can be blurry and subjective. It would therefore be difficult to reduce the laws and principles of armed conflict to a definite, programmable form that could be encoded into an AWS and from which the AWS could consistently make battlefield decisions that comply with the laws of war.
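To illustrate why “just program in the rules” understates the problem, here is a minimal, hypothetical sketch of what a naive encoding of the proportionality principle might look like. Nothing in it comes from any real system; the scores and the threshold are invented. The rule itself is trivial to write down–the subjective work of estimating military advantage, predicting civilian harm, and deciding when the latter is “excessive” is simply pushed into the inputs, which is exactly where the legal judgment lives.

```python
from dataclasses import dataclass


@dataclass
class StrikeOption:
    """Hypothetical description of one candidate strike (illustrative only)."""
    target_id: str
    expected_military_advantage: float  # scored by whom, on what scale?
    expected_civilian_harm: float       # estimated from what data, with what error?


# A made-up constant: "excessive" harm reduced to a single magic ratio.
EXCESSIVENESS_RATIO = 2.0


def naive_proportionality_check(option: StrikeOption) -> bool:
    """Reduces LOAC proportionality to an arithmetic comparison.

    The comparison is easy to program; the hard, subjective judgments are
    hidden inside the two numbers and the threshold fed into it.
    """
    if option.expected_military_advantage <= 0:
        return False  # no anticipated military advantage, so no lawful basis
    ratio = option.expected_civilian_harm / option.expected_military_advantage
    return ratio < EXCESSIVENESS_RATIO


# The function always returns a crisp yes/no, but the answer is only as
# meaningful as the subjective estimates behind its inputs.
option = StrikeOption("bldg-14", expected_military_advantage=3.0,
                      expected_civilian_harm=5.0)
print(naive_proportionality_check(option))  # True, if you trust the numbers
```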

» Read more

Who’s to Blame (Part 2): What is an “autonomous” weapon?

Source: Peanuts by Charles Schulz, via @GoComics

Before turning in greater detail to the legal challenges that autonomous weapon systems (AWSs) will present, it is essential to define what “autonomous” means in the weapons context.  It is, after all, the presence of “autonomy” that will distinguish AWSs from earlier weapon technologies.

Most dictionary definitions of “autonomy” focus on the presence of free will or freedom of action.  These are affirmative definitions, stating what autonomy is.  Some dictionary definitions approach autonomy from a different angle, defining it not by the presence of freedom of action, but rather by the absence of external constraints on that freedom (e.g., “the state of existing or acting separately from others”).  This latter approach is more useful in the context of weapon systems, since the existing literature on AWSs seems to use the term “autonomous” to refer to a weapon system’s ability to operate free from human influence and involvement.

Existing AWS commentaries seem to focus on three general methods by which humans can govern an AWS’s actions.  This essay will refer to those methods as direction, monitoring, and control.  A weapon system’s “autonomy” therefore refers to the degree to which the weapon system operates free from human direction, monitoring, and/or control.

Human direction, in this context, refers to the extent to which humans specify the parameters of a weapon system’s operation, from the initial design and programming of the system all the way to battlefield orders regarding the selection of targets and the timing and method of attack.  Monitoring refers to the degree to which humans actively observe and collect information on a weapon system’s operations, whether through a live source such as a video feed or through regular reviews of data about those operations.  And control is the degree to which humans can intervene in real time to change what a weapon system is currently doing, such as by actively directing the system’s physical movement and combat functions or by shutting the machine down if it malfunctions.
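For readers who think in code, here is one way the three dimensions might be represented schematically. This is only a sketch of the taxonomy described above, not a claim about how any real system is classified; the involvement levels and the two example profiles are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Involvement(Enum):
    """Rough, hypothetical scale of human involvement along one dimension."""
    NONE = 0
    PARTIAL = 1
    CONTINUOUS = 2


@dataclass
class HumanGovernance:
    """The three dimensions discussed above, applied to a single weapon system."""
    direction: Involvement   # how specifically humans set targets, timing, and method
    monitoring: Involvement  # whether humans observe its operation (e.g., a live feed)
    control: Involvement     # whether humans can intervene or shut it down in real time


# A teleoperated bomb-disposal robot: a human governs every dimension continuously.
teleoperated_robot = HumanGovernance(
    direction=Involvement.CONTINUOUS,
    monitoring=Involvement.CONTINUOUS,
    control=Involvement.CONTINUOUS,
)

# A notional AWS given only mission-level orders: "autonomy" here just means
# that one or more of these dimensions drops toward NONE.
notional_aws = HumanGovernance(
    direction=Involvement.PARTIAL,
    monitoring=Involvement.NONE,
    control=Involvement.NONE,
)
```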

» Read more