Who’s to Blame (Part 2): What is an “autonomous” weapon?
Before turning in greater detail to the legal challenges that autonomous weapon systems (AWSs) will present, it is essential to define what “autonomous” means in the weapons context. It is, after all, the presence of “autonomy” that will distinguish AWSs from earlier weapon technologies.
Most dictionary definitions of “autonomy” focus on the presence of free will or freedom of action. These are affirmative definitions, stating what autonomy is. Some dictionary definitions approach autonomy from a different angle, defining it not by the presence of freedom of action, but rather by the absence of external constraints on that freedom (e.g., “the state of existing or acting separately from others”). This latter approach is more useful in the context of weapon systems, since the existing literature on AWSs seems to use the term “autonomous” as referring to a weapon system’s ability to operate free from human influence and involvement.
Existing AWS commentaries seem to focus on three general methods by which humans can govern an AWS’s actions. This essay will refer to those methods as direction, monitoring, and control. A weapon system’s “autonomy” therefore refers to the degree to which the weapon system operates free from human direction, monitoring, and/or control.
Human direction, in this context, refers to the extent to which humans specify the parameters of a weapon system’s operation, from the initial design and programming of the system all the way to battlefield orders regarding the selection of targets and the timing and method of attack. Monitoring refers to the degree to which humans actively observe and collect information on a weapon system’s operations, whether through a live source such as a video feed or through regular reviews of data regarding a weapon system’s operations. And control is the degree to which humans can intervene in real time to change what a weapon system is currently doing, such as by actively controlling the system’s physical movement and combat functions or by shutting the machine down if the system malfunctions.
Existing commentaries on “autonomy” in weapon systems all seem to invoke at least one of these three concepts, though they may use different words to refer to those concepts.
The operation of modern military drones such as the MQ-1 Predator and MQ-9 Reaper illustrates how these concepts work in practice. A Predator or Reaper will not take off, select a target, or launch a missile without direct human input. Such drones thus are completely dependent on human direction. While a drone, like a commercial airliner on auto-pilot, may steer itself during non-mission-critical phases of flight, human operators closely monitor the drone throughout each mission, both through live video feeds from cameras mounted on the drone and through flight data transmitted by the drone in real time. And, of course, humans directly (though remotely) control the drone during all mission-critical phases. Indeed, if the communications link that allows the human operator to control the drone fails, “the drone is programmed to fly autonomously in circles, or return to base, until the link can be reconnected.” The dominating presence of human direction, monitoring, and control means that a drone is, in effect, “little more than a super-fancy remote-controlled plane.” The human-dependent nature of drones makes the task of piloting a drone highly stressful and labor-intensive, so much so that recruitment and retention of drone pilots has proven to be a major challenge for the U.S. Air Force. That, of course, is part of why militaries might be tempted to design and deploy weapon systems that can direct themselves and/or that do not require constant human monitoring or control.
Direction, monitoring, and control are very much interrelated, with monitoring and control being especially intertwined. During an active combat mission, human monitoring must be accompanied by human control (and vice versa) to act as an effective check on a weapon system’s operations. (For that reason, commentators often seem to combine monitoring and control into a single broader concept, such as “oversight” or, my preferred term, “supervision.”) Likewise, direction is closely related to control; an AWS could not be given new orders (i.e., direction) by a human commander if the AWS were not equipped with mechanisms allowing for human control of its operations. Such an AWS would only be human-directed in terms of its initial programming.
Particularly strong human direction can also reduce the need for monitoring and control, and vice versa. A weapon system that is subject to complete human direction in terms of the target, timing, and method of attack (and that has no ability to alter those parameters) has no more autonomy than fire-and-forget guided missiles, a technology that has been available for decades. And a weapon system subject to constant real-time human monitoring and control may have no more practical autonomy than the remotely piloted drones that are already in widespread military use.
Consequently, the strongest concerns relate to weapon systems that are “fully autonomous,” that is, weapon systems that can select and engage targets without specific orders from a human commander and operate without real-time human supervision. A 2015 Human Rights Watch (HRW) report, for instance, defines “fully autonomous weapons” as systems that lack meaningful human direction regarding the selection of targets and delivery of force and whose human supervision is so limited that humans are effectively “out-of-the-loop.” A directive issued by the United States Department of Defense (DoD) in 2012 similarly defines an AWS as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.”
These sources also recognize the existence of weapon systems with lower levels of autonomy. The DoD directive covers “semi-autonomous weapons systems” that are “intended to only engage individual targets or specific target groups that have been selected by a human operator.” Such systems must be human-directed in terms of target selection, but may be largely free from human supervision and may even be self-directed with respect to the means and timing of attack. The same directive discusses “human-supervised” AWSs that, while capable of fully autonomous operation, are “designed to provide human operators with the ability to intervene and terminate engagements.” HRW similarly distinguishes fully autonomous weapons from those with a human “on the loop,” meaning AWSs that “can select targets and deliver force under the oversight of a human operator who can override the robots’ actions.”
In sum, “autonomy” in weapon systems refers to the degree to which a weapon system operates free from meaningful human direction, monitoring, and control. Weapon systems that operate without those human checks would raise unique legal issues if their operations led to violations of international law. The next post in this series will examine why AWSs operating without human direction and supervision might violate the international laws governing armed conflict.
Editor’s Note: This is the second entry in a weekly series of posts for the Future of Life Institute regarding the legal vacuum surrounding autonomous weapons. This entry is cross-posted on FLI’s website. The first entry can be found here. Other entries in this series cover why the deployment of AWSs could lead to violations of the laws of armed conflict, the accountability problem that autonomous weapons would present (including a deeper look at the problem of foreseeing what an AWS will do), and potential legal approaches to autonomous weapons.