On Robot-Delivered Bombs
“In An Apparent First, Police Used A Robot To Kill.” So proclaimed a Friday headline on NPR’s website, referring to the method Dallas police used to end the standoff with Micah Xavier Johnson, the Army veteran who shot 12 police officers and killed five of them on Thursday night. Johnson had holed himself up in a garage after his attack and told police negotiators that he would kill more officers in the final standoff. As Dallas Police Chief David Brown said at a news conference on Friday morning, “[w]e saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the subject was. Other options would have exposed our officers to grave danger.”
The media’s coverage of this incident generally has glossed over the nature of the “robot” that delivered the lethal bomb. The robot was not an autonomous weapon system that operated free of human control, which is what most people picture when they hear the term “killer robot.” Rather, it was a remote-controlled bomb disposal robot (that was sent, ironically, to deliver and detonate a bomb rather than to remove or defuse one). Such a robot operates in more or less the same manner as the unmanned aerial vehicles or “drones” that have seen increasing military and civilian use in recent years. As with drones, there is a human somewhere who controls every significant aspect of the robot’s movements.
Legally, I don’t think the use of such a remote-controlled device to deliver lethal force presents any special challenges. Because a human is continuously in control of the robot, albeit from a remote location, the lines of legal liability are no different than if the robot’s human operator had walked over and placed the bomb himself. I don’t think that entering the command that detonates a robot-delivered bomb is any different from a legal standpoint than a sniper pulling the trigger on his rifle. The accountability problems that arise with autonomous weapons simply are not present when lethal force is delivered by a remote-controlled device.
That is not to say, however, that there are no ethical challenges with police delivering lethal force remotely. As with aerial drones, a bomb disposal robot can deliver lethal force without placing the humans making the decision to kill in any physical risk. The absence of risk creates a danger that the technology will be overused.
That issue has already been widely discussed in the context of military drones. In military settings, commanders think carefully before ordering pilots to fly into combat zones to conduct air strikes, because they know it will place those pilots at risk. They presumably have less hesitation about ordering air strikes using drones, which would not place any of the men and women under their command in harm’s way. That absence of physical risk may make the decision to use lethal force easier, as explained in a 2014 report produced by the Stimson Center on US Drone Policy:
The increasing use of lethal UAVs may create a slippery slope leading to continual or wider wars. The seemingly low-risk and low-cost missions enabled by UAV technologies may encourage the United States to fly such missions more often, pursuing targets with UAVs that would be deemed not worth pursuing if manned aircraft or special operation forces had to be put at risk. For similar reasons, however, adversarial states may be quicker to use force against American UAVs than against US manned aircraft or military personnel. UAVs also create an escalation risk insofar as they may lower the bar to enter a conflict, without increasing the likelihood of a satisfactory outcome.
The same concerns apply to police use of robots to deliver lethal force in civilian settings. The exceptional danger that police faced in the Dallas standoff makes the use of robot-delivered force in that particular situation seem fairly reasonable. But the concern is that police will be increasingly tempted to use the technology in less exceptional situations. As Ryan Calo said in the NPR story, “the time to get nervous about police use of robots isn’t in extreme, anomalous situations with few good options like Dallas, but if their use should become routine.” The danger is that the low-risk nature of robot-delivered weapons makes it more likely that their use will become routine.
Of course, there is another side to that coin. Human police officers facing physical danger, or even believing that they are facing such danger, may panic or overreact. They may, out of a sense of self-preservation, use lethal force in situations where it is not warranted. That may well have been what happened in the shooting of Philando Castile, one of the two relatively young black men whose tragic and unnecessary deaths at the hands of police apparently drove Micah Xavier Johnson to open fire on Dallas police officers. A police officer controlling a drone or similar device from the safety of a control room will feel no similar compulsion to use lethal force for reasons of self-preservation.
Legally, I think the bottom line should be this: police departments’ policies on the use of lethal force should be the same regardless of whether that force is delivered personally or remotely. Many departments’ policies and standards have come under increased scrutiny due to the high-profile police shootings of the past few years, but the gist of those policies is still almost always some variation of: “police officers are not allowed to use lethal force unless they reasonably believe that the use of such force is necessary to prevent death or serious injury to the officer or a member of the public.”
I think that standard was met in Dallas. And who knows? Since the decision to use a robot-delivered bomb came about only because of the unique nature of the Dallas standoff, it’s possible that we won’t see another similar use of robots by police for years to come. But if such an incident does happen again, we may look back on the grisly and dramatic end to the Dallas standoff as a turning point.