Destroying Hezbollah’s missile cache: A proportionality case study and its implications for autonomous weapons
The concept of proportionality is central to the Law of Armed Conflict (LOAC), which governs the circumstances under which lethal military attacks can be launched under international law. Proportionality in this context means that the harm done to civilians and civilian property in a given attack must not be excessive in light of the military advantage the attack is expected to gain. Conceptually, proportionality is supposed to evoke something resembling the scales of justice: if the “weight” of the civilian harm exceeds the “weight” of the military advantage, then an attack must not be launched. But, of course, proportionality determinations are highly subjective. The value of civilian property might be easy enough to determine, but there is no easy or obvious way to quantify the “value” of human lives, or of objects and buildings of religious or historical (as opposed to economic) significance. Similarly, “military advantage” is not something that can easily be quantified, and there is certainly no accepted method of “comparing” expected military advantage to the value of civilian lives.
Consider this opinion piece by Amitai Etzioni. One of the greatest threats to Israel’s security comes from Hezbollah, a Lebanese Shi’a political party and paramilitary force that has carried out numerous terrorist attacks against Israel. Hezbollah has a cache of 100,000 missiles and rockets, many, if not most, of which it would no doubt launch into Israel if hostilities between Israel and Hezbollah were to reignite. But since most of the missiles are located in private civilian homes, Etzioni asks: “If Hezbollah starts raining them down on Israel, how can these missiles be eliminated without causing massive civilian casualties?”
The article raises two possibilities. Option A, the approach ostensibly touted by Israel’s military, is to have “Israeli soldiers dash from building to building to find” the missiles:
True to the tactics of urban warfare, the IDF trainees entered the buildings through windows rather than doors, to avoid booby traps. And they were sure that once they cleared an area they still left some soldiers to hold it — because Hezbollah has built many tunnels which connect the buildings and which allow its forces to pop up in buildings that have already been cleared.
As Etzioni points out, Option A would undoubtedly result in the deaths of a great many Israeli soldiers, to say nothing of Lebanese civilians. Etzioni notes that Israel adopted a similar strategy in the 2006 Lebanon War, which remains the most recent large-scale clash between Israel and Hezbollah. The strategy failed miserably; by the end of the war, Hezbollah was actually launching more missiles into Israel than when the missile attacks first started.
The Option B that Etzioni ultimately suggests is, to say the least, alarming:
On returning to the U.S., I asked two American military officers what other options Israel has. They both pointed to Fuel-Air Explosives [FAE]. These are bombs that disperse an aerosol cloud of fuel which is ignited by a detonator, producing massive explosions. The resulting rapidly expanding wave flattens all buildings within a considerable range.
The Wikipedia page on FAEs, which are also termed “thermobaric weapons,” notes that such devices “have the longest sustained blast wave and most destructive force of any known explosive, excluding nuclear weapons.”
Etzioni dodges such a weapon’s obvious potential for immense collateral damage by stating that “such weapons obviously would be used only after the population was given a chance to evacuate the area.” Now granted, I’m no soldier, and Etzioni certainly is. But it seems to me that the evacuation hope is impractical (at best). If civilians had the time and the means to evacuate a neighborhood prior to an attack, then the Hezbollah fighters would likewise have time to evacuate, and probably to bring many of their missiles and other weapons with them. If so, a truly effective FAE attack would have to be carried out fairly quickly, and would no doubt result in the loss of many civilian lives and the devastation of civilian property in the targeted area.
Would such an attack be proportional if it eliminated a massive missile cache that was being (and would continue to be) used to attack targets inside Israel? Reasonable minds could easily differ. That is a major part of the reason why Israel’s responses to missile and rocket attacks from Lebanon and Gaza inspire such heated debate. Israel often has no choice but to fire into civilian-populated areas if it wishes to destroy missile and rocket launch sites. That means that most effective Israeli attacks on such sites will result in civilian casualties. But how many such casualties would constitute a “disproportionate” price to pay for destroying those sites? And under Option A, how should the potential loss of life to Israeli soldiers in an urban combat operation to destroy the missiles be quantified? Could the life of an Israeli soldier be worth more or less than the life of a Lebanese civilian in the proportionality analysis? Plainly, these are all highly subjective issues, and even seasoned lawyers and military officers disagree when trying to answer them.
Now say that in addition to Option A (urban combat) and Option B (carpet bombing on steroids), there is an Option C: deploying an autonomous weapon system (AWS) to northern Israel with instructions to monitor incoming missile attacks and execute a proportional military response. The problem with programming such an AWS should be obvious. How could we program the concept of “proportionality” into an AWS so that it could make LOAC-compliant attack decisions? To carry out a responsive attack without running afoul of LOAC, the AWS would have to answer, either implicitly or explicitly, all of the questions posed in the preceding paragraph. That means its human designers and commanders would have to find some way to quantify or otherwise formalize the value of civilian lives and property, and the extent of the military advantage an attack would gain, and encode those values into the AWS.
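To make the difficulty concrete, here is a deliberately crude Python sketch of what “programming proportionality” would force a designer to do. Every name and numeric constant below is an invented assumption for illustration only: LOAC supplies no exchange rate between civilian lives, property damage, and military advantage, which is precisely the point.

```python
# Toy illustration of encoding a proportionality test -- NOT a real
# targeting system. All constants are arbitrary assumptions; no
# defensible version of these numbers exists.

from dataclasses import dataclass


@dataclass
class StrikeAssessment:
    """A hypothetical pre-strike estimate an AWS would be fed."""
    expected_civilian_deaths: int
    expected_property_damage_usd: float
    expected_military_advantage: float  # unitless "advantage score" -- LOAC defines no such unit


# The "exchange rates" a programmer would be forced to invent:
VALUE_PER_CIVILIAN_LIFE = 1_000_000.0  # arbitrary
VALUE_PER_ADVANTAGE_UNIT = 500_000.0   # arbitrary


def is_proportional(a: StrikeAssessment) -> bool:
    """Return True if expected harm is not 'excessive' relative to advantage.

    The inequality below only works because we invented numeric weights
    above; the law itself gives no formula for this comparison.
    """
    harm = (a.expected_civilian_deaths * VALUE_PER_CIVILIAN_LIFE
            + a.expected_property_damage_usd)
    advantage = a.expected_military_advantage * VALUE_PER_ADVANTAGE_UNIT
    return harm <= advantage
```

The sketch runs, but that is exactly what should be unsettling: change either arbitrary constant and the same strike flips from “lawful” to “unlawful.” The code does not solve the proportionality problem; it just relocates the subjective judgment into two magic numbers.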
This points to what is, I think, a central reason for the uneasiness that people have about AWSs. Most people are (understandably) squeamish enough when discussing how human military commanders should respond to Hezbollah rocket attacks. But we are positively repulsed by the concept of having to reduce human lives to 1s and 0s, and that is exactly what we would have to do if we were to program an AWS with the discretion to decide whether to launch a lethal attack.