Destroying Hezbollah’s missile cache: A proportionality case study and its implications for autonomous weapons


 

Source: Reuters, via “Should Israel Consider Using Devastating Weapons Against Hezbollah Missiles?” (Opinion), Haaretz.com

The concept of proportionality is central to the Law of Armed Conflict (LOAC), which governs the circumstances under which lethal military attacks can be launched under international law.  Proportionality in this context means that the harm done to civilians and civilian property in a given attack must not be excessive in light of the military advantage the attack is expected to gain.  Conceptually, proportionality is supposed to evoke something resembling the scales of justice: if the “weight” of the civilian harm exceeds the “weight” of the military advantage, then the attack must not be launched.  But, of course, proportionality determinations are highly subjective.  The value of civilian property might be easy enough to determine, but there is no easy or obvious way to quantify the “value” of human lives, or of objects and buildings whose significance is religious or historical rather than economic.  Similarly, “military advantage” is not something that can easily be quantified, and there is certainly no accepted method of “comparing” expected military advantage to the value of civilian lives.

Consider this opinion piece by Amitai Etzioni.  One of the greatest threats to Israel’s security comes from Hezbollah, a Lebanese Shi’a political party and paramilitary force that has carried out numerous terrorist attacks against Israel.  Hezbollah has a cache of 100,000 missiles and rockets, many if not most of which it would no doubt launch into Israel if hostilities between Israel and Hezbollah were to rekindle.  But since most of the missiles are located in private civilian homes, Etzioni asks: “If Hezbollah starts raining them down on Israel, how can these missiles be eliminated without causing massive civilian casualties?”

» Read more

Analysis of the USDOT’s Regulatory Review for Self-Driving Cars (Part 2): Automated vehicle concepts



As discussed in the first part of this analysis, the USDOT Volpe Center’s review of federal regulations (i.e., the Federal Motor Vehicle Safety Standards, or FMVSS) for autonomous vehicles had two components: a “Driver Reference Scan,” which combed through the FMVSS to identify all references to human drivers; and an “Automated Vehicle Concepts Scan,” which examined which of the FMVSS would present regulatory obstacles for the manufacturers of autonomous vehicles.  To perform this scan, the authors of the Volpe Center report identified thirteen separate types of “automated vehicle concepts” or designs, “ranging from near-term automated technologies (e.g., traffic jam assist) to fully automated vehicles that lack any mechanism for human operation.”

Here are those automated vehicle concepts as defined and described in the Volpe report:

» Read more

Tay the Racist Chatbot: Who is responsible when a machine learns to be evil?



By far the most entertaining AI news of the past week was the rise and rapid fall of Microsoft’s teen-girl-imitation Twitter chatbot, Tay, whose Twitter tagline described her as “Microsoft’s AI fam* from the internet that’s got zero chill.”

(* Btw, I’m officially old–I had to consult Urban Dictionary to confirm that I was correctly understanding what “fam” and “zero chill” meant. “Fam” means “someone you consider family” and “no chill” means “being particularly reckless,” in case you were wondering.)

The remainder of the tagline declared: “The more you talk the smarter Tay gets.”

Or not.  Within 24 hours of going online, Tay started saying some weird stuff.  And then some offensive stuff.  And then some really offensive stuff.  Like calling Zoe Quinn a “stupid whore.”  And saying that the Holocaust was “made up.”  And saying that black people (she used a far more offensive term) should be put in concentration camps.  And that she supports a Mexican genocide.  The list goes on.

So what happened?  How could a chatbot go full Goebbels within a day of being switched on?  Basically, Tay was designed to develop her conversational skills through machine learning, most notably by analyzing and incorporating the language of tweets sent to her by human social media users.  What Microsoft apparently did not anticipate is that Twitter trolls would intentionally try to get Tay to say offensive or otherwise inappropriate things.  At first, Tay simply repeated the inappropriate things that the trolls said to her.  But before too long, Tay had “learned” to say inappropriate things without a human goading her to do so.  This was all but inevitable given that, as Tay’s tagline suggests, Microsoft designed her to have no chill.
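
Microsoft has not published Tay’s internals, so the sketch below is only a hypothetical illustration of the failure mode, not Microsoft’s actual design: a toy chatbot that folds every incoming message into its response pool with no filtering. Once coordinated trolls dominate the input stream, they dominate the output as well.

```python
import random

class NaiveChatbot:
    """Toy chatbot that "learns" by adding every user message to its
    response pool. A hypothetical sketch of the general failure mode,
    not a reconstruction of Tay's actual architecture."""

    def __init__(self, seed_lines):
        # Start with a curated, innocuous corpus.
        self.corpus = list(seed_lines)

    def learn(self, user_message):
        # No moderation or filtering: whatever users say becomes part
        # of the bot's repertoire.
        self.corpus.append(user_message)

    def reply(self):
        # Replies are drawn from the unfiltered corpus, so the output
        # distribution simply mirrors the input distribution.
        return random.choice(self.corpus)

bot = NaiveChatbot(["hello there!", "tell me about your day"])
for tweet in ["a normal tweet", "an offensive troll tweet", "an offensive troll tweet"]:
    bot.learn(tweet)
print(bot.reply())  # a 2-in-5 chance of echoing the trolls
```

The point is the design choice, not the particular mechanism: if the training stream is unmoderated, the bot’s output inevitably reflects whatever its users feed it.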

Now, anyone who is familiar with the social media cyberworld should not be surprised that this happened–of course a chatbot designed with “zero chill” would learn to be racist and inappropriate, because the Twitterverse is filled with people who say racist and inappropriate things.  But fascinatingly, in examining why the Degradation of Tay happened, the media has focused overwhelmingly on the people who interacted with Tay rather than on the people who designed her.

» Read more

Analysis of the USDOT’s Regulatory Review for Self-Driving Cars (Part 1): References to “drivers” in the federal regulations

Editor’s Note: Apologies for the unannounced gap between posts.  I have been on parental leave for the past two weeks bonding with my newborn daughter.  In lieu of the traditional cartoon, I will be spamming you today with a photo of Julia (see bottom of post).  Now, back to AI.


The U.S. Department of Transportation recently released a report “identifying potential barriers and challenges for the certification of automated vehicles” under the current Federal Motor Vehicle Safety Standards (FMVSS).  Identifying such barriers is essential to the development and deployment of autonomous vehicles because the manufacturer of a new motor vehicle must certify that it complies with the FMVSS.

The FMVSS require American cars and trucks to include numerous operational and safety features, ranging from brake pedals to warning lights to airbags.  They also specify test procedures designed to assess new vehicles’ safety and their compliance with those standards.

The new USDOT report consists of two components: (1) a review of the FMVSS “to identify which standards include an implicit or explicit reference to a human driver,” which the report’s authors call a driver reference scan; and (2) a review that evaluates the FMVSS against “13 different automated vehicle concepts, ranging from limited levels of automation . . . to highly automated, driverless concepts with innovative vehicle designs,” termed an automated vehicle concepts scan.  This post will address the driver reference scan, which dovetails nicely with my previous post on automated vehicles.

As noted in that post, the FMVSS define a “driver” as “the occupant of a motor vehicle seated immediately behind the steering control system.”  It is clear both from this definition and from other regulations that “driver” refers to a human driver.  (And again, as explained in my previous post, the NHTSA’s recent letter to Google did not change this regulation or redefine “driver” under the FMVSS, media reports to the contrary notwithstanding.)  Any FMVSS reference to a “driver” thus presents a regulatory compliance challenge for makers of truly self-driving cars, since such vehicles may not have a human driver–or, in some cases, even a human occupant.
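
For readers curious what a “driver reference scan” looks like in practice, the toy sketch below runs a keyword search over snippets of regulatory text. This is purely illustrative: the Volpe review was performed by analysts reading the standards themselves, and the section labels and text in the example are invented, not quoted from the FMVSS.

```python
import re

# Hypothetical illustration of a "driver reference scan": flag every piece
# of regulatory text that explicitly mentions a human driver. The section
# labels and text below are made up for the example.
DRIVER_TERMS = re.compile(r"\bdriver(?:'s)?\b", re.IGNORECASE)

def scan_for_driver_references(sections):
    """Map each section label to the number of driver references it contains."""
    return {label: len(DRIVER_TERMS.findall(text))
            for label, text in sections.items()
            if DRIVER_TERMS.search(text)}

sample_sections = {
    "example standard A": "The driver shall be able to activate the control "
                          "from the driver's designated seating position.",
    "example standard B": "Each vehicle shall be equipped with a brake system "
                          "meeting the requirements of this section.",
}
print(scan_for_driver_references(sample_sections))
# {'example standard A': 2}
```

A real scan also has to catch implicit references to a driver, which is presumably why the report’s authors distinguish between implicit and explicit references rather than relying on keywords alone.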

» Read more

Who’s to Blame (Part 6): Potential Legal Solutions to the AWS Accountability Problem

The law abhors a vacuum.  So it is all but certain that, sooner or later, international law will come up with mechanisms for fixing the autonomous weapon system (AWS) accountability problem.  How might the current AWS accountability gap be filled?

The simplest solution—and the one advanced by Human Rights Watch (HRW) and the not-so-subtly-named Campaign to Stop Killer Robots (CSKR)—is to ban “fully autonomous” weapon systems completely.  As noted in the second entry in this series, the HRW defines such an AWS as one that can select and engage targets without specific orders from a human commander (that is, without human direction) and operate without real-time human supervision (that is, monitoring and control). One route to such a ban would be adding an AWS-specific protocol to the Convention on Certain Conventional Weapons (CCW), which covers incendiary weapons, landmines, and a few other categories of conventional (i.e., not nuclear, biological, or chemical) weapons. The signatories to the CCW held informal meetings on AWSs in May 2014 and April 2015, but it does not appear that the addition of an AWS protocol to the CCW is under formal consideration.

In any event, there is ample reason to question whether the CCW would be an effective vehicle for regulating AWSs. The current CCW contains few outright bans on the weapons it covers (the CCW protocol on incendiary weapons does not bar the napalming of enemy forces) and has no mechanisms whatsoever for verification or enforcement.  The CCW’s limited impact on landmines is illustrated by the fact that the International Campaign to Ban Landmines (which, incidentally, seriously needs to hire someone to design a new logo) was created nine years after the CCW’s protocol covering landmines went into effect.

Moreover, even an outright ban on “fully” autonomous weapons does not adequately account for the fact that weapon systems can have varying types and degrees of autonomy.  Serious legal risks would still accompany the deployment of AWSs with only limited autonomy, but those risks would not be covered by a ban on fully autonomous weapons.

A more balanced solution might require continuous human monitoring and adequate means of control whenever an AWS is deployed in combat, with a presumption of negligence (and therefore command responsibility) attaching to the commander responsible for monitoring and controlling an AWS that commits an illegal act.  That presumption could be overcome only if the human being shows that he or she exercised due care in monitoring and controlling the AWS.  This would ensure that at least one human being would always have a strong legal incentive to supervise an AWS that is engaged in combat operations.

An even stronger form of command responsibility based on strict liability might seem tempting at first, but applying a strict liability standard to command responsibility for AWSs would be problematic because, as noted in the previous entry in this series, multiple officers in the chain of command may play a role in deciding whether, when, where, and how to deploy an AWS during a particular operation (to say nothing of the personnel responsible for designing and programming the AWS).  It would be difficult to fairly determine how far up (or down) the chain of command and how far back in time criminal responsibility should attach.


Much, much more can and will be said about each of the above topics in the coming weeks and months.  For now, here are a few recommendations for deeper discussions on the legal accountability issues surrounding AWSs:

  • Human Rights Watch, Mind the Gap: The Lack of Accountability for Killer Robots (2015)
  • International Committee of the Red Cross, Autonomous weapon systems: technical, military, legal and humanitarian aspects (2014)
  • Michael N. Schmitt & Jeffrey S. Thurnher, “Out of the Loop”: Autonomous Weapon Systems and the Law of Armed Conflict, 4 Harv. Nat’l Sec. J. 231 (2013)
  • Gary D. Solis, The Law of Armed Conflict: International Humanitarian Law in War (2015), chapters 10 (“Command Responsibility and Respondeat Superior”) and 16 (“The 1980 Certain Conventional Weapons Convention”)
  • U.S. Department of Defense Directive No. 3000.09 (“Autonomy in Weapon Systems”), issued Nov. 21, 2012
  • Wendell Wallach & Colin Allen, Framing Robot Arms Control, 15 Ethics and Information Technology 125 (2013)

Will technology send us stumbling into negligence?

Two stories that broke this week illustrate the hazards that can come from our ever-increasing reliance on technology.  The first story is about an experiment conducted at Georgia Tech where a majority of students disregarded their common sense and followed the path indicated by a robot wearing a sign that read “EMERGENCY GUIDE ROBOT”:

A university student is holed up in a small office with a robot, completing an academic survey. Suddenly, an alarm rings and smoke fills the hall outside the door. The student is forced to make a quick choice: escape via the clearly marked exit that they entered through, or head in the direction the robot is pointing, along an unknown path and through an obscure door.

The vast majority of students–26 out of the 30 included in the experiment–went where the robot was pointing.  As it turned out, there was no exit in that direction.  The remaining four students either stayed in the room or were unable to complete the experiment.  No student, it seems, simply went out the way they came in.

Many of the students attributed their decision to disregard the correct exit to the “Emergency Guide Robot” sign, which suggested that the robot was specifically designed to tell them where to go in emergency situations.  According to the Georgia Tech researchers, these results suggest that people will “automatically trust” a robot that “is designed to do a particular task.”  The lead researcher analogized this trust “to the way in which drivers sometimes follow the odd routes mapped by their GPS devices,” saying that “[a]s long as a robot can communicate its intentions in some way, people will probably trust it in most situations.”

As if on cue, this happened the very same day that the study was released:

» Read more

Who’s to Blame (Part 5): A Deeper Look at Predicting the Actions of Autonomous Weapons

Source: Dilbert by Scott Adams, March 6, 2011


An autonomous weapon system (AWS) is designed and manufactured in a collaborative project between American and Indian defense contractors. It is sold to numerous countries around the world. This model of AWS is successfully deployed in conflicts in Latin America, the Caucasus, and Polynesia without violating the laws of war. An American Lt. General then orders that 50 of these units be deployed during a conflict in the Persian Gulf for use in ongoing urban combat in several cities. One of those units had previously seen action in urban combat in the Caucasus and desert combat during the same Persian Gulf conflict, all without incident. A Major makes the decision to deploy that AWS unit to assist a platoon engaged in block-to-block urban combat in Sana’a. Once the AWS unit is on the ground, a Lieutenant is responsible for telling the AWS where to go. The Lt. General, the Major, and the Lieutenant all had previous experience using this model of AWS and had given similar orders to such units in prior combat situations without incident.

The Lieutenant has lost several men to enemy snipers over the past several weeks. He orders the AWS to accompany one of the squads under his command and preemptively strike any enemy sniper nests it detects–again, an order he had given to other AWS units before without incident. This time, the AWS unit misidentifies a nearby civilian house as containing a sniper nest, based on the fact that houses with similar features had frequently been used as sniper nests in the Caucasus conflict. It launches an RPG at the house. There are no snipers inside, but there are 10 civilians–all of whom are killed by the RPG. Human soldiers who had been fighting in the area would have known that that particular house likely did not contain a sniper nest, because the glare from the sun off a nearby glass building reduces visibility on that side of the street at the times of day when American soldiers typically patrol the area–a fact that the human soldiers knew well from prior combat there, but a variable that the AWS had not been programmed to take into consideration.

In my most recent post for FLI on autonomous weapons, I noted that it would be difficult for humans to predict the actions of autonomous weapon systems (AWSs) programmed with machine learning capabilities.  If the military commanders responsible for deploying AWSs were unable to reliably foresee how the AWS would operate on the battlefield, it would be difficult to hold those commanders responsible if the AWS violates the law of armed conflict (LOAC).  And in the absence of command responsibility, it is not clear whether any human could be held responsible under the existing LOAC framework.

A side comment from a lawyer on Reddit made me realize that my reference to “foreseeability” requires a bit more explanation.  “Foreseeability” is one of those terms that makes lawyers’ ears perk up when they hear it because it’s a concept that every American law student encounters when learning the principles of negligence in their first-year class on Tort Law.

» Read more

Who’s to Blame (Part 4): Who’s to Blame if an Autonomous Weapon Breaks the Law?



The previous entry in this series examined why it would be very difficult to ensure that autonomous weapon systems (AWSs) consistently comply with the laws of war.  So what would happen if an attack by an AWS resulted in the needless death of civilians or otherwise constituted a violation of the laws of war?  Who would be held legally responsible?

In that regard, AWSs’ ability to operate free of human direction, monitoring, and control would raise legal concerns not shared by drones and other earlier generations of military technology.  It is not clear who, if anyone, could be held accountable if and when AWS attacks result in illegal harm to civilians and their property.  This “accountability gap” was the focus of a 2015 Human Rights Watch report.  The HRW report ultimately concluded that there was no plausible way to resolve the accountability issue and therefore called for a complete ban on fully autonomous weapons.

Although some commentators have taken issue with this prescription, the diagnosis seems to be correct—it simply is not clear who could be held responsible if an AWS commits an illegal act.  This accountability gap exists because AWSs would incorporate AI technology that could collect information and determine courses of action based on the conditions in which they operate.  It is unlikely that even the most careful human programmers could anticipate the nearly infinite on-the-ground circumstances that an AWS could face.  It would therefore be difficult for an AWS designer–to say nothing of its military operators–to foresee how the AWS would react in the fluid, fast-changing world of combat operations.  The inability to foresee an AWS’s actions would complicate the assignment of legal responsibility.

» Read more

On subjectivity, AI systems, and legal decision-making

Source: Dilbert

The latest entry in my series of posts on autonomous weapon systems (AWSs) suggested that it would be exceedingly difficult to ensure that AWSs complied with the laws of war.  A key reason for this difficulty is that the laws of war depend heavily on subjective determinations.  One might easily expand this point and argue that AI systems cannot–or should not–make any decisions that require interpreting or applying law because such legal determinations are inherently subjective.

Ever the former judicial clerk, I can’t resist pausing for a moment to define my terms.  “Subjective” can have subtly different meanings depending on the context.  Here, I’m using the term to mean something that is a matter of opinion rather than a matter of fact.  In law, I would say that identifying what words are used in the Second Amendment is an objective matter; discerning what those words mean is a subjective matter.  All nine justices who decided DC v. Heller (and indeed, anyone with access to an accurate copy of the Bill of Rights) agreed that the Second Amendment reads: “A well regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.”  They disagreed quite sharply about what those words mean and how they relate to each other.  (Legal experts even disagree on what the commas in the Second Amendment mean).

Given that definition of “subjective,” here are some observations.

» Read more

Who’s to Blame (Part 3): Could Autonomous Weapon Systems Navigate the Law of Armed Conflict?

“Robots won’t commit war crimes. We just have to program them to follow the laws of war.” This is a rather common response to the concerns surrounding autonomous weapons, and it has even been advanced as a reason that robot soldiers might be less prone to war crimes than human soldiers. But designing such autonomous weapon systems (AWSs) is far easier said than done. True, if we could design and program AWSs that always obeyed the international law of armed conflict (LOAC), then the issues raised in the previous segment of this series — which suggested the need for human direction, monitoring, and control of AWSs — would be completely unfounded. But even if such programming prowess is possible, it seems unlikely to be achieved anytime soon. Instead, we need to be prepared for powerful AWSs that may not recognize where the lines blur between what is legal and reasonable during combat and what is not.

While the basic LOAC principles seem straightforward at first glance, their application in any given military situation depends heavily on the specific circumstances in which combat takes place. And the difference between legal and illegal acts can be blurry and subjective. It therefore would be difficult to reduce the laws and principles of armed conflict to a definite and programmable form that could be encoded into an AWS and from which the AWS could consistently make battlefield decisions that comply with the laws of war.
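
To make that difficulty concrete, consider what a naive attempt to reduce the proportionality rule to “a definite and programmable form” might look like. The sketch below is hypothetical: the numeric scores, the threshold, and the assumption that civilian harm and military advantage can be placed on a common scale are all inventions that the law itself does not supply, which is precisely the problem.

```python
# Hypothetical, deliberately naive "rules-as-code" proportionality check.
# The LOAC provides no numeric scale for military advantage or for the
# value of civilian lives, so every constant here is arbitrary.

def proportionality_check(expected_civilian_harm: float,
                          expected_military_advantage: float,
                          threshold: float = 1.0) -> bool:
    """Return True if an attack is "permitted" under this toy rule.

    Both inputs are unitless scores that some upstream model would have to
    invent; the comparison assumes the two are commensurable, which the law
    does not say and which commanders must judge case by case.
    """
    if expected_military_advantage <= 0:
        # No anticipated military advantage means no lawful attack.
        return False
    return (expected_civilian_harm / expected_military_advantage) < threshold

# A system applying this rule treats radically different situations
# identically whenever the ratios happen to match.
print(proportionality_check(expected_civilian_harm=9.0,
                            expected_military_advantage=10.0))  # True
```

The tidy function hides, rather than resolves, the subjective judgments the law requires: where the scores come from, and why that particular threshold, are exactly the questions a programmer cannot answer in advance.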

» Read more
