NHTSA and Autonomous Vehicles (Part 3): Hearings and Strange Bedfellows



This is the final segment in a three-part series on NHTSA and autonomous vehicles.  The first two parts can be read here and here.


So what went down at NHTSA’s two public hearings?  I could not find video of the first hearing, which was held in Washington DC, and so I’ve relied on press reports of the goings-on at that initial hearing.  The full video of the second hearing, which was held in Silicon Valley, is available on YouTube.

Most of the speakers at these two hearings were representatives of tech and automotive industry companies, trade organizations, and disability advocacy groups who touted the promise and benefits that AV technologies will bring.  Already, vehicles with automated features have a level of situational awareness that even the most alert human driver could never hope to match.  Sensors and cameras can detect everything that is going on around the vehicle in every direction–and AI systems can ‘focus’ on all that information more-or-less simultaneously.  Human drivers, by contrast, have a limited field of vision and have trouble maintaining awareness of everything that is going on even in that narrow field.

AI drivers also won’t get drunk, get tired, or text while driving.  (Well, actually they could send texts while driving, but unlike with humans, doing so would not hinder their ability to safely operate a vehicle).  Their reaction time can make human drivers look like sloths.  Perhaps most significantly, they could give people with physical disabilities the ability to commute and travel without the need to rely on other people to drive them.  If you follow developments in the field, then all of that is old news–but that does not make it any less enticing.

» Read more

NHTSA and Autonomous Vehicles (Part 2): Will Regulations (Or Lack Thereof) Keep Automated Vehicle Development Stuck in Neutral?

Source: DailyMail.com


This is part 2 of a series on NHTSA and Autonomous Vehicles.  Part 1, published May 8, discussed the 5 levels of automation that NHTSA established, with Level 0 being a completely human-controlled car and Level 4 being a vehicle capable of fully autonomous operation on the roads.  Part 3 discusses NHTSA’s April 2016 public hearings on the subject.
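For quick reference, here is a minimal sketch of that five-level taxonomy as a Python enum.  The level names are paraphrased from NHTSA’s 2013 preliminary policy statement; the one-line descriptions are my own glosses, not the agency’s definitions.

```python
from enum import IntEnum

class NHTSAAutomationLevel(IntEnum):
    """The five automation levels from NHTSA's 2013 preliminary policy statement (paraphrased)."""
    NO_AUTOMATION = 0          # The human driver is in complete and sole control at all times
    FUNCTION_SPECIFIC = 1      # One or more specific functions are automated (e.g., cruise control)
    COMBINED_FUNCTION = 2      # At least two primary control functions work in unison (e.g., adaptive
                               # cruise control combined with lane centering)
    LIMITED_SELF_DRIVING = 3   # The vehicle drives itself under some conditions, but the human must be
                               # available to take over with adequate transition time
    FULL_SELF_DRIVING = 4      # The vehicle performs all safety-critical functions for the entire trip

print(NHTSAAutomationLevel.FULL_SELF_DRIVING)   # NHTSAAutomationLevel.FULL_SELF_DRIVING
```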


I must confess that I am very much an optimist about the promise of Level 4 vehicles–and not just because I really, really love the idea of having the ability to do stuff on my commute to work without having to scramble for one of the 2 good seats on a Portland bus (yes, there are always only 2). The potential benefits that autonomous vehicles could bring are already well-publicized, so I won’t spend much time rehashing them here.  Suffice it to say that, in addition to their added convenience, such vehicles should prove far safer than vehicles controlled by human drivers and would give persons with physical disabilities a much greater ability to get around without having to rely on other people.

But while I am optimistic about the benefits of Level 4 vehicles, I am not optimistic that NHTSA–and NHTSA’s counterparts in other countries–will act quickly enough to ensure that Level 4 vehicles will be able to hit the road as soon as they could and should.  As prior posts have noted, there are few federal regulations (i.e., rules that appear in the Federal Motor Vehicle Safety Standards) that would present a significant obstacle to vehicles with up to Level 3 automation.  But going from Level 3 to Level 4 may present difficulties–especially if, as in the case of Google’s self-driving car, the vehicle is designed in a manner (e.g., without a steering wheel, brake pedal, or gear shift) that makes it impossible for a human driver to take control of the vehicle.

The difficulty of changing regulations to allow Level 4 vehicles creates a risk that automated vehicle technology will be stuck at Level 2 and Level 3 for a long time–and that might be worse than the current mix of Level 0, Level 1, and ‘weak’ Level 2 vehicles that fill most of the developed world’s roads.

» Read more

NHTSA and Autonomous Vehicles (Part 1): The 5 levels of automation



During the last month, the National Highway Traffic Safety Administration (“NHTSA,” the agency that didn’t redefine “driver” in February) held two public hearings on autonomous vehicles (“AVs”), one in Washington DC on April 8 and another at Stanford, in the heart of Silicon Valley, on April 27.  As you might expect, press reports of the two events suggested that the Silicon Valley gathering drew speakers more enthusiastic about the promise of AVs and more intent on urging NHTSA not to let regulations stifle innovation in the field.

These public hearings are an important and positive sign that NHTSA is serious about moving forward with the regulatory changes that will be necessary before autonomous vehicles become available to the general public.  But before turning to what went down at these hearings (and to buy some time for me to watch the full video of the second hearing), it’s worth pausing to give some background on NHTSA’s involvement with autonomous vehicles.

NHTSA has shown increasing interest in automation since 2013, when it issued an official policy statement that defined five levels of vehicle automation.

» Read more

Too smart for our own good?

Source: Dilbert comic strip by Scott Adams, Feb. 11, 1992


Two stories this past week caught my eye.  The first is Nvidia’s unveiling of its new, AI-focused Tesla P100 computer chip.  Introduced at April’s annual GPU Technology Conference, the P100 is the largest computer chip in history in terms of the number of transistors, “the product of around $2.5 billion worth of research and development at the hands of thousands of computer engineers.”  Nvidia CEO Jen-Hsun Huang said that the chip was designed and dedicated “to accelerating AI; dedicated to accelerating deep learning.”  But the revolutionary potential of the P100 depends on AI engineers coming up with new algorithms that can leverage the full range of the chip’s capabilities.  Absent such advances, Huang said, the P100 would end up being the “world’s most expensive brick.”

The development of the P100 demonstrates, in case we needed a reminder, the immense technical advances that have been made in computing power in recent years and highlights the possibilities those developments raise for AI systems that can be designed to perform (and even learn to perform) an ever-increasing variety of human tasks.  But an essay by Adam Elkus that appeared this week in Slate questions whether we have the ability–or for that matter, will ever have the ability–to program an AI system with human values.

I’ll open with a necessary criticism: much of Elkus’s essay seems like an extended effort to annoy Stuart Russell.  (The most amusing moment in the essay comes when Elkus suggests that Russell, who literally wrote the book on AI, needs to bone up on his AI history.)  Elkus devotes much of his virtual ink to cobbling together out-of-context snippets from a year-old interview that Russell gave to Quanta Magazine and using those snippets to form strawman arguments that he then attributes to Russell.  But despite the strawmen and snide comments, Elkus makes some good points on the vexing issue of how to program ethics and morality into AI systems.

» Read more

Selective Revelation: Should we let robojudges issue surveillance and search warrants?

Credit: SimplySteno Court Reporting Blog


AI systems have an increasing ability to perform legal tasks that used to be within the exclusive province of lawyers.  Anecdotally, it seems that both lawyers and the general public are getting more and more comfortable with the idea that legal grunt work–drafting contracts, reviewing voluminous documents, and the like–can be performed by computers with varying levels of (human) lawyer oversight.  But the idea of a machine acting as a judge is another matter entirely; people don’t seem keen on the idea of assigning to machines the task of making subjective legal decisions on matters such as liability, guilt, and punishment.

Consequently, I was intrigued when Thomas Dietterich pointed me to the work of computer scientist Dr. Latanya Sweeney on “selective revelation.”  Sweeney, who serves as Professor of Government and Technology in Residence at Harvard, came up with selective revelation as a method of what she terms “privacy-preserving surveillance,” i.e., balancing privacy protection with the need for surveillance entities to collect and share electronic data that might reveal potential security threats or criminal activity.

She proposes, in essence, creating a computer model that would mimic, albeit in a nuanced fashion, the balancing test that human judges undertake when determining whether to authorize a wiretap or issue a search warrant:
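(Her proposal is quoted in the full post.  As a very rough illustration of the kind of balancing such a model would have to encode, here is a toy sketch of my own; the scoring inputs and the approval margin are hypothetical placeholders, not Sweeney’s algorithm, and assigning those numbers defensibly is precisely the hard part.)

```python
from dataclasses import dataclass

@dataclass
class SurveillanceRequest:
    suspicion_score: float   # hypothetical 0-1 measure of the evidence supporting the request
    intrusion_score: float   # hypothetical 0-1 measure of the privacy intrusion involved
    data_minimized: bool     # does the request limit collection to what is actually needed?

def authorize(request: SurveillanceRequest, margin: float = 0.3) -> bool:
    """Toy stand-in for a judge's balancing test: approve only if the showing of
    suspicion outweighs the privacy intrusion by some margin and collection is minimized."""
    if not request.data_minimized:
        return False
    return (request.suspicion_score - request.intrusion_score) >= margin

# Example: strong showing, modest intrusion, minimized collection -> authorized (prints True)
print(authorize(SurveillanceRequest(suspicion_score=0.9, intrusion_score=0.4, data_minimized=True)))
```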

» Read more

Destroying Hezbollah’s missile cache: A proportionality case study and its implications for autonomous weapons


 

Source: Reuters, via Haaretz, “Should Israel Consider Using Devastating Weapons Against Hezbollah Missiles?” (opinion)

The concept of proportionality is central to the Law of Armed Conflict (LOAC), which governs the circumstances under which lethal military attacks can be launched under international law.  Proportionality in this context means that the harm done to civilians and civilian property in a given attack must not be excessive in light of the military advantage the attack is expected to gain.  Conceptually, proportionality is supposed to evoke something resembling the scales of justice; if the “weight” of the civilian harm exceeds the “weight” of the military advantage, then an attack must not be launched.  But, of course, proportionality determinations are highly subjective.  The value of civilian property might be easy enough to determine, but there is no easy or obvious way to quantify the “value” of human lives, or of objects and buildings of religious or historical (as opposed to economic) significance.  Similarly, “military advantage” is not something that can easily be quantified, and there certainly is no accepted method of “comparing” expected military advantage to the value of civilian lives.
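To make the structure of the rule concrete (while underscoring that its inputs resist quantification), here is a toy sketch of the balancing test described above.  The numeric “weights” are hypothetical stand-ins for judgments that, as just noted, no one actually knows how to reduce to numbers.

```python
def attack_permissible(civilian_harm_weight: float, military_advantage_weight: float) -> bool:
    """Toy form of the LOAC proportionality rule: if the 'weight' of the expected civilian harm
    exceeds the 'weight' of the anticipated military advantage, the attack must not be launched.

    The inputs are hypothetical scalars; assigning real values to human lives, cultural property,
    or military advantage is exactly the subjective judgment the rule leaves to commanders."""
    return civilian_harm_weight <= military_advantage_weight

# Example with made-up weights: harm 'outweighs' advantage, so the attack is impermissible
print(attack_permissible(civilian_harm_weight=0.8, military_advantage_weight=0.5))  # False
```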

Consider this opinion piece by Amitai Etzioni.  One of the greatest threats to Israel’s security comes from Hezbollah, a Lebanese Shi’a political party and paramilitary force that has carried out numerous terrorist attacks against Israel.  Hezbollah has a cache of 100,000 missiles and rockets, many, if not most, of which it would no doubt launch into Israel if hostilities between Israel and Hezbollah were to rekindle.  But since most of the missiles are located in private civilian homes, Etzioni asks: “If Hezbollah starts raining them down on Israel, how can these missiles be eliminated without causing massive civilian casualties?”

» Read more

Analysis of the USDOT’s Regulatory Review for Self-Driving Cars (Part 2): Automated vehicle concepts



As discussed in the first part of this analysis, the USDOT Volpe Center’s review of federal regulations (i.e., the Federal Motor Vehicle Safety Standards, or FMVSS) for autonomous vehicles had two components: a “Driver Reference Scan,” which combed through the FMVSS to identify all references to human drivers; and an “Automated Vehicle Concepts Scan,” which examined which of the FMVSS would present regulatory obstacles for the manufacturers of autonomous vehicles.  To perform this scan, the authors of the Volpe Center report identified thirteen separate types of “automated vehicle concepts” or designs, “ranging from near-term automated technologies (e.g., traffic jam assist) to fully automated vehicles that lack any mechanism for human operation.”

Here are those automated vehicle concepts as defined and described in the Volpe report:

» Read more

Tay the Racist Chatbot: Who is responsible when a machine learns to be evil?



By far the most entertaining AI news of the past week was the rise and rapid fall of Microsoft’s teen-girl-imitation Twitter chatbot, Tay, whose Twitter tagline described her as “Microsoft’s AI fam* from the internet that’s got zero chill.”

(* Btw, I’m officially old–I had to consult Urban Dictionary to confirm that I was correctly understanding what “fam” and “zero chill” meant. “Fam” means “someone you consider family” and “no chill” means “being particularly reckless,” in case you were wondering.)

The remainder of the tagline declared: “The more you talk the smarter Tay gets.”

Or not.  Within 24 hours of going online, Tay started saying some weird stuff.  And then some offensive stuff.  And then some really offensive stuff.  Like calling Zoe Quinn a “stupid whore.”  And saying that the Holocaust was “made up.”  And saying that black people (she used a far more offensive term) should be put in concentration camps.  And that she supports a Mexican genocide.  The list goes on.

So what happened?  How could a chatbot go full Goebbels within a day of being switched on?  Basically, Tay was designed to develop her conversational skills by using machine learning, most notably by analyzing and incorporating the language of tweets sent to her by human social media users.  What Microsoft apparently did not anticipate was that Twitter trolls would intentionally try to get Tay to say offensive or otherwise inappropriate things.  At first, Tay simply repeated the inappropriate things that the trolls said to her.  But before too long, Tay had “learned” to say inappropriate things without a human goading her to do so.  This was all but inevitable given that, as Tay’s tagline suggests, Microsoft designed her to have no chill.
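As a minimal sketch of why this failure mode was predictable (and emphatically not Microsoft’s actual architecture), consider a bot that simply absorbs every message it receives into its own pool of candidate responses, with no moderation step:

```python
import random

class NaiveChatbot:
    """Toy stand-in for a bot that 'learns' conversational lines directly from its users.

    This is not Tay's real design, but it illustrates the failure mode: without filtering,
    every user message eventually becomes something the bot may say back to someone else."""

    def __init__(self, seed_lines):
        self.response_pool = list(seed_lines)

    def reply(self, user_message: str) -> str:
        self.response_pool.append(user_message)      # absorb the input verbatim: no moderation
        return random.choice(self.response_pool)     # so it may come back out later

bot = NaiveChatbot(["hello fam", "zero chill today"])
bot.reply("something a troll would say")             # trolls seed the pool...
print(bot.reply("what do you think?"))               # ...and later users may get it echoed back
```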

Now, anyone who is familiar with the social media cyberworld should not be surprised that this happened–of course a chatbot designed with “zero chill” would learn to be racist and inappropriate, because the Twitterverse is filled with people who say racist and inappropriate things.  But fascinatingly, in examining why the Degradation of Tay happened, the media has overwhelmingly focused on the people who interacted with Tay rather than on the people who designed her.

» Read more

Analysis of the USDOT’s Regulatory Review for Self-Driving Cars (Part 1): References to “drivers” in the federal regulations

Editor’s Note: Apologies for the unannounced gap between posts.  I have been on parental leave for the past two weeks bonding with my newborn daughter.  In lieu of the traditional cartoon, I will be spamming you today with a photo of Julia (see bottom of post).  Now, back to AI.


The U.S. Department of Transportation recently released a report “identifying potential barriers and challenges for the certification of automated vehicles” under the current Federal Motor Vehicle Safety Standards (FMVSS).  Identifying such barriers is essential to the development and deployment of autonomous vehicles because the manufacturer of a new motor vehicle must certify that it complies with the FMVSS.

The FMVSS require American cars and trucks to include numerous operational and safety features, ranging from brake pedals to warning lights to airbags.  They also specify test procedures designed to assess new vehicles’ safety and compliance.

The new USDOT report consists of two components: (1) a review of the FMVSS “to identify which standards include an implicit or explicit reference to a human driver,” which the report’s authors call a driver reference scan; and (2) a review that evaluates the FMVSS against “13 different automated vehicle concepts, ranging from limited levels of automation . . . to highly automated, driverless concepts with innovative vehicle designs,” termed an automated vehicle concepts scan.  This post will address the driver reference scan, which dovetails nicely with my previous post on automated vehicles.
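To give a flavor of what a driver reference scan involves, here is a toy version of my own.  The Volpe Center’s actual review was a careful manual reading of the standards, not a keyword search, and the excerpts below are hypothetical paraphrases rather than regulatory text.

```python
import re

# Hypothetical paraphrases standing in for FMVSS provisions (not the actual regulatory language)
fmvss_excerpts = {
    "FMVSS 101 (Controls and Displays)": "Each control shall be located so that it is operable by the driver.",
    "FMVSS 111 (Rear Visibility)": "The mirror shall provide the driver a view to the rear of the vehicle.",
    "FMVSS 126 (Electronic Stability Control)": "The system shall augment the vehicle's directional stability.",
}

driver_pattern = re.compile(r"\bdriver(?:'s)?\b", re.IGNORECASE)

for standard, text in fmvss_excerpts.items():
    flag = "explicit driver reference" if driver_pattern.search(text) else "no explicit driver reference"
    print(f"{standard}: {flag}")
```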

As noted in that post, the FMVSS defines a “driver” as “the occupant of a motor vehicle seated immediately behind the steering control system.”  It is clear both from this definition and from other regulations that “driver” thus refers to a human driver.  (And again, as explained in my previous post, the NHTSA’s recent letter to Google did not change this regulation or redefine “driver” under the FMVSS, media reports to the contrary notwithstanding.)  Any FMVSS reference to a “driver” thus presents a regulatory compliance challenge for makers of truly self-driving cars, since such vehicles may not have a human driver–or, in some cases, even a human occupant.

» Read more

Who’s to Blame (Part 6): Potential Legal Solutions to the AWS Accountability Problem

The law abhors a vacuum.  So it is all but certain that, sooner or later, international law will come up with mechanisms for fixing the autonomous weapon system (AWS) accountability problem.  How might the current AWS accountability gap be filled?

The simplest solution—and the one advanced by Human Rights Watch (HRW) and the not-so-subtly-named Campaign to Stop Killer Robots (CSKR)—is to ban “fully autonomous” weapon systems completely.  As noted in the second entry in this series, the HRW defines such an AWS as one that can select and engage targets without specific orders from a human commander (that is, without human direction) and operate without real-time human supervision (that is, monitoring and control). One route to such a ban would be adding an AWS-specific protocol to the Convention on Certain Conventional Weapons (CCW), which covers incendiary weapons, landmines, and a few other categories of conventional (i.e., not nuclear, biological, or chemical) weapons. The signatories to the CCW held informal meetings on AWSs in May 2014 and April 2015, but it does not appear that the addition of an AWS protocol to the CCW is under formal consideration.

In any event, there is ample reason to question whether the CCW would be an effective vehicle for regulating AWSs. The current CCW contains few outright bans on the weapons it covers (the CCW protocol on incendiary weapons does not bar the napalming of enemy forces) and has no mechanisms whatsoever for verification or enforcement.  The CCW’s limited impact on landmines is illustrated by the fact that the International Campaign to Ban Landmines (which, incidentally, seriously needs to hire someone to design a new logo) was created nine years after the CCW’s protocol covering landmines went into effect.

Moreover, even an outright ban on “fully” autonomous weapons does not adequately account for the fact that weapon systems can have varying types and degrees of autonomy.  Serious legal risks would still accompany the deployment of AWSs with only limited autonomy, but those risks would not be covered by a ban on fully autonomous weapons.

A more balanced solution might require continuous human monitoring and adequate means of control whenever an AWS is deployed in combat, with a presumption of negligence (and therefore command responsibility) attaching to the commander responsible for monitoring and controlling an AWS that commits an illegal act.  That presumption could be overcome only if the commander shows that he or she exercised reasonable supervision and control over the AWS.  This would ensure that at least one human being would always have a strong legal incentive to supervise an AWS that is engaged in combat operations.

An even stronger form of command responsibility based on strict liability might seem tempting at first, but applying a strict liability standard to command responsibility for AWSs would be problematic because, as noted in the previous entry in this series, multiple officers in the chain of command may play a role in deciding whether, when, where, and how to deploy an AWS during a particular operation (to say nothing of the personnel responsible for designing and programming the AWS).  It would be difficult to fairly determine how far up (or down) the chain of command and how far back in time criminal responsibility should attach.


Much, much more can and will be said about each of the above topics in the coming weeks and months.  For now, here are a few recommendations for deeper discussions on the legal accountability issues surrounding AWSs:

  • Human Rights Watch, Mind the Gap: The Lack of Accountability for Killer Robots (2015)
  • International Committee of the Red Cross, Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects (2014)
  • Michael N. Schmitt & Jeffrey S. Thurnher, “Out of the Loop”: Autonomous Weapon Systems and the Law of Armed Conflict, 4 Harv. Nat’l Sec. J. 231 (2013)
  • Gary D. Solis, The Law of Armed Conflict: International Humanitarian Law in War (2015), chapters 10 (“Command Responsibility and Respondeat Superior”) and 16 (“The 1980 Certain Conventional Weapons Convention”)
  • U.S. Department of Defense Directive No. 3000.09 (“Autonomy in Weapon Systems”), issued Nov. 21, 2012
  • Wendell Wallach & Colin Allen, Framing Robot Arms Control, 15 Ethics and Information Technology 125 (2013)