Personal updates and the House SELF DRIVE Act

I’ll start this week with a couple of personal updates before moving on to the biggest A.I. policy news item of the week–although the news is not quite as exciting as some media headlines have made it sound.

Personal Item #1: New gig

A few weeks ago, I switched jobs and joined a new law firm–Littler Mendelson, P.C.  In addition to being the world’s largest labor and employment law firm, Littler has a Robotics, A.I., and Automation practice group that Garry Mathiason (a legend in both employment law and robotics law) started a few years back.  I’ve hit the ground running with both the firm and its practice group during the past few weeks, and my busy stretch will continue for a while.  I’ll try to post updates as often as I can, particularly when big A.I. law and policy news hits, but updates will likely be on the light side for the next several weeks.

Personal Item #2: O’Reilly A.I. Conference

Next week, I’ll be presenting at the O’Reilly A.I. Conference in San Francisco along with Danny Guillory, the head of Global Diversity and Inclusion at Autodesk.  Our presentation is titled “Building an Unbiased A.I.: End-to-end diversity and inclusion in AI development.”  If you’ll be at the conference, come check it out.

Personal Item #3: Drone Law Today

One last personal item–I made my second appearance on Steve Hogan’s Drone Law Today podcast.  Steve and I had a fascinating conversation on the possibility of legal personhood for A.I.–both how feasible personhood is now (not very) and how society will react if and when robots do eventually start walking amongst us.  This was one of the most fun and interesting conversations I’ve ever had, so check it out.

A.I. policy news: House passes SELF DRIVE Act

I’ll close with the big news item relating to A.I. policy–the U.S. House of Representatives’ passage of the SELF DRIVE Act.  The bill, as its title suggests, would clear the way for self-driving cars to hit the road without having to comply with NHTSA regulations that would otherwise present a major hurdle to the deployment of autonomous vehicles.

The Senate is also considering self-driving car legislation, and that legislation apparently differs from the House bill quite dramatically.  That means the two chambers will have to reconcile their respective bills in conference, and observers of American politics (and watchers of Schoolhouse Rock) know that the bill that emerges from conference may end up looking nothing like either of the original bills.  Passage of the Senate bill seems highly likely, although congressional gridlock means it’s still possible the bill will not come up for a vote this year.  We’ll see what (if anything) emerges from the Senate, at which point we’ll hopefully have a better sense of what the final law will look like.

How to Regulate Artificial Intelligence Without Really Trying

This past Friday, the New York Times published an op-ed by Oren Etzioni, head of the Allen Institute for Artificial Intelligence.  The op-ed is titled “How to Regulate Artificial Intelligence,” but the piece actually sheds little light on “how” legal systems could go about regulating artificial intelligence.  Instead, it articulates a few specific “rules” without offering any suggestions as to how those rules could be implemented and enforced.

Specifically, Etzioni proposes three laws for A.I. regulation, inspired in number (if not in content) by Isaac Asimov’s famous Three Laws of Robotics.  Here’s Etzioni’s trio:

  1. “[A]n A.I. system must be subject to the full gamut of laws that apply to its human operator.”  For example, “[w]e don’t want autonomous vehicles that drive through red lights” or A.I. systems that “engage in cyberbullying, stock manipulation or terrorist threats.”
  2. “[A]n A.I. system must clearly disclose that it is not human.”
  3. “[A]n A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.”

These rules do a nice job of laying out the things that we don’t want A.I. systems to be doing.  But that’s the easy part of deciding how to regulate A.I.  The harder part is figuring out who or what should be held legally responsible when A.I. systems do those things that we don’t want them to be doing.  Should we hold the designer(s) of the A.I. system accountable?  Or the immediate operator?  Or maybe the system itself?  No one will argue with the point that an autonomous car shouldn’t run red lights.  It’s less clear who should be held responsible when it does.

Etzioni’s op-ed takes no discernible position on these issues.  The first rule seems to imply that the A.I. system itself should be held responsible.  But since A.I. systems are not legal persons, that’s a legal impossibility at present.  And other portions of Etzioni’s op-ed seem to suggest either that the operator or the designer should be held responsible.  The result is a piece that punts on the real challenge of figuring out “How to Regulate Artificial Intelligence.”

I don’t have any issue with where Etzioni wants us to go.  I’m just not sure how he thinks we’re supposed to get there.