How to Regulate Artificial Intelligence Without Really Trying
This past Friday, the New York Times published an op-ed by Oren Etzioni, head of the Allen Institute for Artificial Intelligence. The op-ed’s title is “How to Regulate Artificial Intelligence,” but the piece actually sheds little light on “how” legal systems could go about regulating artificial intelligence. Instead, it articulates a few specific “rules” without suggesting how those rules could be implemented or enforced.
Specifically, Etzioni proposes three laws for AI regulation, inspired in number (if not in content) by Isaac Asimov’s famous Three Laws of Robotics. Here’s Etzioni’s trio:
- “[A]n A.I. system must be subject to the full gamut of laws that apply to its human operator.” For example, “[w]e don’t want autonomous vehicles that drive through red lights” or AI systems that “engage in cyberbullying, stock manipulation or terrorist threats.”
- “[A]n A.I. system must clearly disclose that it is not human.”
- “[A]n A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.”
These rules do a nice job of identifying what we don’t want AI systems to do. But that’s the easy part of deciding how to regulate AI. The harder part is figuring out who or what should be held legally responsible when AI systems do those things anyway. Should we hold the system’s designer(s) accountable? The immediate operator? Or maybe the system itself? No one disputes that an autonomous car shouldn’t run red lights. It’s far less clear who should be held responsible when one does.
Etzioni’s op-ed takes no discernible position on these questions. The first rule seems to imply that the AI system itself should be held responsible. But because AI systems are not legal persons, that is a legal impossibility at present. Other portions of the op-ed suggest instead that the operator or the designer should bear responsibility. The result is a piece that punts on the real challenge of figuring out “How to Regulate Artificial Intelligence.”
I don’t have any issue with where Etzioni wants us to go. I’m just not sure how he thinks we’re supposed to get there.