WeRobot 2017: Fault, liability, and regulation


The last panel of WeRobot 2017 produced what were perhaps my two favorite papers presented at the conference: “An Education Theory of Fault for Autonomous Systems” by Bill Smart and Cindy Grimm of Oregon State University’s Robotics Program and Woodrow Hartzog of Samford University’s Cumberland School of Law, and “Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence,” by Michael Guihot, Anne Matthew, and Nicolas Suzor of the Queensland University of Technology.

It’s not surprising that both of these papers made an impression on me because each dealt with topics near and dear to my nerdy heart.  “An Education Theory of Fault” addresses the thorny issue of how to determine culpability and responsibility when an autonomous system causes harm, in light of the inherent difficulty in predicting how such systems will operate.  “Nudging Robots” deals with the equally challenging issue of how to design a regulatory system that can manage the risks associated with AI.  Not incidentally, those are perhaps the two issues to which I have devoted the most attention in my own writings (both blog and scholarly).  And these two papers represent some of the strongest analysis I have seen on those issues.

An Education Theory of Fault

“An Education Theory of Fault” identifies three categories of stakeholders in the development and use of autonomous systems: developers of the automated technology, procurers who adapt the technology for particular applications, and end users.  The authors then identify “four specific and foreseeable education-failure points in the creation, deployment, and use of automated systems which contribute to harm caused by the unpredictability of autonomous systems”:

  • Syntactic failure
    • Definition: Failure of an automated system’s sensors to accurately identify real-world objects.
    • Point/Source of Failure: This is almost entirely within the control of the developers.
    • Education: Developers must communicate the system’s syntactic limitations to procurers.
  • Semantic failure
    • Definition: Failure to accurately translate human-articulated intent into software code.
    • Point/Source of Failure: This arises because human language is inherently less precise than the machine code that an autonomous system must use in its operations, and thus can flow either upstream or downstream and conceivably could occur anywhere on the developer/procurer/end user pipeline.  As a practical matter, however, the paper implies that the key communications are between developers and procurers.
    • Education: Downstream procurers and end users must accurately communicate (using human language) their requirements to upstream developers and procurers.  Those upstream entities must, in turn, accurately communicate how they have translated those human-language requirements into algorithms and provide downstream users with the necessary vocabulary for using the system.  (A brief code sketch after this list illustrates this kind of failure.)
  • Testing Failure
    • Definition: Failure to include a necessary syntactic or semantic test in the test set.  Also occurs when a test itself is invalid or not conducted appropriately.
    • Point/Source of Failure: Communications between procurers and developers.
    • Education: As with semantic failures, procurers must articulate desired use cases to developers, who must then ensure that the test set adequately covers the range of desired uses.
  • Warning Failure
    • Definition: Failure to make end users aware of potential problems and limitations.
    • Point/Source of Failure: Can occur in communications from developers to procurers, or from procurers to end users.
    • Education: Procurers must make the system’s limitations and acceptable uses clear to end users and warn them of avoidable dangers.

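To make the semantic-failure category more concrete, here is a minimal sketch of how an imprecise human-language requirement gets hardened into code.  Everything in it is hypothetical rather than drawn from the paper: the requirement (“stop when the robot gets close to an obstacle”), the 0.5-meter threshold, and the names are illustrative assumptions.

    # Hypothetical illustration of a semantic failure: the human-language
    # requirement "stop when the robot gets close to an obstacle" must be
    # translated into precise code, and that translation embeds an assumption
    # (here, a 0.5 m threshold for "close") that has to be communicated
    # downstream to procurers and end users.

    STOP_DISTANCE_M = 0.5  # the developer's interpretation of "close" -- an assumption


    def should_stop(measured_distance_m: float) -> bool:
        """Return True if the robot should stop, under the developer's interpretation."""
        return measured_distance_m <= STOP_DISTANCE_M


    if __name__ == "__main__":
        # A procurer deploying the system around fast-moving forklifts might have
        # meant "close" to be 2.0 m; without that education step, the system
        # behaves "correctly" by its own definition and can still cause harm.
        print(should_stop(1.2))  # False -- the robot keeps moving at 1.2 m
        print(should_stop(0.4))  # True  -- it stops only inside 0.5 m

Under the paper’s framework, whether fault for a resulting harm would land on the developer or the procurer turns on whether that interpretation was documented and communicated as part of the education process.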
The idea is that each type of failure should be avoidable as long as the appropriate education/communication takes place and the appropriate tests are conducted.  Consequently, culpability can be assigned to the entity (developer, procurer, or end user) who failed to do his or her part in the education and testing process.  The framework suggests that the heaviest responsibilities fall upon developers, although procurers can play a key role in avoiding each type of failure as well.  This paper does a great job of crystallizing the types of failures that can occur with autonomous systems and provides a solid framework that can be used to analyze those failures and assign legal responsibility for resulting harms.

Nudging Robots

This paper covers the surprisingly-still-largely-unplowed ground of how to regulate autonomous systems.  As the authors note, regulation of AI will be difficult given the lack of expertise among national and international regulatory bodies, the remarkable power and scale of the key industry actors in AI (Google, Facebook, Amazon, Microsoft, etc.), and the unique, decentralized ways in which AI development can occur.  The expertise gap arises with all emerging technologies, but it may be especially acute for AI given the advent of machine learning, which can make the inner workings of an AI system opaque even to the system’s creators.

The authors engage in a comprehensive examination of various theories of regulation that have been posited over the past half-century.  If you want an overview of scholarship on regulation in general, this paper is a pretty good primer.

In the end, the authors suggest that effective regulation of AI will require adaptability and flexibility on a scale not typically associated with government regulation.  The alternatives are doing nothing about the risks associated with AI (a worrying prospect given the power and promise of the technology) or implementing a rigid regulatory system that would deter the development of beneficial AI.  To overcome these barriers, the authors suggest a system where regulators (whether state or non-state) “nudge” the industry in a risk-optimal direction, rather than imposing a top-down form of regulation that could be ineffective, innovation-stifling, or both.  They point to industry transparency and broad stakeholder participation as key ingredients of effective AI regulation.


In both papers, the authors stress that they are not suggesting the final answer to the liability or regulatory challenges associated with AI, and explicitly recognize that we sorely need additional research and scholarship in these areas.  But these papers are fine contributions to that scholarship and excellent building blocks for further research.


The presentations and panel discussion for these papers can be viewed here.
