The Return of the Blog: WeRobot 2017


After a long layoff, Law and AI returns with some brief takes on the 6th annual WeRobot Conference, which was held this past weekend at Yale Law School’s Information Society Project.  If you want a true blow-by-blow account of the proceedings, check out Amanda Levendowski’s Twitter feed.  Consider the below a summary of things that piqued my interest, which will not necessarily be the same as the things that prove to be the most important technical or policy takeaways from the conference.

Luisa Scarcella and Michaela Georgina Lexer: The effects of artificial intelligence on labor markets – A critical analysis of solution models from a tax law and social security law perspective

(Paper, Presentation)

Ms. Scarcella and Ms. Lexer presented perhaps the most topically unique paper of the conference.  Their paper addresses the potential macroeconomic, social, and government-finance impacts of automation.

In their live presentation, both authors freely admitted that they (like everyone else) cannot say with any confidence what sorts of changes AI will bring about in our economy and society.  But in some of the scenarios they traced out, the authors suggest that large-scale automation of jobs could eviscerate the prevailing models for government finance and social security.  If AI displaces human workers en masse, it would undercut the wage-based tax model that most governments depend on for revenue, and which citizens in turn depend on for social security.  The authors point to the recent Finnish experiment with an unconditional basic income (perhaps more commonly called a universal basic income in the US) as a possible alternative model, along with a “robot tax” designed to largely replicate the model of today’s wage-based tax regimes.
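To put rough numbers on the problem, here is a stylized back-of-the-envelope sketch of my own (the figures are invented for illustration and do not come from the paper): a revenue-neutral “robot tax” would need to recover roughly the tax that the displaced wage would have generated.

```python
# Stylized illustration of the revenue gap described above.
# All figures are hypothetical; none come from the Scarcella/Lexer paper.

displaced_wage = 40_000.0   # annual wage of a job lost to automation
wage_tax_rate = 0.30        # combined income tax + social-security contributions

# Revenue the state loses when the wage disappears from the tax base:
lost_revenue = displaced_wage * wage_tax_rate

# A "robot tax" that replicates today's wage-based model would need to levy
# roughly this amount per displaced job to keep social programs whole.
print(f"Lost revenue per displaced job: {lost_revenue:,.0f}/year")
print(f"Revenue-neutral robot tax:      {lost_revenue:,.0f}/year per displaced job")
```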

Kristen Thomasen: Feminist Perspectives on Drone Regulation

(Paper, Presentation)

Ms. Thomasen examines the implications of drone use and regulation from a critical feminist perspective, suggesting that the regulatory approach of focusing on the physical drone “overlook[s] the broader cultural and social practices associated with drone technologies” and “obfuscates the ways in which drone technology can reproduce, enhance, alter, or ameliorate existing social inequalities. . . .”  As Ms. Thomasen put it during the Q&A, the question shouldn’t be “here’s a drone, now how do we regulate it?”  Instead, it should be “here’s a social issue, now can we use technology to address it?”  Regulation should come only after those questions are taken into account.

Marc Canellas, Rachel Haga, et al: Framing Human-Automation Regulation: A New Modus Operandi from Cognitive Engineering

(Paper, Presentation)

This paper aims to apply insights from cognitive engineering (i.e., the study of how humans interact with complex, automated technologies) to tech policy issues.  The authors framed their analysis around five issues pertaining to human-automation systems: complexity, definitions, transparency, accountability, and safety.  The authors suggest that asking questions that go to these five issues could serve as a “starting point for governance of any type of human-automation system.”  They also strongly encourage greater cross-disciplinary cooperation and coordination in creating legal and ethical frameworks for AI.

Tracy Pearl: Fast & Furious: The Misregulation of Driverless Cars

(Paper, Presentation)

Professor Pearl’s paper and presentation focus on how current state regulations for autonomous and semi-autonomous vehicles often seem to misperceive the nature of the technology and misunderstand the role of the human “driver.”  The paper’s discussion of these issues is excellent, but what has really stayed with me was her insight about how public and regulatory responses to semi-autonomous vehicles (like Tesla’s Autopilot) could stifle progress toward full autonomy.  Professor Pearl noted that a semi-autonomous vehicle actually might be less safe than a vehicle controlled by an attentive human driver, simply because it is difficult for a human driver to maintain focus on the task of driving when it’s the vehicle itself that is actually doing most of the driving.  Consequently, the human ostensibly in control of a semi-autonomous car might not be prepared to take over when the vehicle faces a situation it was not designed to handle.  Pearl suggests that accidents involving vehicles with such low-level automation might lead to restrictive regulations that keep us from achieving high-level automation and/or that drive consumers away from driverless technologies.

Lightning Round Panel 3 (Presentation)

The next-to-last session was a “lightning round” presentation featuring Garry Matthiason, Amanda Levendowski, Lauren Scholz, Kevin Miller, and yours truly.  I will just take a moment to highlight my fellow panelists’ papers:

Amanda Levendowski: How Copyright Law Creates Biased Artificial Intelligence (Paper)

This paper-in-progress puts a unique spin on the still-not-discussed-often-enough issue of bias and underrepresentation in AI.  Ms. Levendowski starts with the fairly inarguable premise that today’s AI systems “learn” by collecting data from available human sources (be they photographs, emails, or the like).  She also notes that there has already been quite a bit of attention paid to the general problem of bias in AI.  Her insightful thesis is that the fundamental structure of copyright law, which renders a great deal of data unavailable for purposes of “training” AI systems, greatly exacerbates the bias problem because the most common sources of readily available data are demonstrably biased.  The paper as presented is just an abstract and an intro, but it promises to be a fascinating finished product.
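To illustrate the mechanism with a toy example of my own (the corpus and flags below are invented, not drawn from the paper): when a copyright filter removes most contemporary works from the pool of training data, whatever survives the filter ends up defining the training distribution.

```python
# Toy illustration of the mechanism Levendowski describes.  The corpus and
# copyright flags are invented; the point is only that a copyright filter can
# skew which sources an AI system is trained on.
corpus = [
    {"source": "public_domain_books_pre_1923",   "in_copyright": False},
    {"source": "publicly_released_email_corpus", "in_copyright": False},
    {"source": "contemporary_fiction",           "in_copyright": True},
    {"source": "modern_news_archives",           "in_copyright": True},
]

# Only low-friction (non-copyrighted) sources survive as training data:
training_sources = [doc["source"] for doc in corpus if not doc["in_copyright"]]
print(training_sources)
# -> ['public_domain_books_pre_1923', 'publicly_released_email_corpus']
# The model "learns" only from these older or incidental sources, so whatever
# biases they contain are inherited by the trained system.
```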

Kevin Miller: A New Framework for Robot Privacy (Paper)

Mr. Miller’s paper focuses on the fact that robots will greatly complicate privacy issues in our society.  Unlike threats to online privacy, which are what most tech-oriented people think about when they hear the word “privacy,” robots have the capacity to affect a person’s physical privacy, whether by encroaching on a person’s personal space or by using sensors to examine a person’s physical characteristics.  These issues are further complicated by the simple fact that different people can have very different standards when it comes to the level of privacy they expect.  Mr. Miller’s paper examines these issues and proposes a technical scheme that would attempt to address these multifaceted privacy concerns.
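Purely as a hypothetical illustration of the general idea (this is emphatically not the scheme in Mr. Miller’s paper), one could imagine robots consulting machine-readable, per-person privacy preferences before approaching or sensing someone:

```python
# Hypothetical sketch only -- NOT the scheme proposed in Mr. Miller's paper,
# just an illustration of machine-readable, per-person privacy preferences
# that a robot might consult before acting.
from dataclasses import dataclass

@dataclass
class PrivacyPreference:
    min_approach_m: float   # personal-space radius the robot must respect
    allow_camera: bool      # may the robot record video of this person?
    allow_biometrics: bool  # may the robot scan physical characteristics?

def check_privacy(pref: PrivacyPreference, distance_m: float) -> dict:
    """Flag which of a person's declared privacy limits the robot would breach."""
    return {
        "inside_personal_space": distance_m < pref.min_approach_m,
        "video_recording_barred": not pref.allow_camera,
        "biometric_scan_barred": not pref.allow_biometrics,
    }

# A privacy-sensitive bystander who keeps a two-meter personal-space radius:
bystander = PrivacyPreference(min_approach_m=2.0, allow_camera=False,
                              allow_biometrics=False)
print(check_privacy(bystander, distance_m=1.5))
# -> {'inside_personal_space': True, 'video_recording_barred': True,
#     'biometric_scan_barred': True}
```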

Lauren Henry Scholz: Algorithmic Contracts (Paper)

In one of my favorite papers of the conference, Ms. Scholz examines contracts where the parties’ obligations are determined (at least in part) through the use of algorithms.  She proposes that the framework of agency law would serve as a useful model for determining when parties should be bound by the terms of an algorithmic agreement; that is, under most circumstances, an algorithm could be viewed as an “agent” of the contracting party who utilized it.  Such a framework would both effectuate the intent of the contracting parties and help ensure that algorithmic contracts would generally be enforceable.
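To make the idea concrete (my gloss, not an example from the paper): think of a sales contract whose price term is supplied at formation by an algorithm the seller has authorized, much as a human agent authorized to negotiate would supply it.

```python
# Hypothetical illustration of an "algorithmic contract" term.  On an
# agency-law framing, the seller has delegated price-setting authority to the
# algorithm, so the price it fixes binds the seller like a term negotiated by
# a human agent.  The pricing logic and figures are invented.

def pricing_agent(base_price: float, inventory: int, demand_index: float) -> float:
    """Algorithm the seller has authorized to fix the contract's price term."""
    scarcity_premium = 1.2 if inventory < 10 else 1.0
    return round(base_price * demand_index * scarcity_premium, 2)

# At contract formation, the algorithm -- acting within its delegated
# authority -- supplies the price term.
price_term = pricing_agent(base_price=100.0, inventory=5, demand_index=1.1)
print(f"Price term set by the algorithmic 'agent': ${price_term}")  # $132.0
```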


I will come back in a day or two with an analysis of the last two papers, which happen to be the ones that dovetail most closely with my own research.  Stay tuned!