Applying Old Rules to New Tools (and other updates)
My latest scholarly article, this one co-authored with Littler shareholders Marko Mrkonich and Allan King, is now available on SSRN and will be published in the South Carolina Law Review this winter. Here is the abstract:
Companies, policymakers, and scholars alike are paying increasing attention to algorithmic recruitment and hiring tools that leverage artificial intelligence, machine learning, and Big Data. To their advocates, algorithmic employee selection processes can be more effective at choosing the strongest candidates, increasing diversity, and reducing the influence of human prejudices. Many observers, however, express concern about other forms of bias that can infect algorithmic selection procedures, leading to fears that algorithms may create unintended discriminatory effects or mask more deliberate forms of discrimination. This article represents the most comprehensive analysis to date of the legal, ethical, and practical challenges associated with using these tools.
The article begins with background on both the nature of algorithmic selection tools and the legal backdrop of antidiscrimination laws. It then breaks down the key reasons why employers, courts, and policymakers will struggle to fit these tools within the existing legal framework. These challenges include algorithmic tools’ reliance on correlation; the opacity of the models that many algorithmic selection tools generate; and the difficulty of fitting algorithmic tools into a legal framework developed for the employee selection methods of the mid-20th century.
The article concludes with a comprehensive proposed legal framework that weaves together the usually separate analyses of disparate treatment and disparate impact. It takes the fundamental principles of antidiscrimination laws, and the landmark Supreme Court cases interpreting them, and articulates a set of standards that address the unique challenges posed by algorithmic tools. The proposed framework (1) uses tests of reasonableness in disparate impact analysis in place of tests of statistical significance, which will become less and less meaningful in the age of Big Data; (2) requires employers to satisfy a modified form of the business necessity defense when an algorithmic tool has a disparate impact on a protected group; and (3) allows employers to use novel machine-learning techniques to prevent disparate impacts from arising without exposing themselves to disparate treatment liability.
While I was wrapping up that article, a new book came to my attention: Law As Data, a compilation of essays edited by Michael Livermore and Daniel Rockmore that looks at various applications of data analysis in law. I haven’t read the entire volume, but the introduction explaining its underlying philosophy troubled me; it is probably the most visible work yet to posit that law is nothing more than a set of formal rules and logic, essentially a “problem” that AI could theoretically “solve” through computation (as with checkers), or at least something at which machines will soon be able to “outperform” humans (as with chess and Go). But there are many moral and philosophical objections to this view of what law is. My next substantive post will explore those issues in more depth.
In the meantime, the most significant AI-related legislation of the past few months was the enactment of a California law restricting the use of deepfakes in political messaging in the run-up to elections. I appeared on KPCC’s AirTalk to discuss the bill both before and after it was enacted, first alongside Eugene Volokh and then alongside Erwin Chemerinsky. Short version of my take: the new law might have some deterrent effect in California state and local elections, but I think the social media platforms on which deepfakes are distributed are better positioned to “regulate” their spread than individual state or even national governments are. The question is whether they will do so.