Applying Old Rules to New Tools (and other updates)

My latest scholarly article, this one co-authored with Littler shareholders Marko Mrkonich and Allan King, is now available on SSRN and will be published in the South Carolina Law Review this winter. Here is the abstract:

Companies, policymakers, and scholars alike are paying increasing attention to algorithmic recruitment and hiring tools that leverage artificial intelligence, machine learning, and Big Data. To their advocates, algorithmic employee selection processes can be more effective in choosing the strongest candidates, increasing diversity, and reducing the influence of human prejudices. Many observers, however, express concern about other forms of bias that can infect algorithmic selection procedures, leading to fears regarding the potential for algorithms to create unintended discriminatory effects or mask more deliberate forms of discrimination. This article represents the most comprehensive analysis to date of the legal, ethical, and practical challenges associated with using these tools.

The article begins with background on both the nature of algorithmic selection tools and the legal backdrop of antidiscrimination laws. It then breaks down the key reasons why employers, courts, and policymakers will struggle to fit these tools within the existing legal framework. These challenges include algorithmic tools’ reliance on correlation; the opacity of models generated by many algorithmic selection tools; and the difficulty of fitting algorithmic tools into a legal framework developed for the employee selection tools of the mid-20th century.

The article concludes with a comprehensive proposed legal framework that weaves together the usually separate analyses of disparate treatment and disparate impact. It takes the fundamental principles of antidiscrimination laws, and the landmark Supreme Court cases interpreting them, and articulates a set of standards that address the unique challenges posed by algorithmic tools. The proposed framework (1) uses tests of reasonableness in disparate impact analysis in place of tests of statistical significance, which will become less and less meaningful in the age of Big Data; (2) requires employers to satisfy a modified form of the business necessity defense when an algorithmic tool has a disparate impact on a protected group; and (3) allows employers to use novel machine-learning techniques to prevent disparate impacts from arising without exposing themselves to disparate treatment liability.
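To see why point (1) matters, consider what happens to conventional significance testing at Big Data scale: with millions of applicants, even a trivial difference in selection rates clears the statistical-significance bar, which tells a court very little about whether the disparity is meaningful. Here is a minimal sketch of that dynamic (the numbers are hypothetical and the code is purely illustrative; it is not drawn from the article):

```python
# Illustrative only: at Big Data sample sizes, statistical significance
# stops tracking practical significance. All numbers are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(hired_a, n_a, hired_b, n_b):
    """Two-sided p-value for a difference between two selection rates."""
    p_a, p_b = hired_a / n_a, hired_b / n_b
    pooled = (hired_a + hired_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return p_a, p_b, 2 * (1 - NormalDist().cdf(abs(z)))

# Selection rates of 10.0% vs. 9.9% across two million applicants per group.
rate_a, rate_b, p_value = two_proportion_p_value(200_000, 2_000_000, 198_000, 2_000_000)
impact_ratio = rate_b / rate_a  # the EEOC's "four-fifths rule" compares this to 0.80

print(f"p-value: {p_value:.1e}")            # well under 0.05: "statistically significant"
print(f"impact ratio: {impact_ratio:.2f}")  # ~0.99: a practically negligible disparity
```

The reasonableness inquiry the article proposes is, of course, a legal standard rather than a formula; the sketch simply shows why raw p-values become a poor proxy for it as datasets grow.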

While I was wrapping up that article, a new book came to my attention: Law As Data, a compilation of essays edited by Michael Livermore and Daniel Rockmore that looks at various applications of data analysis in law. I haven’t read the entire volume, but the introduction explaining the philosophy that underlies it troubled me; it is probably the most visible work yet to posit that law is nothing more than a set of formal rules and logic, essentially a “problem” that AI could theoretically “solve” through computation (as with checkers), or at least something at which machines will soon be able to “outperform” humans (as with chess and Go). But there are many moral and philosophical objections to this view of what law is. My next substantive post will explore those issues more deeply.

In the meantime, the most significant AI-related legislation of the past few months was the enactment of a California law restricting the use of deepfakes in political messaging in the run-up to elections. I appeared on KPCC’s AirTalk to discuss the bill both before and after it was enacted, first alongside Eugene Volokh and then alongside Erwin Chemerinsky. The short version of my take: the new law might have some deterrent effect in the context of California state and local elections, but I think the social media platforms on which deepfakes are distributed are better positioned to “regulate” their spread than individual state or even national governments are. The question is whether they will do so.

Revisiting an old series of posts: Algorithmic entities

A couple of years ago, I wrote a series of blog posts challenging Shawn Bayern’s theory that current business-entity laws allow for the creation of “zero-member LLCs” and similar entity structures under which an AI system can be given the functional equivalent of legal personhood. I wove those posts together with my “Digital Analogues” series, which discusses the legal frameworks that I think could be applied to autonomous systems, to form the core of an article that the Nevada Law Journal published last year.

Bayern responded to my critique in an article posted a couple of weeks back and, in the interest of fairness and completeness, I am linking to that article here. Unsurprisingly, Bayern takes issue with my criticisms of his arguments. His new article includes solid arguments against my construction of the individual statutes and the model law his theory relies on, although I still think that a holistic reading of most LLC statutes (particularly New York’s, which was the focus of his first article on the subject) would lead a court to conclude that an LLC ceases to exist once it has no members and no plan to give it new members in the immediate future. Bayern also points out, as I acknowledged in the original blog post, that New York law may offer other mechanisms besides a memberless LLC that could provide even-more-difficult-to-stop routes to setting up an AI-controlled legal entity, such as a cross-ownership structure in which two AI-controlled LLCs are formed, each having the other as its sole member. Stopping potential structures for autonomous entities could therefore end up being a game of whack-a-mole. To be clear, Bayern does not suggest that algorithmic entities would be a Good Thing; he just thinks they would be hard to stop even under existing statutes.

I won’t respond further in substance, both because I think the argument has largely played itself out and because there is not much I can write now that would further advance my main objective in engaging this issue in the first place. As I said in the final post on Bayern’s theory, a major (and really, the major) reason I engaged on this issue is my fear that an unscrupulous entrepreneur will read his articles and decide to try to form an AI-controlled entity. I wrapped up those posts by saying, “Hopefully, a person who comes across Bayern’s article will now come across these posts as well and realize that following Bayern’s roadmap to AI personhood would entail running into more than a few roadblocks.” That’s still my position–business entity laws were written under the assumption that humans will be pulling the strings, and examining the full text and context of a law will, I think, always reveal provisions that undercut (often quite severely) an argument that the law can be used to imbue an AI system with personhood. Hopefully, if and when someone tries to set up an “autonomous entity,” my articles will provide an alternative legal roadmap that courts and harmed parties can use to challenge (successfully, I hope) its existence.

The Intelligence is Artificial. The Bias Isn’t.

In 2002, the Wilmington, Delaware police department made national news when it decided to employ a new technique – “jump-out squads.” The police would drive around the city in vans, jump out in high-crime areas, and take pictures of young people. The officers engaged in these impromptu photo sessions to create a database of future criminals.

If this plan sounds offensive, imagine if it were aided by facial recognition technology or other forms of artificial intelligence. 

Now, seventeen years after the Wilmington police used vans and Polaroids, police have artificial intelligence at their disposal. Police departments use AI in a variety of ways and for a variety of purposes. Crime forecasting – also known as predictive policing – has been used by police in New York, Los Angeles, and Chicago. Video and image analysis are used by many departments. While AI might make law enforcement easier, the legal profession needs to keep a careful eye on these tools to make sure that AI doesn’t compound the disparities that already exist in criminal justice and other areas of the legal system.

AI and bias – Or, How AI Misses the Picture

Facial recognition and other types of AI may seem innocuous.  After all, every human has the same basic body and face.  But when AI technologies are used to classify people of different races, trouble often follows.

» Read more

The OECD Principles, the AI Initiative Act, and introducing a new guest contributor

There were a couple significant developments in the AI policy world this week. First, the Organization for Economic Co-operation and Development (OECD) adopted and published its “Principles on AI.” That same day, a bipartisan trio of Senators introduced the Artificial Intelligence Initiative Act (AI-IA) (link to PDF of bill), which would establish a national AI strategy in the United States comparable to those adopted by Germany, Japan, France, South Korea, and China.

» Read more

A Look at Law & AI in 2018

AI was busy in 2018. With the year coming to a close, let’s look at three important developments in law and AI, and consider what they might imply for the coming year.

The Regulation Debate

Perhaps the biggest issue facing law and AI can be broadly put as “regulation.”  More precisely, will governments regulate AI, and if so, how? This overarching question permeates the field and touches many different specific issues.

The United States government has been reluctant to regulate AI.  Last month, at the FCC’s “Forum on Artificial Intelligence and Machine Learning,” FCC Chairman Ajit Pai stated that the government should exercise “regulatory humility” when dealing with AI.  In other words, a hands-off approach. The reason, he said, is that “early [regulatory] intervention can forestall or even foreclose certain paths to innovation.”

» Read more

Quo vadis, AI?

 

For years, both the media and the business world have been captivated by the seemingly breathtaking pace of progress in artificial intelligence. It’s been 21 years since Deep Blue beat Kasparov, and more than 7 years since Watson mopped the floor with Ken Jennings and Brad Rutter on Jeopardy. The string of impressive advances has only seemed to accelerate since then, from the increasing availability of autonomous features in vehicles to rapid improvements in computer translation and promised breakthroughs in medicine and law. The notion that AI is going to revolutionize every aspect of our lives took on the characteristics of gospel in business and tech journals.

But another trend has been slowly building in the background–namely, instances where AI has failed (sometimes quite spectacularly) to live up to its billing. In 2016, some companies were predicting that fully autonomous cars would be available within 4 years. Today, I get the sense that if you set the over/under for the arrival of fully autonomous vehicles at 4 years from now, many, if not most, industry watchers would take the “over” in a heartbeat. This is in part due to regulatory hurdles, no doubt, but a substantial part of it is also that the technology just isn’t “there” yet, particularly given the need to integrate AVs into a transportation system dominated by unpredictable human drivers. The early returns on the widely touted promise of an AI-powered revolution in cancer treatment are no better.

These are not the first examples of technology failing to live up to its hype, of course. AI itself has gone through several hype cycles, with “AI winters” bludgeoning the AI industry and all but ending funding for AI research in both the mid-1970s and the late 1980s. In each instance, the winters were preceded by periods of overheated investment in the AI industry and overheated predictions about the arrival of human-level intelligence.

» Read more

Court Treatment Of Artificial Intelligence: Predictive Coding

This post is the first in a planned series about how courts treat artificial intelligence (AI). Advances in AI seemingly happen on a daily basis. AI pioneer Andrew Ng is fond of saying that AI “is the new electricity.” Earlier this year, consulting firm McKinsey & Company estimated that AI could create several trillion dollars in value for businesses annually. There is little doubt AI is becoming pervasive.

Yet court opinions involving AI are relatively sparse. With the rapid growth of AI, courts increasingly will be called upon to adjudicate related issues. Thus, the time is ripe for discussing how courts treat AI.

We will start with “predictive coding.”

What Is Predictive Coding?

Predictive coding–also known as “computer-assisted coding” or “technology-assisted review”–is the area where courts most often deal with AI. In this previous post (well worth reading), Matt discussed predictive coding in the context of whether it will complement human attorneys or replace them.

So what exactly is it? Broadly speaking, predictive coding is an AI application that helps lawyers review records in litigation. Parties to litigation must produce to their opponents reasonably available documents, including electronically stored information (ESI), that are otherwise discoverable. In complex cases, the amount of potentially relevant ESI can exceed what any human could manually review. Complicating matters, parties often disagree about which ESI records should be produced and how to find them.

Enter AI. In predictive coding, knowledgeable attorneys first review a small sample of the universe of records and label each record in the sample. For instance, attorneys might label whether the record is responsive to a discovery request, and whether it is covered by attorney-client privilege. The next step is where AI shines: Given a sufficient sample set labeled by attorneys, predictive coding uses AI to predict the appropriate labels for the remaining universe of records. And it can do so with great accuracy.
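Under the hood, that workflow is essentially supervised text classification. The sketch below shows the basic pattern using generic open-source tools (scikit-learn) with made-up documents and labels; commercial predictive-coding products use their own, typically more sophisticated, models and review protocols:

```python
# Minimal sketch of the predictive-coding workflow described above:
# attorneys label a small sample, a model predicts labels for the rest.
# Illustrative only; the documents and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical attorney-labeled seed set: 1 = responsive, 0 = not responsive.
seed_docs = [
    "Q3 pricing agreement with distributor attached for signature",
    "Lunch order for the team offsite next Friday",
    "Revised indemnification clause per opposing counsel's markup",
    "Fantasy football league standings, week 9",
]
seed_labels = [1, 0, 1, 0]

# Train a simple text classifier on the attorney-labeled sample.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(seed_docs, seed_labels)

# Score the remaining (unlabeled) universe of records.
unreviewed = [
    "Draft amendment to the distribution agreement",
    "Reminder: parking garage closed this weekend",
]
for doc, prob in zip(unreviewed, model.predict_proba(unreviewed)[:, 1]):
    print(f"{prob:.2f}  {doc}")  # estimated probability the record is responsive
```

In real matters the labeled seed set runs to hundreds or thousands of records, and many tools iterate, asking the attorneys to label the documents the model is least certain about before re-scoring the collection.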

But predictive coding is not perfect. In some cases, mistakes have caused significant numbers of responsive documents to be missed. Further, the exact manner of implementing predictive coding varies by vendor. It is not surprising, then, that disagreements arise about the propriety and parameters of predictive coding.
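How do parties know whether responsive documents were missed? One generic quality check (a sketch with hypothetical numbers, not a protocol drawn from the cases discussed below) is to have attorneys review a random sample of the documents the model coded as non-responsive and extrapolate from what they find:

```python
# Generic validation sketch: sample the documents the model coded
# non-responsive and estimate how many responsive records it missed.
# All numbers are hypothetical.
discard_pile = 500_000        # records the model predicted to be non-responsive
sample_size = 1_000           # random sample pulled for manual attorney review
responsive_in_sample = 12     # sampled records that turned out to be responsive

sample_miss_rate = responsive_in_sample / sample_size
estimated_missed = discard_pile * sample_miss_rate

print(f"estimated miss rate: {sample_miss_rate:.1%}")
print(f"estimated responsive records left behind: {estimated_missed:,.0f}")
```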

This post serves as a high-level introduction to predictive coding. This is an evolving topic, and in future posts, I plan to provide updates and dive deeper into specific subtopics where appropriate.

When Do Courts Allow Predictive Coding?

A federal magistrate judge in New York (now senior counsel at an international law firm), Andrew J. Peck, paved the way for predictive coding in litigation. He authored multiple court opinions on the topic, starting with his seminal decision in Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012). That case explicitly “recognize[d] that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases.” Id. at 183.

Predictive coding got another boost in a tax case in 2014, when a court rejected a party’s argument that predictive coding is “unproven technology.”  Dynamo Holdings Ltd. P’ship v. Comm’r of Internal Revenue, 143 T.C. 183, 2014 WL 4636526 (2014). The court held:

Where, as here, petitioners reasonably request to use predictive coding to conserve time and expense, and represent to the Court that they will retain electronic discovery experts to meet with respondent’s counsel or his experts to conduct a search acceptable to respondent, we see no reason petitioners should not be allowed to use predictive coding to respond to respondent’s discovery request.

143 T.C. at 192.

Other courts followed suit. Indeed, one Delaware court even unilaterally stated that the parties should use it, though the court eventually softened its position. EORHB, Inc. v. HOA Holdings LLC, No. 7409-VCL, 2013 WL 1960621 (Del. Ch., May 6, 2013). Court approval of predictive coding in civil cases became so widespread that Judge Peck stated in 2015, “the case law has developed to the point that it is now black letter law that where the producing party wants to utilize [predictive coding] for document review, courts will permit it.” Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 127 (S.D.N.Y. 2015).

What Are The Limits To Predictive Coding?

While it has gained acceptance, predictive coding has some limits.

First, courts generally will not force unwilling parties to use predictive coding. In one case, Judge Peck refused to compel a defendant to search for documents using predictive coding, when the defendant preferred to use keyword searching. Hyles v. New York City, 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114, at *2-3 (S.D.N.Y. Aug. 1, 2016). The court reasoned that the party responding to discovery requests is generally best situated to decide how, exactly, it should search for relevant ESI. Id. at *3. As another court stated, “[t]he few courts that have considered this issue have all declined to compel predictive coding.” In re Viagra Products Liability Litig., 16-md-02691-RS (SK) (N.D. Cal. Oct. 14, 2016) (citing Hyles).

Courts also may reject a party’s attempt to use predictive coding when the same party had previously agreed to use other methods of reviewing records. For instance, a court refused a proposal to use predictive coding where (1) the parties had agreed to a different search method, (2) the proposing party failed to comply with recommended “best practices” for using the software, and (3) the proposal “lack[ed] transparency and cooperation regarding the search methodologies applied.” Progressive Cas. Ins. Co. v. Delaney, No. 2:11-cv-00678-LRH-PAL, 2014 WL 3563467, at *8 (D. Nev. July 18, 2014). But a separate court reached a different conclusion and approved a plaintiff’s use of predictive coding, despite the parties’ previous agreement to use different search methods. Bridgestone Ams., Inc. v. Int’l Bus. Machs. Corp., No. 3:13-1196, 2014 U.S. Dist. LEXIS 142525, at *3 (M.D. Tenn., July 24, 2014). The court recognized that it was, “to some extent, allowing Plaintiff to switch horses in midstream.” It thus ordered the plaintiff to “provide the seed documents they are initially using to set up predictive coding,” and indicated that the defendant could also “switch[] to predictive coding if they believe it would…be more efficient….”

Another limitation is that using predictive coding is not practical in every case. Many routine cases have a limited universe of documents that lawyers can manually review. And straightforward disputes over small amounts typically do not justify the budget needed to hire predictive coding vendors.

Further, it is unclear whether courts will permit predictive coding in criminal matters. There are certainly complex criminal cases with a sprawling universe of records, where manual review is impossible or impracticable. In these cases, either the government or the defendants may seek to use predictive coding. The issues raised in these circumstances could get thorny, and may be worthy of a separate post. See United States v. Comprehensive Drug Testing, Inc., 621 F.3d 1162, 1177 (9th Cir. 2010) (noting that large volumes of ESI in criminal cases implicate the need to strike “the right balance between the government’s interest in law enforcement” and defendants’ rights).

Conclusion

The use of AI in litigation is growing, and this is particularly evident in predictive coding. Courts universally accept that AI can help parties categorize documents in large collections of data. We are keeping our fingers on the pulse of predictive coding, and will let you know about important new developments in this area.

Finally, my first post here wouldn’t be complete without thanks to my friend Matt Scherer for the chance to join this exciting blog. Matt has provided valuable, cutting-edge insights into the intersection of law and AI. As a lawyer, entrepreneur, and AI programmer, I hope to add to this discussion.

Grand Reopening

Law and AI returns today with a vengeance.  Today’s post has two purposes: (1) to let you know about some recent developments in my AI-related professional life; and more importantly, (2) to introduce you to Law and AI’s second contributing author, Joe Wilbert.


I’ll tackle the second item first. Joe and I were classmates and friends in law school. He was Lead Articles Editor while I was Editor-in-Chief of The Georgetown Journal of Legal Ethics. Joe served as a federal judicial clerk after graduation, worked in private practice for several years, and eventually developed an interest in the intersection between law and artificial intelligence. He now heads up a stealth-mode startup where he is developing legal technology that uses machine learning and natural language processing to improve several aspects of litigation. You can read more about Joe’s background on the About page.

As it happens, Joe and I have both worked over the past couple years to teach ourselves about the technical side of AI through online courses and self-study.  Joe’s focus has been on the programming side while mine has been on mathematics and statistical modeling, but both of us share the goal of gaining a deeper understanding of the subject matter we write about.  Increasingly, we both are applying our self-education in our day jobs–Joe full-time through his start-up, and me through my steadily building work with Littler’s data analytics and Robotics and AI practice groups.  Together, we will bring complementary perspectives on the increasingly busy intersection between AI and law.


Turning to a couple of other developments in my work on law and AI, I’ve spent much of the past several months working with Littler’s Workplace Policy Institute on an initiative to help employers and workers manage the labor-market disruptions that AI and other automation technologies are likely to bring in the coming years. You can read our report–“The Future is Now: Workforce Opportunities and The Coming TIDE”–here. TIDE stands for “technology-induced displacement of employees,” the term Littler uses for the phenomenon that an increasing number of studies warn will force millions of workers to switch occupational categories in the coming years due to automation.

I also just posted a new law review article on SSRN, inspired by (and often borrowing from) several posts on this blog addressing the issue of the legal status of autonomous systems, including the possibility of AI personhood (spoiler alert: bad idea).


The pace of new blog posts will pick up a bit from here on out. The goal is for Law and AI to publish at least one new blog post per month going forward, and hopefully more. Stay tuned.

Mr. Rogers and navigating the ethical crossroads of emerging technologies

Fred Rogers may seem like a strange subject for an AI-related blog post, but bear with me.  Everyone knows Mr. Rogers from his long-running PBS show, Mister Rogers’ Neighborhood.  Fewer people know that he was an ordained Presbyterian minister prior to his television career.  And fewer still know the quality of Fred Rogers that led me to write this post: namely, that he was a technological visionary.

You see, television was in its infancy when Mr. Rogers completed his education and was attempting to decide what to do with his life.  When he saw television for the first time, he immediately recognized the new medium’s potential, both good and ill. The programming that greeted him definitely fell into the latter category. As he later recounted, with what is probably the closest Mr. Rogers ever came to betraying frustration and annoyance, “there were people throwing pies at one another.” In other interviews, he expressed dismay at the number of cartoons aimed at children that used violence as a method of entertainment.

» Read more

Facing the rising TIDE: The labor market policy implications of automation

[Comic omitted. Source: Dilbert]


About a month ago, the Department of Labor released a draft strategic plan for the next five fiscal years. Curiously, the 40-page document made no mention of the impact of automation, which poses perhaps the greatest policy challenge that the labor market has seen since the DOL was formed 105 years ago. So I teamed up with several other attorneys at my firm and Prime Policy Group–with input from several participants in a robotics and AI roundtable that my firm hosted in DC last month–to write an open letter to the Secretary of Labor explaining why automated systems need more attention than they currently receive.

The Cliff’s Notes version of the comments is this sentence from the intro:

[T]he Department of Labor, in cooperation with other government agencies and private industry, should take proactive steps to provide American workers with the skills necessary to participate in the labor market for these emerging technologies, which promise to revolutionize the global economy and labor market during the coming years, and to implement measures designed to ensure that workers whose jobs are vulnerable to automation are not left behind.

We came up with a catchy acronym for the labor market disruption that automation causes: technology-induced displacement of employees (TIDE). It wasn’t until I was deep into working on the letter that it truly sank in what a huge challenge this is going to be. Sadly, governments in developed countries are barely paying attention to these issues right now, despite the fact that automation appears to be right on the cusp of disrupting the labor market in seemingly every industry.

The full comments are available here.
