There were a couple of significant developments in the AI policy world this week. First, the Organization for Economic Co-operation and Development (OECD) adopted and published its “Principles on AI.” That same day, a bipartisan trio of Senators introduced the Artificial Intelligence Initiative Act (AI-IA) (link to PDF of bill), which would establish a national AI strategy in the United States comparable to those adopted by Germany, Japan, France, South Korea, and China.
AI was busy in 2018. With the year coming to a close, let’s look at three important developments in law and AI, and consider what they might imply for the coming year.
The Regulation Debate
Perhaps the biggest issue facing law and AI can be broadly put as “regulation.” More precisely, will governments regulate AI, and if so, how? This overarching question permeates the field and touches many different specific issues.
The United States government has been reluctant to regulate AI. Last month, at the FCC’s “Forum on Artificial Intelligence and Machine Learning,” FCC Chairman Ajit Pai stated that the government should exercise “regulatory humility” when dealing with AI. In other words, a hands-off approach. The reason, he said, is that “early [regulatory] intervention can forestall or even foreclose certain paths to innovation.”
For years, both the media and the business world have been captivated by the seemingly breathtaking pace of progress in artificial intelligence. It’s been 21 years since Deep Blue beat Kasparov, and more than 7 years since Watson mopped the floor with Ken Jennings and Brad Rutter on Jeopardy. The string of impressive advances has only seemed to accelerate since then, from the increasing availability of autonomous features in vehicles to rapid improvements in computer translation and promised breakthroughs in medicine and law. The notion that AI is going to revolutionize every aspect of our lives took on the characteristics of gospel in business and tech journals.
But another trend has been slowly building in the background–namely, instances where AI has failed (sometimes quite spectacularly) to live up to its billing. In 2016, some companies were predicting that fully autonomous cars would be available within 4 years. Today, I get the sense that if you set an over/under of 4 years on when fully autonomous vehicles will be on the road, many-to-most industry watchers would take the “over” in a heartbeat. This is in part due to regulatory hurdles, no doubt, but a substantial part of it is also that the technology just isn’t “there” yet, particularly given the need to integrate AVs into a transportation system dominated by unpredictable human drivers. The early returns on a widely-touted promise of an AI-powered revolution in cancer treatment are no better.
These are not the first examples of technology failing to live up to its hype, of course. AI itself has gone through several hype cycles, with “AI winters” bludgeoning the AI industry and all but ending funding for AI research in both the mid-1970s and late 1980s. In each instance, the winters were preceded by periods of overheated investment in the AI industry and overheated predictions about the arrival of human-level intelligence.
This post is the first in a planned series about how courts treat artificial intelligence (AI). Advances in AI seemingly happen on a daily basis. AI pioneer Andrew Ng fondly says that AI “is the new electricity.” Earlier this year, consulting firm McKinsey & Company estimated that AI could create several trillion dollars in value for businesses annually. There is little doubt AI is becoming pervasive.
Yet court opinions involving AI are relatively sparse. With the rapid growth of AI, courts increasingly will be called upon to adjudicate related issues. Thus, the time is ripe for discussing how courts treat AI.
We will start with “predictive coding.”
What Is Predictive Coding?
Predictive coding–also known as “computer-assisted coding” or “technology-assisted review”–is the area where courts most often deal with AI. In this previous post (well worth reading), Matt discussed predictive coding in the context of whether it will complement human attorneys or replace them.
So what exactly is it? Broadly speaking, predictive coding is an AI application that helps lawyers review records in litigation. Parties to litigation must produce to their opponents reasonably available documents, including electronically stored information (ESI), that are otherwise discoverable. In complex cases, the amount of potentially relevant ESI can exceed what any human could manually review. Complicating this, parties often disagree about what ESI records should be produced, and how to find them.
Enter AI. In predictive coding, knowledgeable attorneys first review a small sample of the universe of records and label each record in the sample. For instance, attorneys might label whether the record is responsive to a discovery request, and whether it is covered by attorney-client privilege. The next step is where AI shines: Given a sufficient sample set labeled by attorneys, predictive coding uses AI to predict the appropriate labels for the remaining universe of records. And it can do so with great accuracy.
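The two-step workflow above–train on an attorney-labeled sample, then predict labels for the rest–can be sketched in code. The following is a toy illustration only, using a minimal naive Bayes text classifier and invented documents; real predictive coding vendors use far more sophisticated models and validation protocols, and nothing here reflects any particular vendor’s implementation.

```python
# Toy sketch of the predictive-coding workflow: train a classifier on a
# small attorney-labeled sample, then predict labels for new documents.
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayesCoder:
    """Minimal multinomial naive Bayes over bag-of-words features."""

    def fit(self, docs, labels):
        self.label_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            for word in tokenize(doc):
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, doc):
        best_label, best_score = None, float("-inf")
        total = sum(self.label_counts.values())
        for label, count in self.label_counts.items():
            # log prior + log likelihood with add-one smoothing
            score = math.log(count / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in tokenize(doc):
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Step 1: attorneys review and label a small sample of the record universe.
sample = ["quarterly merger negotiations with acme",
          "lunch menu for the office party",
          "acme merger due diligence checklist",
          "holiday party rsvp reminder"]
labels = ["responsive", "not_responsive", "responsive", "not_responsive"]

# Step 2: the trained model predicts labels for the remaining records.
coder = NaiveBayesCoder().fit(sample, labels)
print(coder.predict("draft merger agreement with acme"))  # responsive
```

In practice the labeled “seed set” is far larger, the process is often iterative (the model flags uncertain documents for further attorney review), and the results are statistically validated before production.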
But predictive coding is not perfect. In some cases, mistakes have caused significant numbers of responsive documents to be missed. Further, the exact manner of implementing predictive coding varies by vendor. It is not surprising, then, that disagreements arise about the propriety and parameters of predictive coding.
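One way parties catch the kind of mistakes described above is to manually review a random sample of the results and measure recall–the share of truly responsive documents the model actually found. A hypothetical check, with invented labels for illustration:

```python
# Recall: of the documents a human reviewer deems responsive, what
# fraction did the predictive-coding model also flag as responsive?
def recall(true_labels, predicted_labels, positive="responsive"):
    found = sum(1 for t, p in zip(true_labels, predicted_labels)
                if t == positive and p == positive)
    actual = sum(1 for t in true_labels if t == positive)
    return found / actual if actual else 0.0

truth     = ["responsive", "responsive", "not", "responsive", "not"]
predicted = ["responsive", "not",        "not", "responsive", "not"]
print(recall(truth, predicted))  # found 2 of 3 responsive docs: ~0.667
```

A low recall figure on the validation sample is exactly the sort of evidence an opposing party might point to when challenging the adequacy of a production.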
This post serves as a high-level introduction to predictive coding. This is an evolving topic, and in future posts, I plan to provide updates and dive deeper into specific subtopics where appropriate.
When Do Courts Allow Predictive Coding?
A federal magistrate judge in New York (now senior counsel at an international law firm), Andrew J. Peck, paved the way for predictive coding in litigation. He authored multiple court opinions on the topic, starting with his seminal decision in Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012). That case explicitly “recognize[d] that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases.” Id. at 183.
Predictive coding got another boost in a tax case in 2014, when a court rejected a party’s argument that predictive coding is “unproven technology.” Dynamo Holdings Ltd. P’ship v. Comm’r of Internal Revenue, 143 T.C. 183, 2014 WL 4636526 (2014). The court held:
Where, as here, petitioners reasonably request to use predictive coding to conserve time and expense, and represent to the Court that they will retain electronic discovery experts to meet with respondent’s counsel or his experts to conduct a search acceptable to respondent, we see no reason petitioners should not be allowed to use predictive coding to respond to respondent’s discovery request.
143 T.C. at 192.
Other courts followed suit. Indeed, one Delaware court even unilaterally stated that the parties should use it, though the court eventually softened its position. EORHB, Inc. v. HOA Holdings LLC, No. 7409-VCL, 2013 WL 1960621 (Del. Ch., May 6, 2013). Court approval of predictive coding in civil cases became so widespread that Judge Peck stated in 2015, “the case law has developed to the point that it is now black letter law that where the producing party wants to utilize [predictive coding] for document review, courts will permit it.” Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 127 (S.D.N.Y. 2015).
What Are The Limits To Predictive Coding?
While it has gained acceptance, predictive coding has some limits.
First, courts generally will not force unwilling parties to use predictive coding. In one case, Judge Peck refused to compel a defendant to search for documents using predictive coding, when the defendant preferred to use keyword searching. Hyles v. New York City, 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114, at *2-3 (S.D.N.Y. Aug. 1, 2016). The court reasoned that the party responding to discovery requests is generally best situated to decide how, exactly, it should search for relevant ESI. Id. at *3. As another court stated, “[t]he few courts that have considered this issue have all declined to compel predictive coding.” In re Viagra Products Liability Litig., 16-md-02691-RS (SK) (N.D. Cal. Oct. 14, 2016) (citing Hyles).
Courts also may reject a party’s attempt to use predictive coding when the same party had previously agreed to use other methods of reviewing records. For instance, a court refused a proposal to use predictive coding where (1) the parties had agreed to a different search method, (2) the proposing party failed to comply with recommended “best practices” for using the software, and (3) the proposal “lack[ed] transparency and cooperation regarding the search methodologies applied.” Progressive Cas. Ins. Co. v. Delaney, No. 2:11-cv-00678-LRH-PAL, 2014 WL 3563467, at *8 (D. Nev. July 18, 2014). But a separate court reached a different conclusion and approved a plaintiff’s use of predictive coding, despite the parties’ previous agreement to use different search methods. Bridgestone Ams., Inc. v. Int’l Bus. Machs. Corp., No. 3:13-1196, 2014 U.S. Dist. LEXIS 142525, at *3 (M.D. Tenn., July 24, 2014). The court recognized that it was, “to some extent, allowing Plaintiff to switch horses in midstream.” It thus ordered the plaintiff to “provide the seed documents they are initially using to set up predictive coding,” and indicated that the defendant could also “switch to predictive coding if they believe it would…be more efficient….”
Another limitation is that using predictive coding is not practical in every case. Many routine cases have a limited universe of documents that lawyers can manually review. And straightforward disputes over small amounts typically do not justify the budget needed to hire predictive coding vendors.
Further, it is unclear whether courts will permit predictive coding in criminal matters. There are certainly complex criminal cases with a sprawling universe of records, where manual review is impossible or impracticable. In these cases, either the government or the defendants may seek to use predictive coding. The issues raised in these circumstances could get thorny, and may be worthy of a separate post. See United States v. Comprehensive Drug Testing, Inc., 621 F.3d 1162, 1177 (9th Cir. 2010) (noting that large volumes of ESI in criminal cases implicate the need to strike “the right balance between the government’s interest in law enforcement” and defendants’ rights).
The use of AI in litigation is growing, and this is particularly evident in predictive coding. Courts now widely accept that AI can help parties categorize documents in large collections of data. We are keeping our fingers on the pulse of predictive coding, and will let you know about important new developments in this area.
Finally, my first post here wouldn’t be complete without thanks to my friend Matt Scherer for the chance to join this exciting blog. Matt has provided valuable, cutting-edge insights into the intersection of law and AI. As a lawyer, entrepreneur, and AI programmer, I hope to add to this discussion.
Law and AI returns today with a vengeance. Today’s post has two purposes: (1) to let you know about some recent developments in my AI-related professional life; and more importantly, (2) to introduce you to Law and AI’s second contributing author, Joe Wilbert.
I’ll tackle the second item first. Joe and I were classmates and friends in law school. He was Lead Articles Editor while I was Editor-in-Chief of The Georgetown Journal of Legal Ethics. Joe served as a federal judicial clerk after graduation, worked in private practice for several years, and eventually took on an interest in the intersection between law and artificial intelligence. He now heads up a stealth-mode startup where he is developing legal technology that uses machine learning and natural language processing to improve several aspects of litigation. You can read more about Joe’s background on the About page.
As it happens, Joe and I have both worked over the past couple years to teach ourselves about the technical side of AI through online courses and self-study. Joe’s focus has been on the programming side while mine has been on mathematics and statistical modeling, but both of us share the goal of gaining a deeper understanding of the subject matter we write about. Increasingly, we both are applying our self-education in our day jobs–Joe full-time through his start-up, and me through my steadily building work with Littler’s data analytics and Robotics and AI practice groups. Together, we will bring complementary perspectives on the increasingly busy intersection between AI and law.
Turning to a couple of other developments in my work on law and AI, I’ve spent much of the past several months working with Littler’s Workplace Policy Institute on an initiative to help employers and workers manage the labor-market disruptions that AI and other automation technologies are likely to bring in the coming years. You can read our report–“The Future is Now: Workforce Opportunities and The Coming TIDE”–here. TIDE stands for “technology-induced displacement of employees,” the term Littler uses for the displacement of the millions of workers who, an increasing number of studies warn, will be forced to switch occupational categories in the coming years due to automation.
I also just posted a new law review article on SSRN, inspired by (and often borrowing from) several posts on this blog addressing the issue of the legal status of autonomous systems, including the possibility of AI personhood (spoiler alert: bad idea).
The pace of new blog posts will pick up a bit from here on out. The goal is for Law and AI to have at least one new post per month going forward, and hopefully more. Stay tuned.
Fred Rogers may seem like a strange subject for an AI-related blog post, but bear with me. Everyone knows Mr. Rogers from his long-running PBS show, Mister Rogers’ Neighborhood. Fewer people know that he was an ordained Presbyterian minister prior to his television career. And fewer still know the quality of Fred Rogers that led me to write this post: namely, that he was a technological visionary.
You see, television was in its infancy when Mr. Rogers completed his education and was attempting to decide what to do with his life. When he saw television for the first time, he immediately recognized the new medium’s potential, both good and ill. The programming that greeted him definitely fell into the latter category. As he later recounted, with what is probably the closest Mr. Rogers ever came to betraying frustration and annoyance, “there were people throwing pies at one another.” In other interviews, he expressed dismay at the number of cartoons aimed at children that used violence as a method of entertainment.
So about a month ago, the Department of Labor released a draft strategic plan for the next 5 fiscal years. Curiously, the 40-page document made no mention of the impact of automation, which poses perhaps the greatest policy challenge that the labor market has seen since the DOL was formed 105 years ago. So I teamed up with several other attorneys at my firm and Prime Policy Group–with input from several participants in a robotics and AI roundtable that my firm hosted in DC last month–to write an open letter to the Secretary of Labor explaining why automated systems need more attention than they currently receive.
The Cliff’s Notes version of the comments is this sentence from the intro:
[T]he Department of Labor, in cooperation with other government agencies and private industry, should take proactive steps to provide American workers with the skills necessary to participate in the labor market for these emerging technologies, which promise to revolutionize the global economy and labor market during the coming years, and to implement measures designed to ensure that workers whose jobs are vulnerable to automation are not left behind.
We came up with a catchy acronym for the labor market disruption that automation causes: technology-induced displacement of employees (TIDE). It wasn’t until I was deep into working on the letter that it truly sunk in what a huge challenge this is going to be. Sadly, governments in developed countries are barely paying attention to these issues right now, despite the fact that automation appears to be right on the cusp of disrupting the labor market in seemingly every industry.
The full comments are available here.
The biggest “algorithms in the news” story of the past couple of months has been whether Facebook, Twitter, and Google’s ad-targeting algorithms facilitated, however inadvertently, Russian interference in the 2016 United States Presidential election. For those who have been living under a rock, hundreds of thousands of targeted advertisements containing links to fake political “news” stories were delivered to users of the three behemoths’ social media and web services. Many of the ads were microtargeted–specifically aimed to reach specific voters in specific geographic regions.
This story–which has been bubbling under the surface for months–came to the forefront this past week as executives from the three companies were hauled in front of a Congressional committee and grilled about whether they were responsible for (or, at the very least, whether they did enough to stop) the spread of Russian misinformation. The Economist’s cover story this week is on “Social media’s threat to democracy,” complete with a cover image of a human hand wielding Facebook’s iconic “f” like a gun, smoke drifting off the end of the “barrel” (see below).
I’ll start this week with a couple personal updates before moving onto the biggest A.I. policy news item of the week–although the news is not quite as exciting as some media headlines have made it sound.
Personal Item #1: New gig
A few weeks ago, I switched jobs and joined a new law firm–Littler Mendelson, P.C. In addition to being the world’s largest labor and employment law firm, Littler has a Robotics, A.I., and Automation practice group that Garry Mathiason (a legend in both employment law and robotics law) started a few years back. I’ve hit the ground running with both the firm and its practice group during the past few weeks, and my busy stretch will continue for a while. I’ll try to post updates as much as I can, particularly when big A.I. law and policy news hits, but updates will likely be on the light side for the next several weeks.
Personal Item #2: O’Reilly A.I. Conference
Next week, I’ll be presenting at the O’Reilly A.I. Conference in San Francisco along with Danny Guillory, the head of Global Diversity and Inclusion at Autodesk. Our presentation is titled “Building an Unbiased A.I.: End-to-end diversity and inclusion in AI development.” If you’ll be at the conference, come check it out.
Personal Item #3: Drone Law Today
One last personal item–I made my second appearance on Steve Hogan’s Drone Law Today podcast. Steve and I had a fascinating conversation on the possibility of legal personhood for A.I.–both how feasible personhood is now (not very) and how society will react if and when robots do eventually start walking amongst us. This was one of the most fun and interesting conversations I’ve ever had, so check it out.
A.I. policy news: House passes SELF DRIVE Act
I’ll close with the big news item relating to A.I. policy–the U.S. House of Representatives’ passage of the SELF DRIVE Act. The bill, as its title suggests, would clear the way for self-driving cars to hit the road without having to comply with existing NHTSA regulations–which otherwise would present a major hurdle to the deployment of autonomous vehicles.
The Senate is also considering self-driving car legislation, and that legislation apparently differs from the House bill quite dramatically. That means that the two houses will have to reconcile their respective bills in conference, and observers of American politics (and watchers of Schoolhouse Rock) know that the bill that emerges from conference may end up looking nothing like either of the original bills. Passage of the Senate bill sounds highly likely, although congressional gridlock means that it’s still possible the bill will not come up for a vote this year. We’ll see what (if anything) emerges from the Senate, at which point we’ll hopefully have a better sense of what the final law will look like.
This past Friday, the New York Times published an op-ed by Oren Etzioni, head of the Allen Institute for Artificial Intelligence. The op-ed’s title is “How to Regulate Artificial Intelligence,” but the piece actually sheds little light on “how” legal systems could go about regulating artificial intelligence. Instead, it articulates a few specific “rules” without providing any suggestions as to how those rules could be implemented and enforced.
Specifically, Etzioni proposes three laws for AI regulation, inspired in number (if not in content) by Isaac Asimov’s famous Three Laws of Robotics. Here’s Etzioni’s trio:
- “[A]n A.I. system must be subject to the full gamut of laws that apply to its human operator.” For example, “[w]e don’t want autonomous vehicles that drive through red lights” or AI systems that “engage in cyberbullying, stock manipulation or terrorist threats.”
- “[A]n A.I. system must clearly disclose that it is not human.”
- “[A]n A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.”
These rules do a nice job of laying out the things that we don’t want A.I. systems to be doing. But that’s the easy part of deciding how to regulate A.I. The harder part is figuring out who or what should be held legally responsible when A.I. systems do those things that we don’t want them to be doing. Should we hold the designer(s) of the A.I. system accountable? Or the immediate operator? Or maybe the system itself? No one will argue with the point that an autonomous car shouldn’t run red lights. It’s less clear who should be held responsible when it does.
Etzioni’s op-ed takes no discernible position on these issues. The first rule seems to imply that the A.I. system itself should be held responsible. But since A.I. systems are not legal persons, that’s a legal impossibility at present. And other portions of Etzioni’s op-ed seem to suggest either that the operator or the designer should be held responsible. The result is a piece that punts on the real challenge of figuring out “How to Regulate Artificial Intelligence.”
I don’t have any issue with where Etzioni wants us to go. I’m just not sure how he thinks we’re supposed to get there.