Applying Old Rules to New Tools (and other updates)

My latest scholarly article, this one co-authored with Littler shareholders Marko Mrkonich and Allan King, is now available on SSRN and will be published in the South Carolina Law Review this winter. Here is the abstract:

Companies, policymakers, and scholars alike are paying increasing attention to algorithmic recruitment and hiring tools that leverage artificial intelligence, machine learning, and Big Data. To their advocates, algorithmic employee selection processes can be more effective in choosing the strongest candidates, increasing diversity, and reducing the influence of human prejudices. Many observers, however, express concern about other forms of bias that can infect algorithmic selection procedures, leading to fears regarding the potential for algorithms to create unintended discriminatory effects or mask more deliberate forms of discrimination. This article represents the most comprehensive analysis to date of the legal, ethical, and practical challenges associated with using these tools.

The article begins with background on both the nature of algorithmic selection tools and the legal backdrop of antidiscrimination laws. It then breaks down the key reasons why employers, courts, and policymakers will struggle to fit these tools within the existing legal framework. These challenges include algorithmic tools’ reliance on correlation; the opacity of the models generated by many algorithmic selection tools; and the difficulty of fitting algorithmic tools into a legal framework developed for the employee selection tools of the mid-20th century.

The article concludes with a comprehensive proposed legal framework that weaves together the usually separate analyses of disparate treatment and disparate impact. It takes the fundamental principles of antidiscrimination laws, and the landmark Supreme Court cases interpreting them, and articulates a set of standards that address the unique challenges posed by algorithmic tools. The proposed framework (1) uses tests of reasonableness in disparate impact analysis in place of tests of statistical significance, which will become less and less meaningful in the age of Big Data; (2) requires employers to satisfy a modified form of the business necessity defense when an algorithmic tool has a disparate impact on a protected group; and (3) allows employers to use novel machine-learning techniques to prevent disparate impacts from arising without exposing themselves to disparate treatment liability.
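
To make the first prong of that framework concrete, consider why statistical significance loses meaning as applicant pools grow. The short sketch below is a quick illustration of my own, not something taken from the article; the applicant counts and helper functions are hypothetical. It shows the same 0.1-percentage-point gap in selection rates flipping from “not significant” to “significant” under a standard two-proportion z-test purely because the pool gets larger, while a practical-significance measure like the EEOC’s four-fifths impact ratio stays at 0.99 throughout.

```python
# Illustrative sketch only: hypothetical applicant counts, not data from the article.
from math import sqrt, erf


def norm_cdf(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def two_proportion_z_test(hired_a: int, n_a: int, hired_b: int, n_b: int):
    """Two-sided z-test for a difference between two selection rates."""
    p_a, p_b = hired_a / n_a, hired_b / n_b
    pooled = (hired_a + hired_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2.0 * (1.0 - norm_cdf(abs(z)))
    return z, p_value


def impact_ratio(hired_a: int, n_a: int, hired_b: int, n_b: int) -> float:
    """Four-fifths rule metric: lower selection rate divided by higher rate."""
    rate_a, rate_b = hired_a / n_a, hired_b / n_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)


# Same 0.1-percentage-point gap (10.1% vs. 10.0% selected) at two pool sizes.
for n in (10_000, 1_000_000):
    hired_a, hired_b = round(0.101 * n), round(0.100 * n)
    z, p = two_proportion_z_test(hired_a, n, hired_b, n)
    ratio = impact_ratio(hired_a, n, hired_b, n)
    print(f"n={n:>9,}  impact ratio={ratio:.3f}  z={z:.2f}  p={p:.3f}")

# n=   10,000  impact ratio=0.990  z=0.24  p=0.814  (not significant at 0.05)
# n=1,000,000  impact ratio=0.990  z=2.35  p=0.019  (significant at 0.05)
```

The point is not that the four-fifths rule is the right test of reasonableness, only that sample size alone should not drive the legal analysis.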

While I was wrapping up that article, a new book came to my attention: Law As Data, a compilation of essays edited by Michael Livermore and Daniel Rockmore that looks at various applications of data analysis in law. I haven’t read the entire volume, but the introduction explaining the philosophy that underlies it troubled me; it is probably the most visible work yet to posit that law is nothing more than a set of formal rules and logic, essentially a “problem” that AI could theoretically “solve” through computation (as with checkers), or, at least, something at which machines will soon be able to “outperform” humans (as with chess and Go). But there are many moral and philosophical objections to this view of what law is. My next substantive post will explore those issues more deeply.

In the meantime, the most significant AI-related legislation of the past few months was the enactment of a California law restricting the use of deepfakes in political messaging in the run-up to elections. I appeared on KPCC’s Airtalk to discuss the bill both before and after it was enacted, first alongside Eugene Volokh and then alongside Erwin Chemerinsky. The short version of my take: the new law might have some deterrent effect in the context of California state and local elections, but I think the social media platforms on which deepfakes are distributed are better positioned to “regulate” their spread than individual state or even national governments are. The question is whether they will do so.

Revisiting an old series of posts: Algorithmic entities

A couple of years ago, I wrote a series of blog posts challenging Shawn Bayern’s theory that current business-entity laws allow for the creation of “zero-member LLCs” and similar entity structures under which an AI system can be given the functional equivalent of legal personhood. I wove those posts together with my “Digital Analogues” series, which discussed the legal frameworks that I think could be applied to autonomous systems, to form the core of an article that the Nevada Law Journal published last year.

Bayern responded to my critique in an article posted a couple of weeks back and, in the interest of fairness and completeness, I am linking to that article here. Unsurprisingly, Bayern takes issue with my criticisms of his arguments. His new article includes solid arguments against my construction of the individual statutes and model law his theory relies on, although I still think that a holistic reading of most LLC statutes (particularly New York’s, which was the focus of his first article on the subject) would lead a court to conclude that an LLC ceases to exist once it has no members and no plan to add new members in the immediate future. Bayern also points out, as I acknowledged in the original blog post, that there may be other mechanisms besides a memberless LLC under New York law that could provide even-more-difficult-to-stop routes to setting up an AI-controlled legal entity, such as a cross-ownership structure in which two AI-controlled LLCs are formed, each having the other as its sole member. So stopping potential structures for autonomous entities could end up being a game of whack-a-mole. To be clear, Bayern does not suggest that algorithmic entities would be a Good Thing; he just thinks they would be hard to stop even under existing statutes.

I won’t respond further in substance, both because I think the argument has largely played itself out and because there is not much I can add now that would advance my main objective in engaging with this issue in the first place. As I said in the final post on Bayern’s theory, a major (and really, the major) reason I engaged on this issue is my fear that an unscrupulous entrepreneur will read his articles and decide to try to form an AI-controlled entity. I wrapped up those posts by saying “Hopefully, a person who comes across Bayern’s article will now come across these posts as well and realize that following Bayern’s roadmap to AI personhood would entail running into more than a few roadblocks.” That’s still my position–business entity laws were written under the assumption that humans will be pulling the strings, and examining the full text and context of a law will, I think, always reveal provisions that undercut (often quite severely) an argument that the law can be used to imbue an AI system with personhood. Hopefully, if and when someone tries to set up an “autonomous entity,” my articles will provide an alternative legal roadmap that courts and harmed parties can use to challenge (successfully, I hope) its existence.

The OECD Principles, the AI Initiative Act, and introducing a new guest contributor

There were a couple of significant developments in the AI policy world this week. First, the Organization for Economic Co-operation and Development (OECD) adopted and published its “Principles on AI.” That same day, a bipartisan trio of Senators introduced the Artificial Intelligence Initiative Act (AI-IA) (link to PDF of bill), which would establish a national AI strategy in the United States comparable to those adopted by Germany, Japan, France, South Korea, and China.


Quo vadis, AI?

 

For years, both the media and the business world have been captivated by the seemingly breathtaking pace of progress in artificial intelligence.  It’s been 21 years since Deep Blue beat Kasparov, and more than 7 years since Watson mopped the floor with Ken Jennings and Brad Rutter on Jeopardy.  The string of impressive advances has only seemed to accelerate since then, from the increasing availability of autonomous features in vehicles to rapid improvements in computer translation and promised breakthroughs in medicine and law.  The notion that AI is going to revolutionize every aspect of our lives has taken on the character of gospel in business and tech journals.

But another trend has been slowly building in the background–namely, instances where AI has failed (sometimes quite spectacularly) to live up to its billing.  In 2016, some companies were predicting that fully autonomous cars would be available within 4 years.  Today, I get the sense that if you asked most industry watchers to set an over/under of 4 years on when fully autonomous vehicles will be on the road, many-to-most would take the “over” in a heartbeat.  This is partly due to regulatory hurdles, no doubt, but a substantial part of it is also that the technology just isn’t “there” yet, particularly given the need to integrate AVs into a transportation system dominated by unpredictable human drivers.  The early returns on the widely touted promise of an AI-powered revolution in cancer treatment are no better.

These are not the first examples of technology failing to live up to its hype, of course.  AI itself has gone through several hype cycles, with “AI winters” bludgeoning the AI industry and all but ending funding for AI research in both the mid-1970s and the late 1980s.  In each instance, the winter was preceded by a period of overheated investment in the AI industry and overheated predictions about the arrival of human-level intelligence.


Grand Reopening

Law and AI returns today with a vengeance.  Today’s post has two purposes: (1) to let you know about some recent developments in my AI-related professional life; and more importantly, (2) to introduce you to Law and AI’s second contributing author, Joe Wilbert.


I’ll tackle the second item first.  Joe and I were classmates and friends in law school.  He was Lead Articles Editor while I was Editor-in-Chief of The Georgetown Journal of Legal Ethics.  Joe served as a federal judicial clerk after graduation, worked in private practice for several years, and eventually developed an interest in the intersection of law and artificial intelligence.  He now heads up a stealth-mode startup where he is developing legal technology that uses machine learning and natural language processing to improve several aspects of litigation.  You can read more about Joe’s background on the About page.

As it happens, Joe and I have both worked over the past couple of years to teach ourselves about the technical side of AI through online courses and self-study.  Joe’s focus has been on the programming side while mine has been on mathematics and statistical modeling, but we both share the goal of gaining a deeper understanding of the subject matter we write about.  Increasingly, we are both applying our self-education in our day jobs–Joe full-time through his startup, and me through my steadily growing work with Littler’s data analytics and Robotics and AI practice groups.  Together, we will bring complementary perspectives on the increasingly busy intersection between AI and law.


Turning to a couple of other developments in my work on law and AI, I’ve spent much of the past several months working with Littler’s Workplace Policy Institute on an initiative to help employers and workers manage the labor-market disruptions that AI and other automation technologies are likely to bring in the coming years.  You can read our report–“The Future is Now: Workforce Opportunities and The Coming TIDE”–here.  TIDE stands for “technology-induced displacement of employees,” the term that Littler uses for the displacement of the millions of workers who, an increasing number of studies warn, will be forced to switch occupational categories in the coming years due to automation.

I also just posted a new law review article on SSRN, inspired by (and often borrowing from) several posts on this blog addressing the issue of the legal status of autonomous systems, including the possibility of AI personhood (spoiler alert: bad idea).


The pace of new blog posts will pick up a bit from here on out.  The goal is for Law and AI to publish at least one new post per month going forward, and hopefully more.  Stay tuned.

Mr. Rogers and navigating the ethical crossroads of emerging technologies

Fred Rogers may seem like a strange subject for an AI-related blog post, but bear with me.  Everyone knows Mr. Rogers from his long-running PBS show, Mister Rogers’ Neighborhood.  Fewer people know that he was an ordained Presbyterian minister prior to his television career.  And fewer still know the quality of Fred Rogers that led me to write this post: namely, that he was a technological visionary.

You see, television was in its infancy when Mr. Rogers completed his education and was attempting to decide what to do with his life.  When he saw television for the first time, he immediately recognized the new medium’s potential, both good and ill. The programming that greeted him definitely fell into the latter category. As he later recounted, with what is probably the closest Mr. Rogers ever came to betraying frustration and annoyance, “there were people throwing pies at one another.” In other interviews, he expressed dismay at the number of cartoons aimed at children that used violence as a method of entertainment.


Facing the rising TIDE: The labor market policy implications of automation

Source: Dilbert


About a month ago, the Department of Labor released a draft strategic plan for the next five fiscal years.  Curiously, the 40-page document made no mention of the impact of automation, which poses perhaps the greatest policy challenge that the labor market has seen since the DOL was formed 105 years ago.  So I teamed up with several other attorneys at my firm and Prime Policy Group–with input from several participants in a robotics and AI roundtable that my firm hosted in DC last month–to write an open letter to the Secretary of Labor explaining why automation and its effects on workers need more attention than they currently receive.

The Cliff’s Notes version of the comments is this sentence from the intro:

[T]he Department of Labor, in cooperation with other government agencies and private industry, should take proactive steps to provide American workers with the skills necessary to participate in the labor market for these emerging technologies, which promise to revolutionize the global economy and labor market during the coming years, and to implement measures designed to ensure that workers whose jobs are vulnerable to automation are not left behind.

We came up with a catchy acronym for the labor-market disruption that automation causes: technology-induced displacement of employees (TIDE). It wasn’t until I was deep into working on the letter that it truly sank in what a huge challenge this is going to be. Sadly, governments in developed countries are barely paying attention to these issues right now, even though automation appears to be right on the cusp of disrupting the labor market in seemingly every industry.

The full comments are available here.

On algorithms and fake news


The biggest “algorithms in the news” story of the past couple of months has been whether Facebook’s, Twitter’s, and Google’s ad-targeting algorithms facilitated, however inadvertently, Russian interference in the 2016 United States presidential election.  For those who have been living under a rock, hundreds of thousands of targeted advertisements containing links to fake political “news” stories were delivered to users of the three behemoths’ social media and web services.  Many of the ads were microtargeted–aimed at specific voters in specific geographic regions.

This story–which has been bubbling under the surface for months–came to the forefront this past week as executives from the three companies were hauled in front of a Congressional committee and grilled about whether they were responsible for (or, at the very least, whether they did enough to stop) the spread of Russian misinformation.  The Economist’s cover story this week is “Social media’s threat to democracy,” complete with a cover image of a human hand wielding Facebook’s iconic “f” like a gun, smoke drifting off the end of the “barrel.”


Personal updates and the House SELF DRIVE Act


 

I’ll start this week with a couple of personal updates before moving on to the biggest A.I. policy news item of the week–although the news is not quite as exciting as some media headlines have made it sound.

Personal Item #1: New gig

A few weeks ago, I switched jobs and joined a new law firm–Littler Mendelson, P.C.  In addition to being the world’s largest labor and employment law firm, Littler has a Robotics, A.I., and Automation practice group that Garry Mathiason (a legend in both employment law and robotics law) started a few years back.  I’ve hit the ground running with both the firm and its practice group during the past few weeks, and my busy stretch will continue for a while.  I’ll try to post updates as often as I can, particularly when big A.I. law and policy news hits, but updates will likely be on the light side for the next several weeks.

Personal Item #2: O’Reilly A.I. Conference

Next week, I’ll be presenting at the O’Reilly A.I. Conference in San Francisco along with Danny Guillory, the head of Global Diversity and Inclusion at Autodesk.  Our presentation is titled “Building an Unbiased A.I.: End-to-end diversity and inclusion in AI development.”  If you’ll be at the conference, come check it out.

Personal Item #3: Drone Law Today

One last personal item–I made my second appearance on Steve Hogan’s Drone Law Today podcast.  Steve and I had a fascinating conversation on the possibility of legal personhood for A.I.–both how feasible personhood is now (not very) and how society will react if and when robots do eventually start walking amongst us.  This was one of the most fun and interesting conversations I’ve ever had, so check it out.

A.I. policy news: House passes SELF DRIVE Act

I’ll close with the big news item relating to A.I. policy–the U.S. House of Representatives’ passage of the SELF DRIVE Act.  The bill, as its title suggests, would clear the way for self-driving cars to hit the road by allowing exemptions from certain existing NHTSA safety standards, which otherwise would present a major hurdle to the deployment of autonomous vehicles.

The Senate is also considering self-driving car legislation, and that legislation apparently differs from the House bill quite dramatically.  That means that the two houses will have to reconcile their respective bills in conference, and observers of American politics (and watchers of Schoolhouse Rock) know that the bill that emerges from conference may end up looking nothing like either of the original bills.  Passage of the Senate bill sounds highly likely, although congressional gridlock means that it’s still possible the bill will not come up for a vote this year.  We’ll see what (if anything) emerges from the Senate, at which point we’ll hopefully have a better sense of what the final law will look like.

How to Regulate Artificial Intelligence Without Really Trying


This past Friday, the New York Times published an op-ed by Oren Etzioni, head of the Allen Institute for Artificial Intelligence.  The op-ed’s title is “How to Regulate Artificial Intelligence,” but the piece actually sheds little light on “how” legal systems could go about regulating artificial intelligence.  Instead, it articulates a few specific “rules” without providing any suggestions as to how those rules could be implemented and enforced.

Specifically, Etzioni proposes three laws for AI regulation, inspired in number (if not in content) by Isaac Asimov’s famous Three Laws of Robotics.  Here’s Etzioni’s trio:

  1. “[A]n A.I. system must be subject to the full gamut of laws that apply to its human operator.”  For example, “[w]e don’t want autonomous vehicles that drive through red lights” or AI systems that “engage in cyberbullying, stock manipulation or terrorist threats.”
  2. “[A]n A.I. system must clearly disclose that it is not human.”
  3. “[A]n A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.”

These rules do a nice job of laying out the things we don’t want A.I. systems to do.  But that’s the easy part of deciding how to regulate A.I.  The harder part is figuring out who or what should be held legally responsible when A.I. systems do those things anyway.  Should we hold the designer(s) of the A.I. system accountable?  Or the immediate operator?  Or maybe the system itself?  No one disputes that an autonomous car shouldn’t run red lights.  It’s less clear who should be held responsible when it does.

Etzioni’s op-ed takes no discernible position on these issues.  The first rule seems to imply that the A.I. system itself should be held responsible.  But since A.I. systems are not legal persons, that’s a legal impossibility at present.  And other portions of Etzioni’s op-ed seem to suggest either that the operator or the designer should be held responsible.  The result is a piece that punts on the real challenge of figuring out “How to Regulate Artificial Intelligence.”

I don’t have any issue with where Etzioni wants us to go.  I’m just not sure how he thinks we’re supposed to get there.
