There were a couple of significant developments in the AI policy world this week. First, the Organization for Economic Co-operation and Development (OECD) adopted and published its “Principles on AI.” That same day, a bipartisan trio of Senators introduced the Artificial Intelligence Initiative Act (AI-IA) (link to PDF of bill), which would establish a national AI strategy in the United States comparable to those adopted by Germany, Japan, France, South Korea, and China.
For years, both the media and the business world have been captivated by the seemingly breathtaking pace of progress in artificial intelligence. It’s been 21 years since Deep Blue beat Kasparov, and more than 7 years since Watson mopped the floor with Ken Jennings and Brad Rutter on Jeopardy. The string of impressive advances has only seemed to accelerate since then, from the increasing availability of autonomous features in vehicles to rapid improvements in computer translation and promised breakthroughs in medicine and law. The notion that AI is going to revolutionize every aspect of our lives took on the characteristics of gospel in business and tech journals.
But another trend has been slowly building in the background–namely, instances where AI has failed (sometimes quite spectacularly) to live up to its billing. In 2016, some companies were predicting that fully autonomous cars would be available within 4 years. Today, I get the sense that if you asked most watchers of the industry for an over/under on fully autonomous vehicles being on the road within 4 years, many, if not most, would take the “over” in a heartbeat. This is in part due to regulatory hurdles, no doubt, but a substantial part of it is also that the technology just isn’t “there” yet, particularly given the need to integrate AVs into a transportation system dominated by unpredictable human drivers. The early returns on the widely touted promise of an AI-powered revolution in cancer treatment are no better.
These are not the first examples of technology failing to live up to its hype, of course. AI itself has gone through several hype cycles, with “AI winters” bludgeoning the AI industry and all but ending funding for AI research in both the mid-1970s and late 1980s. In each instance, the winters were preceded by periods of overheated investment in the AI industry and overheated predictions about the arrival of human-level intelligence.
Law and AI returns today with a vengeance. Today’s post has two purposes: (1) to let you know about some recent developments in my AI-related professional life; and more importantly, (2) to introduce you to Law and AI’s second contributing author, Joe Wilbert.
I’ll tackle the second item first. Joe and I were classmates and friends in law school. He was Lead Articles Editor while I was Editor-in-Chief of The Georgetown Journal of Legal Ethics. Joe served as a federal judicial clerk after graduation, worked in private practice for several years, and eventually developed an interest in the intersection of law and artificial intelligence. He now heads up a stealth-mode startup where he is developing legal technology that uses machine learning and natural language processing to improve several aspects of litigation. You can read more about Joe’s background on the About page.
As it happens, Joe and I have both worked over the past couple years to teach ourselves about the technical side of AI through online courses and self-study. Joe’s focus has been on the programming side while mine has been on mathematics and statistical modeling, but both of us share the goal of gaining a deeper understanding of the subject matter we write about. Increasingly, we both are applying our self-education in our day jobs–Joe full-time through his start-up, and me through my steadily building work with Littler’s data analytics and Robotics and AI practice groups. Together, we will bring complementary perspectives on the increasingly busy intersection between AI and law.
Turning to a couple of other developments in my work on law and AI, I’ve spent much of the past several months working with Littler’s Workplace Policy Institute on an initiative to help employers and workers manage the labor-market disruptions that AI and other automation technologies are likely to bring in the coming years. You can read our report–“The Future is Now: Workforce Opportunities and The Coming TIDE”–here. TIDE stands for “technology-induced displacement of employees,” the term that Littler uses to refer to the millions of workers who, an increasing number of studies warn, will be forced to switch occupational categories in the coming years due to automation.
I also just posted a new law review article on SSRN, inspired by (and often borrowing from) several posts on this blog addressing the issue of the legal status of autonomous systems, including the possibility of AI personhood (spoiler alert: bad idea).
The pace of new blog posts will pick up a bit from here on out. The goal is for Law and AI to have at least one new blog post per month going forward, and hopefully more. Stay tuned.
Fred Rogers may seem like a strange subject for an AI-related blog post, but bear with me. Everyone knows Mr. Rogers from his long-running PBS show, Mister Rogers’ Neighborhood. Fewer people know that he was an ordained Presbyterian minister prior to his television career. And fewer still know the quality of Fred Rogers that led me to write this post: namely, that he was a technological visionary.
You see, television was in its infancy when Mr. Rogers completed his education and was attempting to decide what to do with his life. When he saw television for the first time, he immediately recognized the new medium’s potential, both good and ill. The programming that greeted him definitely fell into the latter category. As he later recounted, with what is probably the closest Mr. Rogers ever came to betraying frustration and annoyance, “there were people throwing pies at one another.” In other interviews, he expressed dismay at the number of cartoons aimed at children that used violence as a method of entertainment.
So about a month ago, the Department of Labor released a draft strategic plan for the next 5 fiscal years. Curiously, the 40-page document made no mention of the impact of automation, which poses perhaps the greatest policy challenge that the labor market has seen since the DOL was formed 105 years ago. So I teamed up with several other attorneys at my firm and Prime Policy Group–with input from several participants in a robotics and AI roundtable that my firm hosted in DC last month–to write an open letter to the Secretary of Labor explaining why automated systems need more attention than they currently receive.
The CliffsNotes version of the comments is this sentence from the introduction:
[T]he Department of Labor, in cooperation with other government agencies and private industry, should take proactive steps to provide American workers with the skills necessary to participate in the labor market for these emerging technologies, which promise to revolutionize the global economy and labor market during the coming years, and to implement measures designed to ensure that workers whose jobs are vulnerable to automation are not left behind.
We came up with a catchy acronym for the labor market disruption that automation causes: technology-induced displacement of employees (TIDE). It wasn’t until I was deep into working on the letter that it truly sunk in what a huge challenge this is going to be. Sadly, governments in developed countries are barely paying attention to these issues right now, despite the fact that automation appears to be right on the cusp of disrupting the labor market in seemingly every industry.
The full comments are available here.
The biggest “algorithms in the news” story of the past couple of months has been whether Facebook, Twitter, and Google’s ad-targeting algorithms facilitated, however inadvertently, Russian interference in the 2016 United States Presidential election. For those who have been sleeping under a rock, hundreds of thousands of targeted advertisements containing links to fake political “news” stories were delivered to users of the three behemoths’ social media and web services. Many of the ads were microtargeted–aimed at specific voters in specific geographic regions.
This story–which has been bubbling under the surface for months–came to the forefront this past week as executives from the three companies were hauled in front of a Congressional committee and grilled about whether they were responsible for (or, at the very least, whether they did enough to stop) the spread of Russian misinformation. The Economist’s cover story this week is on “Social media’s threat to democracy,” complete with a cover image of a human hand wielding Facebook’s iconic “f” like a gun, smoke drifting off the end of the “barrel.”
I’ll start this week with a couple of personal updates before moving on to the biggest A.I. policy news item of the week–although the news is not quite as exciting as some media headlines have made it sound.
Personal Item #1: New gig
A few weeks ago, I switched jobs and joined a new law firm–Littler Mendelson, P.C. In addition to being the world’s largest labor and employment law firm, Littler has a Robotics, A.I., and Automation practice group that Garry Mathiason (a legend in both employment law and robotics law) started a few years back. I’ve hit the ground running with both the firm and its practice group during the past few weeks, and my busy stretch will continue for a while. I’ll try to post updates as much as I can, particularly when big A.I. law and policy news hits, but updates will likely be on the light side for the next several weeks.
Personal Item #2: O’Reilly A.I. Conference
Next week, I’ll be presenting at the O’Reilly A.I. Conference in San Francisco along with Danny Guillory, the head of Global Diversity and Inclusion at Autodesk. Our presentation is titled “Building an Unbiased A.I.: End-to-end diversity and inclusion in AI development.” If you’ll be at the conference, come check it out.
Personal Item #3: Drone Law Today
One last personal item–I made my second appearance on Steve Hogan’s Drone Law Today podcast. Steve and I had a fascinating conversation on the possibility of legal personhood for A.I.–both how feasible personhood is now (not very) and how society will react if and when robots do eventually start walking amongst us. This was one of the most fun and interesting conversations I’ve ever had, so check it out.
A.I. policy news: House passes SELF DRIVE Act
I’ll close with the big news item relating to A.I. policy–the U.S. House of Representatives’ passage of the SELF DRIVE Act. The bill, as its title suggests, would clear the way for self-driving cars to hit the road without having to comply with NHTSA regulations–which otherwise would present a major hurdle to the deployment of autonomous vehicles.
The Senate is also considering self-driving car legislation, and that legislation apparently differs from the House bill quite dramatically. That means the two houses will have to reconcile their respective bills in conference, and observers of American politics (and watchers of Schoolhouse Rock) know that the bill that emerges from conference may end up looking nothing like either of the originals. Passage of the Senate bill seems highly likely, although congressional gridlock means it’s still possible the bill will not come up for a vote this year. We’ll see what (if anything) emerges from the Senate, at which point we’ll hopefully have a better sense of what the final law will look like.
This past Friday, the New York Times published an op-ed by Oren Etzioni, head of the Allen Institute for Artificial Intelligence. The op-ed’s title is “How to Regulate Artificial Intelligence,” but the piece actually sheds little light on “how” legal systems could go about regulating artificial intelligence. Instead, it articulates a few specific “rules” without providing any suggestions as to how those rules could be implemented and enforced.
Specifically, Etzioni proposes three laws for AI regulation, inspired in number (if not in content) by Isaac Asimov’s famous Three Laws of Robotics. Here’s Etzioni’s trio:
- “[A]n A.I. system must be subject to the full gamut of laws that apply to its human operator.” For example, “[w]e don’t want autonomous vehicles that drive through red lights” or AI systems that “engage in cyberbullying, stock manipulation or terrorist threats.”
- “[A]n A.I. system must clearly disclose that it is not human.”
- “[A]n A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.”
These rules do a nice job of laying out the things that we don’t want A.I. systems to do. But that’s the easy part of deciding how to regulate A.I. The harder part is figuring out who or what should be held legally responsible when A.I. systems do those things we don’t want them to do. Should we hold the designer(s) of the A.I. system accountable? Or the immediate operator? Or maybe the system itself? No one will argue with the point that an autonomous car shouldn’t run red lights. It’s far less clear who should be held responsible when it does.
Etzioni’s op-ed takes no discernible position on these issues. The first rule seems to imply that the A.I. system itself should be held responsible. But since A.I. systems are not legal persons, that’s a legal impossibility at present. And other portions of Etzioni’s op-ed seem to suggest either that the operator or the designer should be held responsible. The result is a piece that punts on the real challenge of figuring out “How to Regulate Artificial Intelligence.”
I don’t have any issue with where Etzioni wants us to go. I’m just not sure how he thinks we’re supposed to get there.
In 2014, Elon Musk’s warnings about the dangers and risks associated with AI helped spark the debate on what steps, if any, government and industry bodies should take to regulate the development of AI. Three years later, he’s still voicing his concerns, and this weekend he brought them up with some of the most influential politicians in America.
In a speech before the National Governors Association at their summer retreat in Rhode Island, Musk said that governments need to be proactive when it comes to managing the public risks of AI:
Here’s a quick roundup of law- and policy-relevant AI stories from the past couple of weeks.
A British privacy watchdog ruled that a group of London hospitals violated patient privacy laws by sharing information with Google DeepMind. Given the constant push by all the major tech companies for access to data (in no small part because more data is crucial in the age of learning AI systems), expect to see many more data privacy disputes like this in the future.
Canada’s CTV reports on the continued push by some AI experts for “explainable” and “transparent” AI systems, as well as the skeptical response of other AI experts about the feasibility of building AI systems that can “show their work” in a useful way. Peter Norvig points to a potentially interesting workaround: