On algorithms and fake news


The biggest “algorithms in the news” story of the past couple of months has been whether Facebook, Twitter, and Google’s ad-targeting algorithms facilitated, however inadvertently, Russian interference in the 2016 United States Presidential election.  For those who have been living under a rock, hundreds of thousands of targeted advertisements containing links to fake political “news” stories were delivered to users of the three behemoths’ social media and web services.  Many of the ads were microtargeted–aimed at particular voters in particular geographic regions.

This story–which has been bubbling under the surface for months–came to the forefront this past week as executives from the three companies were hauled in front of a Congressional committee and grilled about whether they were responsible for the spread of Russian misinformation (or, at the very least, whether they did enough to stop it).  The Economist’s cover story this week is on “Social media’s threat to democracy,” and its cover image shows a human hand wielding Facebook’s iconic “f” like a gun, smoke drifting off the end of the “barrel.”

» Read more

Personal updates and the House SELF DRIVE Act



I’ll start this week with a couple of personal updates before moving on to the biggest A.I. policy news item of the week–although the news is not quite as exciting as some media headlines have made it sound.

Personal Item #1: New gig

A few weeks ago, I switched jobs and joined a new law firm–Littler Mendelson, P.C.  In addition to being the world’s largest labor and employment law firm, Littler has a Robotics, A.I., and Automation practice group that Garry Mathiason (a legend in both employment law and robotics law) started a few years back.  I’ve hit the ground running with both the firm and its practice group during the past few weeks, and my busy stretch will continue for a while.  I’ll try to post updates as often as I can, particularly when big A.I. law and policy news hits, but updates will likely be on the light side for the next several weeks.

Personal Item #2: O’Reilly A.I. Conference

Next week, I’ll be presenting at the O’Reilly A.I. Conference in San Francisco along with Danny Guillory, the head of Global Diversity and Inclusion at Autodesk.  Our presentation is titled “Building an Unbiased A.I.: End-to-end diversity and inclusion in AI development.”  If you’ll be at the conference, come check it out.

Personal Item #3: Drone Law Today

One last personal item–I made my second appearance on Steve Hogan’s Drone Law Today podcast.  Steve and I had a fascinating conversation on the possibility of legal personhood for A.I.–both how feasible personhood is now (not very) and how society will react if and when robots do eventually start walking amongst us.  This was one of the most fun and interesting conversations I’ve ever had, so check it out.

A.I. policy news: House passes SELF DRIVE Act

I’ll close with the big news item relating to A.I. policy–the U.S. House of Representatives’ passage of the SELF DRIVE Act.  As its title suggests, the bill would clear the way for self-driving cars to hit the road without having to comply with certain NHTSA safety regulations that otherwise would present a major hurdle to the deployment of autonomous vehicles.

The Senate is also considering self-driving car legislation, and that legislation apparently differs quite dramatically from the House bill.  That means the two chambers will have to reconcile their respective bills in conference, and observers of American politics (and watchers of Schoolhouse Rock) know that the bill that emerges from conference may end up looking nothing like either of the originals.  Passage of the Senate bill seems highly likely, although congressional gridlock means it’s still possible the bill will not come up for a vote this year.  We’ll see what (if anything) emerges from the Senate, at which point we should have a better sense of what the final law will look like.

How to Regulate Artificial Intelligence Without Really Trying


This past Friday, the New York Times published an op-ed by Oren Etzioni, head of the Allen Institute for Artificial Intelligence.  The op-ed’s title is “How to Regulate Artificial Intelligence,” but the piece actually sheds little light on “how” legal systems could go about regulating artificial intelligence.  Instead, it articulates a few specific “rules” without providing any suggestions as to how those rules could be implemented and enforced.

Specifically, Etzioni proposes three laws for AI regulation, inspired in number (if not in content) by Isaac Asimov’s famous Three Laws of Robotics.  Here’s Etzioni’s trio:

  1. “[A]n A.I. system must be subject to the full gamut of laws that apply to its human operator.”  For example, “[w]e don’t want autonomous vehicles that drive through red lights” or AI systems that “engage in cyberbullying, stock manipulation or terrorist threats.”
  2. “[A]n A.I. system must clearly disclose that it is not human.”
  3. “[A]n A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.”

These rules do a nice job of laying out what we don’t want A.I. systems to do.  But that’s the easy part of deciding how to regulate A.I.  The harder part is figuring out who or what should be held legally responsible when A.I. systems do those things anyway.  Should we hold the designer(s) of the A.I. system accountable?  Or the immediate operator?  Or maybe the system itself?  No one disputes that an autonomous car shouldn’t run red lights.  It’s far less clear who should be held responsible when it does.

Etzioni’s op-ed takes no discernible position on these issues.  The first rule seems to imply that the A.I. system itself should be held responsible.  But since A.I. systems are not legal persons, that’s a legal impossibility at present.  And other portions of Etzioni’s op-ed seem to suggest either that the operator or the designer should be held responsible.  The result is a piece that punts on the real challenge of figuring out “How to Regulate Artificial Intelligence.”

I don’t have any issue with where Etzioni wants us to go.  I’m just not sure how he thinks we’re supposed to get there.

Elon Musk tells American governors why governments should be “proactive” about managing AI risks

In 2014, Elon Musk’s warnings about the dangers and risks associated with AI helped spark the debate on what steps, if any, government and industry bodies should take to regulate the development of AI.  Three years later, he’s still voicing his concerns, and this weekend he brought them up with some of the most influential politicians in America.

In a speech before the National Governors Association at their summer retreat in Rhode Island, Musk said that governments need to be proactive when it comes to managing the public risks of AI:

» Read more

Law and AI Quick Hits: Canada Day / Fourth of July edition

Credit: Randy Glasbergen


Here’s a quick roundup of law- and policy-relevant AI stories from the past couple weeks.

A British privacy watchdog (the Information Commissioner’s Office) ruled that a group of London hospitals violated patient privacy laws by sharing information with Google DeepMind.  Given how hard all the major tech companies are pushing for access to data (in no small part because more data is crucial in the age of learning AI systems), expect to see many more data privacy disputes like this in the future.


Canada’s CTV reports on the continued push by some AI experts for “explainable” and “transparent” AI systems, as well as other AI experts’ skepticism about the feasibility of building AI systems that can “show their work” in a useful way.  Peter Norvig points to a potentially interesting workaround:

» Read more

Duelling perspectives on how AI will affect economic inequality


Two opinion pieces were published this weekend–the second written in response to the first–on the issue of whether and how the rise of AI, robotics, and automation will affect another notable trend in modern society: economic inequality.  Both authors make some intriguing points.  But unfortunately, both also seem to have an unwarranted level of certainty about how AI will affect our economy and society.

» Read more

Is AI personhood already possible under U.S. LLC laws? (Part Three)


This is the final installment of a three-part series examining whether legal personhood is already possible under US laws governing limited liability companies (LLCs), which Shawn Bayern suggests provide an active path to personhood for autonomous systems. The first two posts in this series examined the two legal sources (New York’s LLC law and the Revised Uniform LLC Act) that Bayern used to support his contention that it is possible to use LLC laws to create an autonomous AI system with, for all intents and purposes, legal personhood.

The specific mechanism Bayern proposed is creating an LLC whose operating agreement effectively places the LLC under the control of an AI system, and then having every member of the LLC withdraw, leaving the system unsupervised.  I concluded from my own review of New York’s law and the laws of six states that have adopted RULLCA in some form that those laws do not provide a vehicle for creating LLCs of the type Bayern described.  The purpose of this final post is to examine a few other states’ LLC laws to see whether my conclusions for New York and the RULLCA states generalize to other state laws.

» Read more

On AI, prescription drugs, and managing the risks of things we don’t understand

Source: IWSMT


Last month, Technology Review published a good article discussing the “dark secret at the heart of AI”–namely, that “[n]o one really knows how the most advanced algorithms do what they do.”  The opacity of algorithmic systems has long drawn attention and criticism.  But the concern has broadened and deepened in the past few years, as breakthroughs in “deep learning” have led to a rapid increase in the sophistication of AI.  These deep learning systems operate using deep neural networks, which are designed to roughly simulate the way the human brain works–or, more precisely, the way we currently understand the human brain to work.

Such systems can effectively “program themselves,” creating much or most of the code through which they operate.  That machine-generated code can be so complex, in fact, that even the people who built and initially programmed the system may not be able to fully explain why it does what it does (a toy illustration of the point follows the excerpt below):

» Read more
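
To make the “program themselves” point concrete, here is a minimal sketch of my own–an illustration, not anything drawn from the Technology Review article.  It trains a tiny neural network on the XOR function using nothing but numpy.  The trained network gets every answer right, yet its learned “program” is nothing more than a few matrices of numbers; inspect them all you like, and they offer no human-readable account of how the network decides.  (Depending on the random seed, the toy net can occasionally get stuck and need more iterations.)

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # the four XOR inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # the four XOR outputs

# One hidden layer of four sigmoid units -- about as small as a neural net gets.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):  # plain gradient descent on squared error
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    d_out = (output - y) * output * (1 - output)    # backprop, output layer
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)  # backprop, hidden layer
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

print(output.round(2))  # close to [[0], [1], [1], [0]] -- the network "works"
print(W1)               # ...but its learned "code" is just opaque numbers

Nothing in W1 or W2 corresponds to a rule like “output 1 when exactly one input is 1.”  The behavior is smeared across all of the weights at once–and that is with a mere seventeen parameters.  Scale the same opacity up to the millions or billions of weights in a modern deep learning system and you have the article’s “dark secret.”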

Is AI personhood already possible under U.S. LLC laws? (Part Two: Uniform LLC Act)


This will, as it turns out, be a three-part series examining whether legal personhood is already possible under US laws governing limited liability companies (LLCs), which Shawn Bayern suggests provide an active path to personhood for autonomous systems.  Bayern relied primarily on two sources of law: New York’s LLC statute, and the Revised Uniform LLC Act (RULLCA).  Last week’s post explained why New York’s statute does not appear to provide a plausible path to AI personhood.  This week’s will take the same critical approach to RULLCA and, more importantly, the states that have adopted some variation of RULLCA.

» Read more

Is AI personhood already possible under U.S. LLC laws? (Part One: New York)

Fair warning: this will be a far longer and far more technical legal post than usual.  It is also part 1 of what will be a 3-part post.  Part 2 is posted here, and Part 3 is posted here.

One particularly hot topic in the world of law and AI is that of “artificial personhood.”  The usual framing of this issue is: “should we grant ‘legal personhood’ to A.I. systems and give them legal recognition in the same way that the law recognizes corporations and natural persons?”  This is, to be sure, an excellent question, and artificial personhood is one of my favorite topics to discuss and write about.

But some authors in the past few years, most notably Shawn Bayern, have gone one step further, claiming that existing laws already permit the recognition of AI personhood for all intents and purposes.  Bayern focuses his attention primarily on the prospect of a “Zero-Member” or “memberless” LLC.  (“Members” of an LLC are roughly analogous to partners in a partnership).

» Read more
