Is AI personhood already possible under U.S. LLC laws? (Part Three)


This is the final installment of a three-part series examining whether legal personhood is already possible under US laws governing limited liability companies (LLCs), which Shawn Bayern suggests provide a viable path to personhood for autonomous systems.  The first two posts in this series examined the two legal sources (New York’s LLC law and the Revised Uniform LLC Act) that Bayern used to support his contention that it is possible to use LLC laws to create an autonomous AI system with, for all intents and purposes, legal personhood.

The specific mechanism that Bayern proposed is creating an LLC whose operating agreement effectively places the LLC under the control of an AI system, and then having every member of the LLC withdraw, leaving the system effectively unsupervised.  I concluded from my own review of New York’s law and the laws of six states that have adopted RULLCA in some form that they do not provide a vehicle for creating LLCs of the type Bayern described.  The purpose of this final post is to examine a few other states’ LLC laws to see whether my conclusions for New York and the RULLCA states generalize to other state laws.

» Read more

On AI, prescription drugs, and managing the risks of things we don’t understand

Source: IWSMT


Last month, Technology Review published a good article discussing the “dark secret at the heart of AI”–namely, that “[n]o one really knows how the most advanced algorithms do what they do.”  The opacity of algorithmic systems is something that has long drawn attention and criticism.  But it is a concern that has broadened and deepened in the past few years, during which breakthroughs in “deep learning” have led to a rapid increase in the sophistication of AI.  These deep learning systems operate using deep neural networks that are designed to roughly simulate the way the human brain works–or, to be more precise, to simulate the way the human brain works as we currently understand it.
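
To make that opacity concrete, here is a minimal sketch–my own toy illustration, not something from the article–of the kind of system at issue: a tiny neural network that learns the XOR function through gradient descent.  Even at this scale, everything the network “knows” ends up encoded in arrays of floating-point weights that no human wrote and no human can directly read.

```python
# A toy two-layer neural network trained on XOR -- a minimal sketch of
# why learned behavior is hard to inspect.  All names are my own.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden-layer parameters
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output-layer parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):                    # plain batch gradient descent
    h = sigmoid(X @ W1 + b1)              # hidden activations
    out = sigmoid(h @ W2 + b2)            # network predictions
    d_out = (out - y) * out * (1 - out)   # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)    # error pushed back to hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # typically close to [[0], [1], [1], [0]] -- it works
print(W1, W2)        # ...but *why* it works is buried in these numbers
```

Now scale those couple dozen weights up to the millions of parameters in a modern deep learning system, and the explanatory problem comes into focus.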

Such systems can effectively “program themselves” by creating much or most of the code through which they operate.  The code generated by such systems can be very complex.  It can be so complex, in fact, that even the people who built and initially programmed the system may not be able to fully explain why the systems do what they do:

» Read more

Is AI personhood already possible under U.S. LLC laws? (Part Two: Uniform LLC Act)


This will, as it turns out, be a three-part series examining whether legal personhood is already possible under US laws governing limited liability companies (LLCs), which Shawn Bayern suggests provide a viable path to personhood for autonomous systems.  Bayern relied primarily on two sources of law: New York’s LLC statute and the Revised Uniform LLC Act (RULLCA).  Last week’s post explained why New York’s statute does not appear to provide a plausible path to AI personhood.  This week’s post will take the same critical approach to RULLCA and, more importantly, to the states that have adopted some variation of RULLCA.

» Read more

Is AI personhood already possible under U.S. LLC laws? (Part One: New York)

Fair warning: this will be a far longer and far more technical legal post than usual.  It is also Part 1 of what will be a three-part post.  Part 2 is posted here, and Part 3 is posted here.

One particularly hot topic in the world of law and AI is that of “artificial personhood.”  The usual framing of this issue is: “should we grant ‘legal personhood’ to AI systems and give them legal recognition in the same way that the law recognizes corporations and natural persons?”  This is, to be sure, an excellent question, and artificial personhood is one of my favorite topics to discuss and write about.

But some authors in the past few years, most notably Shawn Bayern, have gone one step further, claiming that existing laws already permit the recognition of AI personhood for all intents and purposes.  Bayern focuses his attention primarily on the prospect of a “Zero-Member” or “memberless” LLC.  (“Members” of an LLC are roughly analogous to partners in a partnership.)

» Read more

Questions from a young reader

Credit: Tom Toles, The Buffalo News, 1997


Last week I got an email from Will, an 8th Grader from Big D (little A, double L, A, S).  He is in a class where the students get to choose a topic to write about, and he chose AI because he had “always wondered about what makes a machine better than humans in an area.”

Will emailed me wanting to know if I could answer some questions he had about AI and its impact on our society.  I happily agreed, and he responded by sending five excellent questions.  After getting approval from Will and his teacher (thanks, Ms. Peterson!), I am posting Will’s questions and my responses below.  (I also sent Will an email with much shorter responses so that he wouldn’t fall asleep halfway through my answers).

Here they are:

 

What are your thoughts on the rapidly increasing investment in AI of huge companies such as Google and Microsoft?

This is one of the hottest topics in the world of AI policy right now.  In some ways, the investment in AI by these companies is a good thing.  There are so many things we could do with better AI systems, from having more accurate weather forecasts to reducing traffic on highways to helping doctors come up with better diagnoses when someone is sick.  Those things would bring great benefits to lots of people, and they could happen much more quickly if big companies focus their time and money on improving AI.

On the other hand, there are always dangers when big companies get too much power.  The usual way that we deal with those dangers has been through government action.  But modern AI technologies are very complicated—so complicated that sometimes even the people who design them may not totally understand why they do what they do!  It is hard to come up with good rules for things that no one completely understands.

» Read more

California’s latest autonomous vehicle regulations

Credit: Mike Keefe


The ABA’s Science & Technology Law section has an AI and Robotics committee that holds a monthly teleconference “meetup” where a guest speaker presents on an AI/Robotics-related legal issue.  From here forward, I’ll be making a brief post on each monthly meetup.

For the April meetup, Michele Kyrouz gave a presentation on California’s updated autonomous vehicle (AV) regulations.  I wrote a post last fall discussing the new rules governing AV advertising and marketing, and had intended to write a longer post discussing the regulatory changes as a whole.  This month’s meetup gave me the kick in the pants I needed to actually do that.

» Read more

WeRobot 2017: Fault, liability, and regulation


The last panel of WeRobot 2017 produced what were perhaps my two favorite papers presented at the conference: “An Education Theory of Fault for Autonomous Systems” by Bill Smart and Cindy Grimm of Oregon State University’s Robotics Program and Woodrow Hartzog of Samford University’s Cumberland School of Law, and “Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence,” by Michael Guihot, Anne Matthew, and Nicolas Suzor of the Queensland University of Technology.

It’s not surprising that both of these papers made an impression on me because each dealt with topics near and dear to my nerdy heart.  “An Education Theory of Fault” addresses the thorny issue of how to determine culpability and responsibility when an autonomous system causes harm, in light of the inherent difficulty of predicting how such systems will operate.  “Nudging Robots” deals with the equally challenging issue of how to design a regulatory system that can manage the risks associated with AI.  Not incidentally, those are perhaps the two issues to which I have devoted the most attention in my own writings (both blog and scholarly).  And these two papers represent some of the strongest analysis I have seen on those issues.

» Read more

Poll shows that support for national and international regulation of AI is broad, but is it deep?

Source: Calvin and Hobbes, Bill Watterson, Oct 27, 1987


Yesterday, Morning Consult released perhaps the most wide-ranging public survey ever conducted on AI-related issues.  In the poll, 2200 Americans answered 39 poll questions about AI (plus a number of questions on other issues).

The headline result that Morning Consult is highlighting is that overwhelming majorities of respondents supported national regulation (71% support) and international regulation (67%) of AI.  Thirty-seven percent strongly support national regulation, compared to just 4% who strongly oppose it (for international, those numbers were 35% and 5%, respectively).

Perhaps even more strikingly, the proportion of respondents who support regulation was very consistent across political and socioeconomic lines.  A full 74% of Republicans, 73% of Democrats, and 65% of independents support national regulations, as do 69% of people making less than $50k/yr, 73% making $50k-$100k, and 65% of those who make more than $100k.  Education likewise matters little: 70% of people without a college degree support national regulation, along with 74% of college grads and 70% of respondents with post-graduate degrees.  Women (75%) were slightly more likely to support such regulations than men (67%).

» Read more

The Return of the Blog: WeRobot 2017


After a long layoff, Law and AI returns with some brief takes on the 6th annual WeRobot Conference, which was held this past weekend at Yale Law School’s Information Society Project.  If you want a true blow-by-blow account of the proceedings, check out Amanda Levendowski’s Twitter feed.  Consider the below a summary of things that piqued my interest, which will not necessarily be the same as the things that prove to be the most important technical or policy takeaways from the conference.

Luisa Scarcella and Michaela Georgina Lexer: The effects of artificial intelligence on labor markets – A critical analysis of solution models from a tax law and social security law perspective

(Paper, Presentation)

Ms. Scarcella and Ms. Lexer presented perhaps the most topically distinctive paper of the conference.  Their paper addresses the potential macroeconomic, social, and government-finance impacts of automation.

» Read more

Bias


An interesting pair of stories popped up over the past month covering how the use of AI could affect bias in our society.  This is a fascinating topic from a “law and AI” standpoint due to the sheer number of laws in place worldwide that prohibit certain forms of bias and discrimination in a variety of settings, ranging from employment to hotel accommodations to the awarding of government contracts.

At first blush, one might think that having an automated system make decisions would reduce the risk of bias, or at least those forms of bias that the law prohibits.  After all, such a system would not be susceptible to many of the most obvious types of biases and prejudices that afflict human decision-makers.  A machine would not have a financial interest in the outcome of any decision (at least not yet), nor would it be susceptible to the dark impulses of racism and sexism.  A machine likewise would presumably be less susceptible to, if not immune from, the more subtle and sometimes even unconscious manifestations of bias that emotion-driven humans exhibit.

Those advantages led Sharon Florentine to pen an article published last month in CIO with a bold headline: “How artificial intelligence can eliminate bias in hiring.”  That title was probably clickbait to a certain extent because the article itself was fairly measured in its assessment of the potential impact of AI on workplace discrimination.  The thesis of the article is that AI systems could be used indirectly to reduce bias by using machine learning to “be an objective observer to screen for bias patterns.”  In other words, AI systems could act as something of a bias detector, raising alerts when a person or company’s decision-making patterns display signs of bias or prejudice.
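
Florentine’s article envisions machine learning models doing that screening, but even a very simple statistical check conveys the idea.  Below is a minimal sketch–my own hypothetical code, not anything from the article–that flags hiring patterns failing the EEOC’s “four-fifths rule” for adverse impact:

```python
# A hypothetical "bias detector" at its crudest: flag any group whose
# selection rate falls below 80% of the highest group's rate (the
# EEOC's four-fifths rule).  The data and group names are invented.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_hired) pairs."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in decisions:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

def four_fifths_flags(decisions):
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}

decisions = ([("A", True)] * 40 + [("A", False)] * 60 +
             [("B", True)] * 20 + [("B", False)] * 80)
print(four_fifths_flags(decisions))  # {'A': False, 'B': True}
```

A real screening system would have to control for qualifications and other legitimate factors–that is where the machine learning comes in–but the basic move is the same: treat a decision-maker’s track record as data and look for the patterns the law forbids.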


Kristian Hammond over at TechCrunch, on the other hand, wrote an article describing how AI systems can actually generate or reinforce bias.  He goes over five potential sources of bias in AI systems:

  • “Data-driven bias.”  This occurs when an AI system that learns from a “training set” of data is fed a skewed or unrepresentative set.  Think of the Beauty.ai “pageant.”  (I offer a toy demonstration of this after the list.)
  • “Bias from interaction.”  This occurs when a machine that learns from interactions with other users ends up incorporating those users’ biases.  Tay the Racist Chatbot is an obvious example of this.
  • “Emergent bias.”  Think of this as self-reinforcing bias.  It’s what happens when Facebook’s news feed algorithms recognize that a particular user likes reading articles from a particular political viewpoint and, because they are programmed to predict what that user might want to read next, end up giving the user more and more stories from that viewpoint.  It seems to me that this is pretty much an extension of the first two types of bias.
  • “Similarity bias.”  Hammond’s description makes this sound very similar to emergent bias, using the example of Google News, which will often turn up similar stories in response to a user search query.  This can often lead to many stories being presented that are written from the same point of view while excluding stories written from a contrary one.
  • “Conflicting goals bias.”  I honestly have no idea what this one is about.  The example Hammond provides does not give me a clear sense of what this type of bias is supposed to be.
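
As promised above, here is a toy demonstration–my own invented data and code, not Hammond’s–of the first and most straightforward category, data-driven bias.  The “learner” below does nothing more than count outcomes in its training data, and that is precisely the point: a skewed history produces a skewed model without a single line of biased logic in the algorithm itself.

```python
# Data-driven bias in miniature: a frequency-counting "model" trained
# on skewed historical hiring data.  All names and numbers are invented.
from collections import Counter

def train(examples):
    """Learn P(hired | zip_code) by counting -- a stand-in for any learner."""
    counts = Counter(examples)
    def predict(zip_code):
        yes, no = counts[(zip_code, True)], counts[(zip_code, False)]
        return yes / (yes + no)
    return predict

# Equally qualified applicants, but past decisions favored one zip code.
# The skew lives in the data, not in the algorithm.
history = ([("10001", True)] * 80 + [("10001", False)] * 20 +
           [("60629", True)] * 30 + [("60629", False)] * 70)

model = train(history)
print(model("10001"))  # 0.8
print(model("60629"))  # 0.3 -- the bias in the data comes straight back out
```

Swap in any genuine learning algorithm and the same dynamic holds: the model faithfully reproduces whatever regularities–legitimate or invidious–its training data contains.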

Hammond ended on a positive note, suggesting that knowledge of these potential sources of bias will allow us to design around them: “Perhaps we will never be able to create systems and tools that are perfectly objective, but at least they will be less biased than we are.”

I have a feeling Hammond’s piece was meant to be much longer but ultimately was cut down for readability.  I’d be interested to see a longer exploration of this subject because of the obvious legal implications of AI-generated bias…especially given that I will be writing a paper on the subject for this year’s WeRobot conference.
