The challenge of diversity in the AI world


Let me start this post with a personal anecdote.  At one of the first AI conferences I attended, literally every single one of the 15 or so speakers who presented on the conference’s first day was a man.  Finally, about three-quarters of the way through the two-day conference, a quartet of presentations on the social and economic impact of AI included two by women.  Those two women also participated in the panel discussion that immediately followed the presentations–except that “participated” might be too strong a word, because the panel discussion essentially consisted of the two men on the panel arguing with each other for twenty minutes.

It gave off the uncomfortable impression (to me, at least) that even when women are visible in the AI world, they are expected to fade into the background as soon as someone with a Y chromosome shows up. And the ethnic and racial diversity was scarcely better–I could probably count on one hand the number of people credentialed at the conference who were neither white nor Asian.

Fast forward to this past week, when the White House’s Office of Science and Technology Policy released a request for information (RFI) on the promise and potential pitfalls of AI.  A Request for Information on AI doesn’t mean that the White House heard about AI for the first time last week and is looking for someone to send them a link to the relevant Wikipedia articles.  Rather, a request for information issued by a governmental entity is a formal call for public comment on a particular topic that the entity wishes to examine more closely.

» Read more

Digital Analogues (Part 2): Would corporate personhood be a good model for “AI personhood”?

Source: Biotwist (via Deviant Art)


This post is part of the Digital Analogues series, which examines the various types of persons or entities to which legal systems might analogize artificial intelligence (AI) systems. This post is the first of two that examine corporate personhood as a potential model for “AI personhood.”  It is cross-posted on the website of the Future of Life Institute.  Future posts will examine how AI could also be analogized to pets, wild animals, employees, children, and prisoners.


Could the legal concept of “corporate personhood” serve as a model for how legal systems treat AI?  Ever since the US Supreme Court’s Citizens United decision, corporate personhood has been a controversial topic in American political and legal discourse.  Count me in the group that thinks that Citizens United was a horrible decision and that the law treats corporations a little too much like ‘real’ people.  But I think the fundamental concept of corporate personhood is still sound.  Moreover, the historical reasons that led to the creation of “corporate personhood”–namely, the desire to encourage ambitious investments and the new technologies that come with them–hold lessons for how we may eventually decide to treat AI.

An Overview of Corporate Personhood

For the uninitiated, here is a brief and oversimplified review of how and why corporations came to be treated like “persons” in the eyes of the law.  During late antiquity and the Middle Ages, a company generally had no separate legal existence apart from its owner (or, in the case of partnerships, owners).  Because a company was essentially an extension of its owners, owners were personally liable for companies’ debts and other liabilities.  In the legal system, this meant that a plaintiff who successfully sued a company would be able to go after all of an owner’s personal assets.

This unlimited liability exposure meant that entrepreneurs were unlikely to invest in a company unless they had a great deal of control over how that company would operate.  That, in turn, meant that companies rarely had more than a handful of owners, which made it very difficult to raise enough money for capital-intensive ventures.  When the rise of colonial empires and (especially) the Industrial Revolution created a need for larger companies capable of taking on more ambitious projects, the fact that companies had no separate legal existence and that their owners were subject to unlimited liability proved to be frustrating obstacles to economic growth.

The modern corporation was created to resolve these problems, primarily through two key features: legal personhood and limited liability.  “Personhood” means that under the law, corporations are treated like artificial persons, with a legal existence separate from their owners (shareholders).  Like natural persons (i.e., humans), corporations have the right to enter into contracts, own and dispose of assets, and file lawsuits–all in their own name.  “Limited liability” means that the owners of a corporation only stand to lose the amount of money, or capital, that they have invested in the corporation.  Plaintiffs cannot go after a corporate shareholder’s personal assets unless the shareholder engaged in unusual misconduct. Together, these features give a corporation a legal existence that is largely separate from its creators and owners.
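To make the liability contrast concrete, here is a toy sketch in Python of how much of a claim an owner can be made to cover under the two regimes.  It is purely illustrative–the function name and dollar figures are invented, and it deliberately ignores real-world wrinkles like veil-piercing for shareholder misconduct.

```python
def owner_exposure(claim: float, invested_capital: float,
                   personal_assets: float, limited_liability: bool) -> float:
    """Toy model of an owner's worst-case loss from a claim against the company.

    Under unlimited liability, a successful plaintiff can reach the owner's
    personal assets in addition to the invested capital.  Under limited
    liability, the owner's loss is capped at the capital invested.
    """
    if limited_liability:
        return min(claim, invested_capital)
    return min(claim, invested_capital + personal_assets)

# A $500,000 claim against a company whose owner invested $50,000
# and holds $400,000 in personal assets:
print(owner_exposure(500_000, 50_000, 400_000, limited_liability=False))  # 450000.0-ish: capital plus personal assets
print(owner_exposure(500_000, 50_000, 400_000, limited_liability=True))   # 50000: capped at invested capital
```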

» Read more

Digital Analogues (Intro): Artificial Intelligence Systems Should Be Treated Like…

This piece was originally published on Medium in Imaginary Papers, an online publication of Arizona State University’s Center for Science and the Imagination.  It is also cross-posted on the website of the Future of Life Institute.  Full credit to Corey Pressman for the title.


Artificial intelligence (A.I.) systems are becoming increasingly ubiquitous in our economy and society, and are being designed with an ever-increasing ability to operate free of direct human supervision. Algorithmic trading systems account for a huge and still-growing share of stock market transactions, and autonomous vehicles with A.I. “drivers” are already being tested on the roads. Because these systems operate with less human supervision and control than earlier technologies, their rising prevalence raises the question of how legal systems can ensure that victims receive compensation if (read: when) an A.I. system causes physical or economic harm during the course of its operations.

An increasingly hot topic in the still-small world of people interested in the legal issues surrounding A.I. is whether an autonomous A.I. system should be treated like a “person” in the eyes of the law. In other words, should we give A.I. systems some of the rights and responsibilities normally associated with natural persons (i.e., humans)? If so, precisely what rights should be granted to A.I. systems and what responsibilities should be imposed on them? Should human actors be assigned certain responsibilities in terms of directing and supervising the actions of autonomous systems? How should legal responsibility for an A.I. system’s behavior be allocated between the system itself and its human owner, operator, or supervisor?

» Read more

Doctors and Lawyers: There’s an AI app for that (but not really)

Source: twentysomethinglawyer.wordpress.com

Over the past few weeks, stories have broken suggesting that AI is making inroads into two of the world’s most venerable professions: law and medicine. A couple of weeks ago, reports emerged that a major law firm had hired an AI-based “lawyer,” and the Daily Mail ran a story this weekend on a new health app called Check, declaring: “It’s man versus robot in the battle of the doctors: World’s first ‘artificial intelligence’ medic set to be pitted against the real thing in landmark experiment for medicine.”  As always, the media headlines make these technologies sound much more impressive than they actually are.  Both of these technologies sound like more convenient versions of existing tools that doctors, lawyers, and non-professionals alike already use on a daily basis.

» Read more