The challenge of diversity in the AI world


Let me start this post with a personal anecdote.  At one of the first AI conferences I attended, literally every single one of the 15 or so speakers who presented on the conference’s first day was a man.  Finally, about three-quarters of the way through the two-day conference, a quartet of presentations on the social and economic impact of AI included two by women.  Those two women also participated in the panel discussion that immediately followed the presentations–except that “participated” might be too strong a word, because the panel discussion essentially consisted of the two men on the panel arguing with each other for twenty minutes.

It gave off the uncomfortable impression (to me, at least) that even when women are seen in the AI world, they should be expected to fade into the background as soon as someone with a Y chromosome shows up. And the ethnic and racial diversity was scarcely better–I could probably count on one hand the number of people credentialed at the conference who were not either white or Asian.

Fast forward to this past week, when the White House’s Office of Science and Technology Policy released a request for information (RFI) on the promise and potential pitfalls of AI.  A Request for Information on AI doesn’t mean that the White House heard about AI for the first time last week and is looking for someone to send them the link to relevant articles on Wikipedia.  Rather, a request for information issued by a governmental entity is a formal call for public comment on a particular topic that the entity wishes to examine more closely.

» Read more

Digital Analogues (Part 2): Would corporate personhood be a good model for “AI personhood”?

Source: Biotwist (via Deviant Art)


This post is part of the Digital Analogues series, which examines the various types of persons or entities to which legal systems might analogize artificial intelligence (AI) systems. It is the first of two posts examining corporate personhood as a potential model for “AI personhood,” and it is cross-posted on the website of the Future of Life Institute.  Future posts will examine how AI could also be analogized to pets, wild animals, employees, children, and prisoners.


Could the legal concept of “corporate personhood” serve as a model for how legal systems treat AI?  Ever since the US Supreme Court’s Citizens United decision, corporate personhood has been a controversial topic in American political and legal discourse.  Count me in the group that thinks Citizens United was a horrible decision and that the law treats corporations a little too much like ‘real’ people.  But I think the fundamental concept of corporate personhood is still sound.  Moreover, the historical reasons that led to the creation of “corporate personhood”–namely, the desire to encourage ambitious investments and the new technologies that come with them–hold lessons for how we may eventually decide to treat AI.

An Overview of Corporate Personhood

For the uninitiated, here is a brief and oversimplified review of how and why corporations came to be treated like “persons” in the eyes of the law.  During late antiquity and the Middle Ages, a company generally had no separate legal existence apart from its owner (or, in the case of partnerships, owners).  Because a company was essentially an extension of its owners, the owners were personally liable for its debts and other liabilities.  In the legal system, this meant that a plaintiff who successfully sued a company could go after all of an owner’s personal assets.

This unlimited liability exposure meant that entrepreneurs were unlikely to invest in a company unless they could exercise a great deal of control over how that company would operate.  That, in turn, meant that companies rarely had more than a handful of owners, which made it very difficult to raise enough money for capital-intensive ventures.  When the rise of colonial empires and (especially) the Industrial Revolution created a need for larger companies capable of taking on more ambitious projects, companies’ lack of separate legal existence and their owners’ unlimited liability proved to be frustrating obstacles to economic growth.

The modern corporation was created to resolve these problems, primarily through two key features: legal personhood and limited liability.  “Personhood” means that under the law, corporations are treated like artificial persons, with a legal existence separate from their owners (shareholders).  Like natural persons (i.e., humans), corporations have the right to enter into contracts, own and dispose of assets, and file lawsuits–all in their own name.  “Limited liability” means that the owners of a corporation only stand to lose the amount of money, or capital, that they have invested in the corporation.  Plaintiffs cannot go after a corporate shareholder’s personal assets unless the shareholder engaged in unusual misconduct. Together, these features give a corporation a legal existence that is largely separate from its creators and owners.

» Read more

Digital Analogues (Intro): Artificial Intelligence Systems Should Be Treated Like…

This piece was originally published on Medium in Imaginary Papers, an online publication of Arizona State University’s Center for Science and the Imagination.  It is also cross-posted on the website of the Future of Life Institute.  Full credit to Corey Pressman for the title.


Artificial intelligence (A.I.) systems are becoming ever more common in our economy and society, and are being designed with an ever-increasing ability to operate free of direct human supervision. Algorithmic trading systems account for a huge and still-growing share of stock market transactions, and autonomous vehicles with A.I. “drivers” are already being tested on the roads. Because these systems operate with less human supervision and control than earlier technologies, their rising prevalence raises the question of how legal systems can ensure that victims receive compensation if (read: when) an A.I. system causes physical or economic harm during the course of its operations.

An increasingly hot topic in the still-small world of people interested in the legal issues surrounding A.I. is whether an autonomous A.I. system should be treated like a “person” in the eyes of the law. In other words, should we give A.I. systems some of the rights and responsibilities normally associated with natural persons (i.e., humans)? If so, precisely what rights should be granted to A.I. systems and what responsibilities should be imposed on them? Should human actors be assigned certain responsibilities in terms of directing and supervising the actions of autonomous systems? How should legal responsibility for an A.I. system’s behavior be allocated between the system itself and its human owner, operator, or supervisor?

» Read more

Doctors and Lawyers: There’s an AI app for that (but not really)

Source: twentysomethinglawyer.wordpress.com

Over the past few weeks, stories have emerged suggesting that AI is breaking through into two of the world’s most venerable professions: law and medicine. A couple of weeks ago, reports said that a major law firm had hired an AI-based “lawyer,” and the Daily Mail ran a story this weekend on a new health app called Check, declaring: “It’s man versus robot in the battle of the doctors: World’s first ‘artificial intelligence’ medic set to be pitted against the real thing in landmark experiment for medicine.”  As always, the media headlines make these technologies sound much more impressive than they actually are.  Both technologies sound like more convenient versions of existing tools that doctors, lawyers, and non-professionals alike already use on a daily basis.

» Read more

Notes from the 2016 Governance of Emerging Technologies Conference

Source: Frank Cotham/The New Yorker


This past week, I attended the fourth annual Governance of Emerging Technologies Conference at Arizona State’s Sandra Day O’Connor School of Law.  The symposium’s format included a number of sessions that ran concurrently, so I ended up having to miss several presentations that I wanted to see.  But the ones I did manage to catch were very informative.  Here are some thoughts.

The conference was a sobering reminder of why AI is not a major topic on the agenda of governments and international organizations around the world: there are a whole lot of emerging technologies posing new ethical questions and creating new sources of risk.  Nanotechnology, bioengineering, and the “Internet of Things” all are raising new issues that policymakers must analyze.  To make matters worse, governments the world over are not even acting with the necessary urgency on comparatively longstanding sources of catastrophic risk such as climate change, global financial security, political and social instability in the Middle East, and both civil and military nuclear security.  So it shouldn’t be surprising that AI is not at the top of the agenda in Washington, Brussels, Beijing, or anywhere else outside Silicon Valley, and there is no obvious way to make AI-writ-large a higher policy priority in the immediate future without engaging in disingenuous scaremongering.

» Read more

NHTSA and Autonomous Vehicles (Part 3): Hearings and Strange Bedfellows



This is the final segment in a three-part series on NHTSA and autonomous vehicles.  The first two parts can be read here and here.


So what went down at NHTSA’s two public hearings?  I could not find video of the first hearing, which was held in Washington DC, so I’ve relied on press reports of the goings-on there.  The full video of the second hearing, which was held in Silicon Valley, is available on YouTube.

Most of the speakers at these two hearings were representatives of tech and automotive industry companies, trade organizations, and disability advocacy groups who touted the promise and benefits that AV technologies will bring.  Already, vehicles with automated features have a level of situational awareness that even the most alert human driver could never hope to match.  Sensors and cameras can detect everything that is going on around the vehicle in every direction–and AI systems can ‘focus’ on all that information more-or-less simultaneously.  Human drivers, by contrast, have a limited field of vision and have trouble maintaining awareness of everything that is going on even in that narrow field.
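To make that “simultaneous focus” point concrete, here is a toy sketch–entirely hypothetical, with sensor names, data types, and fusion logic of my own invention rather than any manufacturer’s actual perception stack–of how detections from sensors covering different arcs around a vehicle might be merged into a single all-around picture on every update cycle:

```python
# Hypothetical sketch of fusing per-sensor detections into one 360-degree view.
# The sensor names and fields of view are illustrative, not a real AV stack.
from dataclasses import dataclass

@dataclass
class Detection:
    bearing_deg: float  # direction of the object relative to the vehicle's heading
    range_m: float      # distance to the object, in meters
    label: str          # e.g., "pedestrian", "vehicle", "cyclist"

def fuse_detections(sensor_feeds: dict) -> list:
    """Merge every sensor's detections into one all-around picture.

    A human driver attends to one arc at a time; here, every feed is
    considered on every cycle, so nothing falls outside the field of view.
    """
    fused = []
    for feed in sensor_feeds.values():
        fused.extend(feed)
    # Sort nearest-first, since the closest objects are the most urgent.
    return sorted(fused, key=lambda d: d.range_m)

feeds = {
    "front_camera": [Detection(bearing_deg=2.0, range_m=40.0, label="vehicle")],
    "rear_radar":   [Detection(bearing_deg=178.0, range_m=15.0, label="vehicle")],
    "left_lidar":   [Detection(bearing_deg=270.0, range_m=3.5, label="cyclist")],
}
print(fuse_detections(feeds)[0].label)  # "cyclist" -- the nearest hazard
```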

AI drivers also won’t get drunk, get tired, or text while driving.  (Well, actually they could send texts while driving, but unlike with humans, doing so would not hinder their ability to operate a vehicle safely.)  Their reaction time can make human drivers look like sloths.  Perhaps most significantly, they could give people with physical disabilities the ability to commute and travel without the need to rely on other people to drive them.  If you follow developments in the field, all of that is old news–but that does not make it any less enticing.

» Read more

NHTSA and Autonomous Vehicles (Part 2): Will Regulations (Or Lack Thereof) Keep Automated Vehicle Development Stuck in Neutral?

Source: DailyMail.com


This is part 2 of a series on NHTSA and Autonomous Vehicles.  Part 1, published May 8, discussed the five levels of automation that NHTSA established, with Level 0 being a completely human-controlled car and Level 4 being a vehicle capable of completely autonomous operation on the roads.  Part 3 discusses NHTSA’s April 2016 public hearings on the subject.


I must confess that I am very much an optimist about the promise of Level 4 vehicles–and not just because I really, really love the idea of being able to do stuff on my commute to work without having to scramble for one of the 2 good seats on a Portland bus (yes, there are always only 2). The potential benefits that autonomous vehicles could bring are already well-publicized, so I won’t spend much time rehashing them here.  Suffice it to say that, in addition to the added convenience of AVs, such vehicles should prove far safer than vehicles controlled by human drivers and would give persons with physical disabilities a much greater ability to get around without having to rely on other people.

But while I am optimistic about the benefits of Level 4 vehicles, I am not optimistic that NHTSA–and NHTSA’s counterparts in other countries–will act quickly enough to ensure that Level 4 vehicles can hit the road as soon as they could and should.  As prior posts have noted, few federal regulations (i.e., rules that appear in the Federal Motor Vehicle Safety Standards) would present a significant obstacle to vehicles with up to Level 3 automation.  But going from Level 3 to Level 4 may present difficulties–especially if, as in the case of Google’s self-driving car, the vehicle is designed in a manner (e.g., without a steering wheel, foot brakes, or gear shift) that makes it impossible for a human driver to take control of the vehicle.

The difficulty of changing regulations to allow Level 4 vehicles creates a risk that automated vehicle technology will be stuck at Level 2 and Level 3 for a long time–and that might be worse than the current mix of Level 0, Level 1, and ‘weak’ Level 2 vehicles that fill most of the developed world’s roads.

» Read more

NHTSA and Autonomous Vehicles (Part 1): The 5 levels of automation



During the last month, the National Highway Traffic Safety Administration (“NHTSA,” the agency that didn’t redefine “driver” in February) held two public hearings on autonomous vehicles (“AVs”), one in Washington DC on April 8 and another at Stanford, in the heart of Silicon Valley, on April 27.  As you might expect, press reports of the two events suggested that the Silicon Valley gathering attracted voices more enthusiastic about the promise of AVs and more intent on urging NHTSA not to let regulation stifle innovation in the field.

These public hearings are an important and positive sign that NHTSA is serious about moving forward with the regulatory changes that will be necessary before autonomous vehicles become available to the general public.  But before turning to what went down at these hearings (and to buy some time for me to watch the full video of the second hearing), it’s worth pausing to give some background on NHTSA’s involvement with autonomous vehicles.

NHTSA has shown increasing interest in automation since 2013, when it issued an official policy statement that defined five levels of vehicle automation.
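For quick reference, here is a minimal sketch of that taxonomy in code–the one-line descriptions are my paraphrase of NHTSA’s 2013 policy statement, not the agency’s official wording:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """NHTSA's 2013 vehicle automation levels (descriptions paraphrased)."""
    NO_AUTOMATION = 0          # Driver is in complete control at all times
    FUNCTION_SPECIFIC = 1      # A single automated function, e.g., cruise control
    COMBINED_FUNCTION = 2      # At least two automated functions working in unison
    LIMITED_SELF_DRIVING = 3   # Car drives itself; driver must be ready to take over
    FULL_SELF_DRIVING = 4      # Vehicle performs all driving functions for the trip
```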

» Read more

Too smart for our own good?

Source: Dilbert by Scott Adams (February 11, 1992)


Two stories this past week caught my eye.  The first is Nvidia’s unveiling of the new, AI-focused Tesla P100 computer chip.  Introduced at April’s annual GPU Technology Conference, the P100 is the largest computer chip in history in terms of the number of transistors, “the product of around $2.5 billion worth of research and development at the hands of thousands of computer engineers.”  Nvidia CEO Jen-Hsun Huang said that the chip was designed and dedicated “to accelerating AI; dedicated to accelerating deep learning.”  But the revolutionary potential of the P100 depends on AI engineers coming up with new algorithms that can leverage the full range of the chip’s capabilities.  Absent such advances, Huang says, the P100 would end up being the “world’s most expensive brick.”

The development of the P100 demonstrates, in case we needed a reminder, the immense technical advances that have been made in computing power in recent years and highlights the possibilities those developments raise for AI systems that can be designed to perform (and even learn to perform) an ever-increasing variety of human tasks.  But an essay by Adam Elkus that appeared this week in Slate questions whether we have the ability–or for that matter, will ever have the ability–to program an AI system with human values.

I’ll open with a necessary criticism: much of Elkus’s essay seems like an extended effort to annoy Stuart Russell.  (The most amusing moment in the essay comes when Elkus suggests that Russell, who literally wrote the book on AI, needs to bone up on his AI history.)  Elkus devotes much of his virtual ink to cobbling together out-of-context snippets from a year-old interview that Russell gave to Quanta Magazine and using those snippets to form strawman arguments that he then attributes to Russell.  But despite the strawmen and snide comments, Elkus makes some good points on the vexing issue of how to program ethics and morality into AI systems.

» Read more

Selective Revelation: Should we let robojudges issue surveillance and search warrants?

Credit: SimplySteno Court Reporting Blog


AI systems have an increasing ability to perform legal tasks that used to be within the exclusive province of lawyers.  Anecdotally, it seems that both lawyers and the general public are getting more and more comfortable with the idea that legal grunt work–drafting contracts, reviewing voluminous documents, and the like–can be performed by computers with varying levels of (human) lawyer oversight.  But the idea of a machine acting as a judge is another matter entirely; people don’t seem keen on assigning to machines the task of making subjective legal decisions on matters such as liability, guilt, and punishment.

Consequently, I was intrigued when Thomas Dietterich pointed me to the work of computer scientist Dr. Latanya Sweeney on “selective revelation.”  Sweeney, who serves as Professor of Government and Technology in Residence at Harvard, came up with selective revelation as a method of what she terms “privacy-preserving surveillance,” i.e., balancing privacy protection with the need for surveillance entities to collect and share electronic data that might reveal potential security threats or criminal activity.
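Sweeney’s framework is algorithmic at heart.  Purely to illustrate the flavor of such a model–this is my own deliberately crude toy sketch, with invented factors, weights, and cutoffs, not her design–imagine a function that releases identifying detail only in proportion to the strength of the showing, much as a judge weighs probable cause against the intrusiveness of a search:

```python
# Toy sketch of a "selective revelation" balancing test.  The factors,
# weights, and thresholds here are invented for illustration; Sweeney's
# actual model is considerably more nuanced.

def revelation_level(evidence_strength: float, intrusiveness: float) -> str:
    """Decide how much identifying detail to reveal about surveilled data.

    evidence_strength: 0.0-1.0, strength of the showing of suspicion
    intrusiveness:     0.0-1.0, how privacy-invasive the requested data is
    """
    # The stronger the showing relative to the intrusion, the more is revealed.
    score = evidence_strength - 0.5 * intrusiveness
    if score >= 0.5:
        return "identified"      # full records, as with a granted warrant
    elif score >= 0.2:
        return "pseudonymized"   # patterns visible, identities masked
    else:
        return "aggregate only"  # statistical summaries, no individual data

print(revelation_level(evidence_strength=0.9, intrusiveness=0.4))  # identified
print(revelation_level(evidence_strength=0.3, intrusiveness=0.6))  # aggregate only
```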

She proposes, in essence, creating a computer model that would mimic, albeit in a nuanced fashion, the balancing test that human judges undertake when determining whether to authorize a wiretap or issue a search warrant:

» Read more
