The Partnership on AI: A step in the right direction

Well, by far the biggest AI news story to hit the papers this week was the announcement that a collection of tech industry heavyweights–Microsoft, IBM, Amazon, Facebook, and Google–are joining forces to form a “Partnership on AI”:

The group’s goal is to create the first industry-led consortium that would also include academic and nonprofit researchers, leading the effort to essentially ensure AI’s trustworthiness: driving research toward technologies that are ethical, secure and reliable — that help rather than hurt — while also helping to diffuse fears and misperceptions about it.

“We plan to discuss, we plan to publish, we plan to also potentially sponsor some research projects that dive into specific issues,” Banavar says, “but foremost, this is a platform for open discussion across industry.”

There’s no question this is welcome news.  Each of the five companies that formed this group has been part of the “AI arms race” of the past few years, in which major tech companies have poured money into expanding their AI research, both by acquiring other companies and by recruiting talent.  To a mostly-outside observer such as myself, it seemed for a time like the arms race was becoming an end unto itself–companies were investing without much thought for the long-term implications of AI development.  The Partnership is a good sign that the titans of tech are, indeed, seeing the bigger picture.

» Read more

A peek at how AI could inadvertently reinforce discriminatory policies

Source: Before It's News


The most interesting story that came up during Law and AI’s little hiatus came from decidedly outside the usual topics covered here–the world of beauty pageants.  Well, sort of:

An online beauty contest called Beauty.ai, run by Youth Laboratories . . . solicited 600,000 entries by saying they would be graded by artificial intelligence. The algorithm would look at wrinkles, face symmetry, amount of pimples and blemishes, race, and perceived age.

Sounds harmless enough, aside from the whole “we’re teaching computers to objectify women” aspect.  But the results of this contest carry some troubling implications.

Of the 44 winners in the pageant, 36 (or 82%) were white.  In other words, white people were disproportionately represented among the pageant’s “winners.”  This couldn’t help but remind me of discrimination law.  The algorithm’s beauty assessments had what lawyers would recognize as a disparate impact–that is, although the algorithm seemed objective and non-discriminatory at first glance, it ultimately favored whites at the expense of other racial groups.

The concept of disparate impact is best known in employment law and in college admissions, where a company or college can be liable for discrimination if its policies have a disproportionate negative impact on protected groups, even if the people who came up with the policy had no discriminatory intent.  For example, a hypothetical engineering company might select which applicants to interview for a set of open positions using a formula that awards 1 point for a college degree in engineering, 3 points for a Master’s degree, 6 points for a doctorate, and additional points for certain prestigious fellowships.  Facially, this system appears neutral in terms of race, gender, and socioeconomic status.  But in its outcomes, it may (and probably would) end up having a disparate impact if the components of the score are things that wealthy white men are disproportionately likely to have due to their social and economic advantages.
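To make this concrete, here is a minimal sketch of such a facially neutral scoring formula, together with the EEOC’s “four-fifths” rule of thumb that enforcement agencies use as a first-pass indicator of disparate impact.  All field names, the fellowship weight, and the interview cutoff are hypothetical; only the four-fifths threshold itself comes from the EEOC’s Uniform Guidelines.

```python
def score(applicant):
    """Hypothetical point formula from the example above."""
    pts = 0
    if applicant.get("engineering_degree"):
        pts += 1
    if applicant.get("masters"):
        pts += 3
    if applicant.get("doctorate"):
        pts += 6
    pts += 2 * applicant.get("prestigious_fellowships", 0)  # assumed weight
    return pts

def selection_rate(group, cutoff=4):
    """Fraction of a group that clears the (hypothetical) interview cutoff."""
    selected = [a for a in group if score(a) >= cutoff]
    return len(selected) / len(group)

def four_fifths_flag(rate_a, rate_b):
    """EEOC rule of thumb: a selection rate below 80% of the other
    group's rate is a first-pass indicator of possible disparate impact."""
    lo, hi = sorted([rate_a, rate_b])
    return lo / hi < 0.8
```

The point is that nothing in `score` mentions race or gender, yet if the inputs correlate with social advantage, the group-level `selection_rate` figures can still diverge enough to trip the flag.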

The easiest way to get around this problem might be to use a quota–i.e., set aside a certain proportion of the positions for applicants from underserved minority groups and then apply the ‘objective’ test to rate applicants within each group.  But such overt quotas are also illegal (according to the Supreme Court) because they constitute disparate treatment.  What about awarding “bonus points” under the objective test to people from disadvantaged groups?  That, too, would be disparate treatment.  To be sure, nothing prevents an employer from using race as, to borrow a phrase from Equal Protection law, a subjective “plus factor” to help ensure diversity.  But you can’t assign a specific number to the race or gender of applicants.  The bottom line is that the law insists that assessments remain subjective when they involve sensitive personal characteristics such as race and gender.

Which brings us back to AI.  You can have an algorithm that approximates or simulates a subjective assessment, but you still have to find a way to program that assessment into the AI–which means reducing the subjective assessment to an objective, concrete form.  It would be difficult, if not impossible, to program a truly subjective set of criteria into an AI system, because a subjective algorithm is almost a contradiction in terms.

Fortunately for Beauty.ai, it can probably solve its particular “disparate impact” problem without having the algorithm discriminate based on race.  The reason why Beauty.ai generated a disproportionate number of white winners is that the data sets (i.e. images of people) that were used to build the AI’s ‘objective’ algorithm for assessing beauty consisted primarily of images of white people.

As a result, the algorithm’s accuracy drops when it encounters images of people who don’t fit the patterns in the data set used to train it.  To fix that, the humans building the system just need to include a more diverse data set–and since humans are doing that bit, the process of choosing who is included in the original data set can be subjective, even if the algorithm that uses the data set cannot be.
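Since the skew came from the training data rather than the algorithm, the human fix can start with something as simple as auditing the demographic mix of the data set before training.  A minimal sketch, assuming each training example carries a (hypothetical) `group` label:

```python
from collections import Counter

def representation(dataset, key="group"):
    """Return each group's share of the data set, so under-represented
    groups can be spotted before the model is trained on it."""
    counts = Counter(item[key] for item in dataset)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}
```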

For various reasons, however, it would be difficult to replicate that process in the contexts of employment, college admissions, and other socially and economically vital spheres.  I’ll be exploring this topic in greater detail in a forthcoming law practice article that should be appearing this winter.  Stay tuned!

Should AI systems in mental health settings have a duty to warn?


A brief item that I could not resist commenting on.  The Atlantic posted a fascinating story last week on a machine learning program that could help make more accurate psychiatric diagnoses.  The system described is a “schizophrenia screener” that analyzes primary care patients’ speech patterns for some of the tell-tale verbal ‘tics’ that can be a predictor of psychosis.  For now, as the author points out, there are many obstacles to widespread deployment of such a system, because there are so many cultural, ethnic, and other differences in speech and behavior that could throw the system off.  But still, the prospect of an AI system playing a role in determining whether a person has a mental disorder raises some intriguing questions.

The lawyer in me immediately thought “could the Tarasoff rule apply to AI systems?”  For those of you who are normal, well-adjusted human beings (i.e., not lawyers), Tarasoff was a case where the California Supreme Court held that a psychiatrist could be held liable if the psychiatrist knows that a patient under his or her care poses a physical danger to someone and fails to take protective measures (e.g., by calling the police or warning the potential victim(s)).

Now granted, predicting violence is probably a much more difficult task than determining whether someone has a specific mental disorder.  But it’s certainly not out of the realm of possibility that a psychiatric AI system could be designed to analyze a patient’s history, the tone and content of a patient’s speech, etc., and come up with a probability that the patient will commit a violent act in the near future.

Let’s say that such a violence-predicting AI system is designed for use in medical and psychiatric settings.  The system is programmed to report to a psychiatrist when it determines that the probability of violence is above a certain threshold–say 40%.  The designers set up the system so that once it makes its report, its job is done; it’s ultimately up to the psychiatrist to determine whether a real threat of violence exists and, if so, what protective measures to take.
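That division of labor can be sketched in a few lines.  The 40% threshold and all names here are hypothetical, and a real system’s probability estimate would come from a trained model, which is simply taken as an input here:

```python
REPORT_THRESHOLD = 0.40  # hypothetical cutoff chosen by the designers

def assess(prob_of_violence, threshold=REPORT_THRESHOLD):
    """Flag the case for a human psychiatrist when the estimated
    probability of violence crosses the threshold.  The system's job
    ends with the report; the psychiatrist decides whether a real
    threat exists and what, if anything, to do about it."""
    return {"flagged": prob_of_violence >= threshold,
            "probability": prob_of_violence}
```

Note that nothing in this design distinguishes a 41% case from a 95% case: both produce the same report, which is exactly the gap the next paragraph worries about.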

But let’s say that the AI system determines that there is a 95% probability of violence, and that studies have shown that the system does better than even experienced human psychiatrists in predicting violence. Should the system still be designed so it can do nothing except report the probability of violence to a psychiatrist, despite the risk that the psychiatrist may not take appropriate action?  Or should AI systems have a freestanding Tarasoff-like duty to warn police?

Given that psychiatry is one of the more subjective fields of medicine, it will be interesting to see how the integration of AI in the mental health sector plays out.  If AI systems prove to be, on average, better than humans at making psychiatric diagnoses and assessing risks of violence, would we still want a human psychiatrist to have the final say–even though it might mean worse decisions on balance?  I have a feeling we’ll have to confront that question some day.

Digital Analogues (part 5): Lessons from Animal Law, Continued


The last post in this series on “Digital Analogues”–which explores the various areas of law that courts could use as a model for liability when AI systems cause harm–examined animal liability law.  Under traditional animal liability law, the owner of a “wild” animal is strictly liable for any injury or damage caused by that animal.  For domesticated animals, however, an owner is only liable if that particular animal had shown dangerous tendencies and the owner failed to take adequate precautions.

So what lessons might animal liability law offer for AI? Well, if we believe that AI systems are inherently risky (or if we just want to be extra cautious), we could treat all AI systems like “wild” animals and hold their owners strictly liable for harms that they cause. That would certainly encourage safety precautions, but it might also stifle innovation.  Such a blanket rule would seem particularly unfair for AI systems whose functions are so narrow that they do not present much risk to anyone. It would seem somewhat silly to impose a blanket rule that treats AlphaGo as if it is just as dangerous as an autonomous weapon system.

» Read more

Digital Analogues (Part 4): Is AI a Different Kind of Animal?

Source: David Shankbone


The last two entries in this series focused on the possibility of treating AI systems like “persons” in their own right.  As with corporations, these posts suggested, legal systems could develop a doctrine of artificial “personhood” for AI, through which AI systems would be given some of the legal rights and responsibilities that human beings have.  Of course, treating AI systems like people in the eyes of the law will be a bridge too far for many people, both inside the legal world and among the public at large.  (If you doubt that, consider that corporate personhood is a concept that goes back to the Roman Empire’s legal system, and it is still highly controversial.)

In the short-to-medium term, it is far more likely that instead of focusing on what rights and responsibilities an AI system should have, legal systems will instead focus on the responsibilities of the humans who have possession or control of such systems. From that perspective, the legal treatment of animals provides an interesting model.

» Read more

IBM’s Response to the Federal Government’s Request for Information on AI



As discussed in a prior post, the White House Office of Science and Technology Policy (OSTP) published a request for information (RFI) on AI back in June.  IBM released a response that was the subject of a very positive write-up on TechCrunch.  As the TechCrunch piece correctly notes, most of IBM’s responses were very informative and interesting.  They nicely summarize many of the key topics and concerns that are brought up regularly in the conferences I’ve attended.

But IBM’s coverage of the legal and governance implications of AI was disappointing.  Perhaps the company was just being cautious because it doesn’t want to say anything that could invite closer government regulation or draw the attention of plaintiffs’ lawyers, but its write-up on the subject was vague and somewhat off-topic.

» Read more

Could we be entering an AI-powered arms race in cyberwarfare?

Soon to be obsolete?


Much has been made about the possibility of AI-powered autonomous weapons becoming a factor in conventional warfare in the coming years.  But in the sphere of cyber-warfare, AI is already starting to play a major role, as laid out in an article in this week’s Christian Science Monitor.

Many nations–most notably Russia and China–already employ armies of hackers to conduct operations in the cybersphere against other countries.  The US Department of Defense’s response might be a harbinger of things to come:

[T]he allure of machines quickly fixing vulnerabilities has led the Defense Advanced Research Projects Agency (DARPA), the Defense Department’s technology lab, to organize the first-ever hacking competition that pits automated supercomputers against each other at next month’s Black Hat cybersecurity conference in Las Vegas.

With the contest, DARPA is aiming to find new ways to quickly identify and eliminate software flaws that can be exploited by hackers, says DARPA program manager Mike Walker.

“We want to build autonomous systems that can arrive at their own insights, do their own analysis, make their own risk equity decisions of when to patch and how to manage that process,” said Walker.

One of the big concerns about deploying autonomous weapon systems (AWSs) in the physical world is that it will lead to an arms race.  Starting in the Cold War, the development of more advanced missile defense systems spurred the development of more advanced missiles, which in turn led to the development of even more advanced missile defense systems, and so on.  It is easy to see how the same dynamic would play out with AWSs: because AWSs would be able to react on far shorter timescales than human soldiers, the technology may quickly reach a point where the only effective way to counter an enemy’s offensive AWS would be to deploy a defensive AWS, kickstarting a cycle of ever-more-advanced AWS development.

The fear is that AWSs might make human military decisionmaking obsolete, with human commanders unable to intervene quickly enough to meaningfully affect combat operations between AWSs.

The cyberwarfare arena might be a testing ground for that “AI arms race” theory.  If state-backed hackers respond to AI-powered cybersecurity systems by developing new AI-powered hacking technologies, what happens next might prove an ominous preview of what could happen someday in the world of physical warfare.

Digital Analogues (Part 3): If AI systems can be “persons,” what rights should they have?


The last segment in this series noted that corporations came into existence and were granted certain rights because society believed it would be economically and socially beneficial to do so.  There has, of course, been much push-back on that front.  Many people both inside and outside the legal world ask whether we have given corporations too many rights and whether we treat them a little too much like people.  So what rights and responsibilities should we grant to AI systems if we decide to treat them as legal “persons” in some sense?

Uniquely in this series, this post will provide more questions than answers.  This is in part because the concept of “corporate personhood” has proven to be so malleable over the years.  Even though corporations are the oldest example of artificial “persons” in the legal world, we still have not decided with any firmness what rights and responsibilities a corporation should have.  Really, I can think of only one ground rule: “personhood” in a legal sense requires, at a minimum, the right to sue and the ability to be sued.  Beyond that, the meaning of “personhood” has proven to be pretty flexible.  That means that, for the most part, we should be able to decide the rights and responsibilities included within the concept of AI personhood on a right-by-right and responsibility-by-responsibility basis.

» Read more

On Robot-Delivered Bombs

A Northrop Grumman Remotec Andros, a bomb-disposal robot similar to the one reportedly used by police to end the Dallas standoff.


“In An Apparent First, Police Used A Robot To Kill.”  So proclaimed a Friday headline on NPR’s website, referring to the method Dallas police used to end the standoff with Micah Xavier Johnson, the Army veteran who shot 12 police officers and killed five of them on Thursday night.  Johnson had holed himself up in a garage after his attack and told police negotiators that he would kill more officers in the final standoff.  As Dallas Police Chief David Brown said at a news conference on Friday morning, “[w]e saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the subject was.  Other options would have exposed our officers to grave danger.”

The media’s coverage of this incident generally has glossed over the nature of the “robot” that delivered the lethal bomb.  The robot was not an autonomous weapon system that operated free of human control, which is what most people picture when they hear the term “killer robot.”  Rather, it was a remote-controlled bomb disposal robot (that was sent, ironically, to deliver and detonate a bomb rather than to remove or defuse one).  Such a robot operates in more or less the same manner as the unmanned aerial vehicles or “drones” that have seen increasing military and civilian use in recent years.  As with drones, there is a human somewhere who controls every significant aspect of the robot’s movements.

Legally, I don’t think the use of such a remote-controlled device to deliver lethal force presents any special challenges.  Because a human is continuously in control of the robot–albeit from a remote location–the lines of legal liability are no different than if the robot’s human operator had walked over and placed the bomb himself.  I don’t think that entering the command that detonates a robot-delivered bomb is any different from a legal standpoint than a sniper pulling the trigger on his rifle.  The accountability problems that arise with autonomous weapons simply are not present when lethal force is delivered by a remote-controlled device.

» Read more
