Law and AI Quick Hits: September 26-30, 2016

Credit: Charles Schulz

A short round-up of recent news of interest to Law and AI.

In the Financial Times, John Thornhill writes on “the darker side of AI if left unmanaged: the impact on jobs, inequality, ethics, privacy and democratic expression.”  Thornhill takes several proverbial pages from the Stanford 100-year study on AI, but does not ultimately offer his view of what effective AI “management” might look like.


Patrick Tucker writes in Defense One that a survey funded by the Future of Life Institute found “that the U.S. military more commonly uses AI not to help but to replace human operators, and, increasingly, human decision making.”  In the process, he gives voice to the fears held by many people (well, at least by me) of how an autonomous weapons arms race might play out:

Today, the United States continues to affirm that it isn’t interested in removing the human decision-maker from “the loop” in offensive operations like drone strikes (at least not completely). That moral stand might begin to look like a strategic disadvantage against an adversary that can fire much faster, conduct more operations, and hit more targets in a smaller amount of time by removing the human from the loop.


Microsoft CEO Satya Nadella sat down for an interview with Dave Gershgorn of Quartz.  Among other things, Nadella discusses the lessons Microsoft learned from Tay the Racist Chatbot–namely the need to build “resiliency” into learning AI systems to protect them from threats that might cause them to “learn” bad things.  In the case of Tay, Microsoft failed to make the chatbot resilient to trolls, with results that were at once amusing and troubling.

The Partnership on AI: A step in the right direction



Well, by far the biggest AI news story to hit the papers this week was the announcement that a collection of tech industry heavyweights–Microsoft, IBM, Amazon, Facebook, and Google–are joining forces to form a “Partnership on AI”:

The group’s goal is to create the first industry-led consortium that would also include academic and nonprofit researchers, leading the effort to essentially ensure AI’s trustworthiness: driving research toward technologies that are ethical, secure and reliable — that help rather than hurt — while also helping to diffuse fears and misperceptions about it.

 

“We plan to discuss, we plan to publish, we plan to also potentially sponsor some research projects that dive into specific issues,” Banavar says, “but foremost, this is a platform for open discussion across industry.”

There’s no question this is welcome news.  Each of the five companies that formed this group has been part of the “AI arms race” that has played out over the past few years, as major tech companies have invested massive amounts of money in expanding their AI research, both by acquiring other companies and by recruiting talent.  To a mostly-outside observer such as myself, it seemed for a time like the arms race was becoming an end unto itself: companies were making huge investments in AI without thinking about the long-term implications of AI development.  The Partnership is a good sign that the titans of tech are, indeed, seeing the bigger picture.


A peek at how AI could inadvertently reinforce discriminatory policies

Source: Before It’s News

The most interesting story to surface during Law and AI’s little hiatus came from decidedly outside the usual topics covered here: the world of beauty pageants.  Well, sort of:

An online beauty contest called Beauty.ai, run by Youth Laboratories . . . , solicited 600,000 entries by saying they would be graded by artificial intelligence. The algorithm would look at wrinkles, face symmetry, amount of pimples and blemishes, race, and perceived age.

Sounds harmless enough, aside from the whole “we’re teaching computers to objectify women” aspect.  But the results of this contest carry some troubling implications.

Of the 44 winners in the pageant, 36 (or 82%) were white.  In other words, white people were disproportionately represented among the pageant’s “winners.”  This couldn’t help but remind me of discrimination law.  The algorithm’s beauty assessments had what lawyers would recognize as a disparate impact–that is, despite the fact that the algorithm seemed objective and non-discriminatory at first glance, it ultimately favored whites at the expense of other racial groups.
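
To make the disparate-impact math concrete, here is a minimal Python sketch of the kind of back-of-the-envelope check a lawyer (or data scientist) might run.  The winner counts come from the contest; the applicant-pool split is an assumption added purely for illustration (Beauty.ai did not publish one), and the 80% benchmark is borrowed from the EEOC’s familiar “four-fifths” rule of thumb in employment law.

```python
# Minimal sketch (not the pageant's actual methodology): compare selection
# rates by group to check for disparate impact. The 36 white winners out of
# 44 are from the contest; the applicant-pool breakdown is assumed.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who ended up among the winners."""
    return selected / applicants

# Assumed split of the 600,000 entries, for illustration only.
white_rate = selection_rate(selected=36, applicants=150_000)
other_rate = selection_rate(selected=44 - 36, applicants=450_000)

# EEOC "four-fifths" rule of thumb: a group's selection rate below 80% of
# the most-favored group's rate is treated as evidence of adverse impact.
impact_ratio = other_rate / white_rate
print(f"white selection rate: {white_rate:.4%}")
print(f"other selection rate: {other_rate:.4%}")
print(f"impact ratio: {impact_ratio:.2f} (below 0.80 suggests disparate impact)")
```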

The concept of disparate impact is best known in employment law and in college admissions, where a company or college can be liable for discrimination if its policies have a disproportionate negative impact on protected groups, even if the people who came up with the policy had no discriminatory intent. For example, a hypothetical engineering company might select which applicants to interview for a set of open positions using a formula that awards 1 point for a college degree in engineering, 3 points for a Master’s degree, 6 points for a doctorate, and additional points for certain prestigious fellowships.  Facially, this system appears neutral in terms of race, gender, and socioeconomic status.  But in its outcomes, it may (and probably would) end up having a disparate impact if the components of the score are credentials that wealthy white men are disproportionately more likely to have due to their social and economic advantages.
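
A quick sketch of that hypothetical formula shows just how “neutral” it looks on its face.  The degree point values are the ones from the example above; the fellowship bonus and the sample applicants are invented for illustration.

```python
# Hypothetical "facially neutral" screening formula from the example above.
# The degree point values track the text; the fellowship bonus and sample
# applicants are made up for illustration.

DEGREE_POINTS = {"bachelors_engineering": 1, "masters": 3, "doctorate": 6}
PRESTIGIOUS_FELLOWSHIPS = {"Hypothetical Fellowship A", "Hypothetical Fellowship B"}
FELLOWSHIP_BONUS = 2  # assumed value; the text says only "additional points"

def screening_score(applicant: dict) -> int:
    """Score an applicant on 'objective' credentials only.

    Race, gender, and socioeconomic status never appear in this function,
    yet the score can still have a disparate impact if the credentials it
    rewards are unevenly distributed across groups.
    """
    score = DEGREE_POINTS.get(applicant.get("highest_degree"), 0)
    held = PRESTIGIOUS_FELLOWSHIPS & set(applicant.get("fellowships", []))
    return score + FELLOWSHIP_BONUS * len(held)

applicants = [
    {"name": "A", "highest_degree": "doctorate",
     "fellowships": ["Hypothetical Fellowship A"]},
    {"name": "B", "highest_degree": "bachelors_engineering", "fellowships": []},
]
for a in sorted(applicants, key=screening_score, reverse=True):
    print(a["name"], screening_score(a))
```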

The easiest way to get around this problem might be to use a quota–i.e., set aside a certain proportion of the positions for applicants from underserved minority groups and then apply the ‘objective’ test to rate applicants within each group.  But such overt quotas are also illegal (according to the Supreme Court) because they constitute disparate treatment.  What about awarding “bonus points” under the objective test to people from disadvantaged groups?  Well, that would also be disparate treatment.  Certainly, nothing prevents an employer from using race as, to borrow a phrase from Equal Protection law, a subjective “plus factor” to help ensure diversity.  But you can’t assign a specific number related to the race or gender of applicants.  The bottom line is that the law likes to keep assessments very subjective when they involve sensitive personal characteristics such as race and gender.

Which brings us back to AI.  You can have an algorithm that approximates or simulates a subjective assessment, but you still have to find a way to program that assessment into the AI, which means reducing it to an objective and concrete form. It would be difficult, if not impossible, to program a truly subjective set of criteria into an AI system, because a subjective algorithm is almost a contradiction in terms.

Fortunately for Beauty.ai, it can probably solve its particular “disparate impact” problem without having the algorithm discriminate based on race.  The reason Beauty.ai generated a disproportionate number of white winners is that the data sets (i.e., images of people) used to build the AI’s ‘objective’ algorithm for assessing beauty consisted primarily of images of white people.

As a result, the algorithm’s accuracy dropped when it ran into images of people who didn’t fit the patterns in the data set that was used to prime the algorithm.  To fix that, the humans just need to assemble a more diverse data set.  And since humans are doing that bit, the process of choosing who is included in the original data set can be subjective, even if the algorithm that uses the data set cannot be.
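
Here is a very rough sketch of what that human curation step might look like in code, assuming (purely for illustration) that each image carries a group label and that the curator simply rebalances by downsampling the over-represented group.  This is not how Beauty.ai actually assembled its data; it is just the shape of the fix.

```python
# Illustrative-only curation step: audit the demographic makeup of a training
# set and rebalance it before the learning algorithm ever sees it. The group
# labels and counts are invented; a real pipeline would use actual metadata.
import random
from collections import Counter

training_images = (
    [{"id": i, "group": "white"} for i in range(9_000)] +
    [{"id": i, "group": "other"} for i in range(9_000, 10_000)]
)

counts = Counter(img["group"] for img in training_images)
print("before:", counts)

# Downsample each group to the size of the smallest one, so no single group
# dominates the patterns the algorithm learns.
target = min(counts.values())
balanced = []
for group in counts:
    members = [img for img in training_images if img["group"] == group]
    balanced.extend(random.sample(members, target))

print("after:", Counter(img["group"] for img in balanced))
```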

For various reasons, however, it would be difficult to replicate that process in the contexts of employment, college admissions, and other socially and economically vital spheres.  I’ll be exploring this topic in greater detail in a forthcoming law practice article that should be appearing this winter.  Stay tuned!