
(Image source: Before It’s News)
The most interesting story that came up during Law and AI’s little hiatus came from decidedly outside the usual topics covered here–the world of beauty pageants. Well, sort of:
An online beauty contest called Beauty.ai, run by Youth Laboratories . . . ., solicited 600,000 entries by saying they would be graded by artificial intelligence. The algorithm would look at wrinkles, face symmetry, amount of pimples and blemishes, race, and perceived age.
Sounds harmless enough, aside from the whole “we’re teaching computers to objectify women” aspect. But the results of this contest carry some troubling implications.
Of the 44 winners in the pageant, 36 (or 82%) were white. In other words, white people were disproportionately represented among the pageant’s “winners.” This couldn’t help but remind me of discrimination law. The algorithm’s beauty assessments had what lawyers would recognize as a disparate impact–that is, although the algorithm seemed objective and non-discriminatory at first glance, it ultimately favored whites at the expense of other racial groups.
The concept of disparate impact is best known in employment law and in college admissions, where a company or college can be liable for discrimination if its policies have a disproportionate negative impact on protected groups, even if the people who came up with the policy had no discriminatory intent. For example, a hypothetical engineering company might select which applicants to interview for a set of open job positions by coming up with a formula that awards 1 point to an applicant with a college degree in engineering, 3 points for a Master’s degree, 6 points for a doctorate, and additional points for certain prestigious fellowships. Facially, this system appears neutral in terms of race, gender, and socioeconomic status. But in its outcomes, it may (and probably would) end up having a disparate impact if the components of the test score are things that wealthy white men are disproportionately more likely to have due to their social and economic advantages.
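To make the mechanism concrete, here is a minimal sketch in Python of how such a formula could play out. The degree weights mirror the hypothetical above, but the applicant pool, the group labels, and the per-fellowship weight are my own illustrative assumptions, not data from any real employer.

```python
# A minimal sketch of how a facially neutral scoring formula can produce a
# disparate impact. All applicant data below is hypothetical and chosen only
# to illustrate the mechanism.

def score(applicant):
    """Award points based solely on credentials; no protected attribute is used."""
    points = {"bachelor": 1, "master": 3, "doctorate": 6}[applicant["degree"]]
    points += 2 * applicant["prestigious_fellowships"]  # assumed weight per fellowship
    return points

# Hypothetical applicant pool: credentials correlate with group membership
# because of unequal access to education and fellowships, not merit.
applicants = [
    {"group": "A", "degree": "doctorate", "prestigious_fellowships": 1},
    {"group": "A", "degree": "master",    "prestigious_fellowships": 1},
    {"group": "A", "degree": "master",    "prestigious_fellowships": 0},
    {"group": "B", "degree": "bachelor",  "prestigious_fellowships": 0},
    {"group": "B", "degree": "master",    "prestigious_fellowships": 0},
    {"group": "B", "degree": "bachelor",  "prestigious_fellowships": 0},
]

# Interview only the top three scorers.
selected = sorted(applicants, key=score, reverse=True)[:3]

# Compare selection rates by group: score() never looks at "group",
# yet the outcome skews entirely toward group A.
for g in ("A", "B"):
    pool = [a for a in applicants if a["group"] == g]
    chosen = [a for a in selected if a["group"] == g]
    print(f"Group {g}: selected {len(chosen)} of {len(pool)}")
```

Run as written, all three interview slots go to group A, even though nothing in the formula references group membership; the skew comes entirely from who happens to hold the credentials the formula rewards.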
The easiest way to get around this problem might be to use a quota–i.e., set aside a certain proportion of the positions for applicants from underserved minority groups and then apply the ‘objective’ test to rate applicants within each group. But such overt quotas are also illegal (according to the Supreme Court) because they constitute disparate treatment. What about awarding “bonus points” under the objective test to people from disadvantaged groups? Well, that would also be disparate treatment. Certainly, nothing prevents an employer from using race as, to borrow a phrase from Equal Protection law, a subjective “plus factor” to help ensure diversity. But you can’t assign a specific number related to the race or gender of applicants. The bottom line is that the law likes to keep assessments very subjective when they involve sensitive personal characteristics such as race and gender.
Which brings us back to AI. You can have an algorithm that approximates or simulates a subjective assessment, but you still have to find a way to program that assessment into the AI–which means reducing the subjective assessment to an objective and concrete form. It would be difficult, if not impossible, to program a truly subjective set of criteria into an AI system because a subjective algorithm is almost a contradiction in terms.
Fortunately for Beauty.ai, it can probably solve its particular “disparate impact” problem without having the algorithm discriminate based on race. The reason why Beauty.ai generated a disproportionate number of white winners is that the data sets (i.e. images of people) that were used to build the AI’s ‘objective’ algorithm for assessing beauty consisted primarily of images of white people.
As a result, the algorithm’s accuracy drops when it encounters images of people who don’t fit the patterns in the data set that was used to prime it. To fix that, the humans just need to supply a more diverse data set–and since humans are doing that bit, the process of choosing who is included in the original data set can be subjective, even if the algorithm that uses the data set cannot be.
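As a sketch of what supplying a more diverse data set might look like mechanically, the snippet below counts how each group is represented among the training images and rebalances the sample before the algorithm is primed on it. The field names and the skewed example data are assumptions of mine, and of course the real remedy is collecting genuinely diverse images, not merely resampling the ones already on hand.

```python
# A minimal sketch, under assumed data structures, of rebalancing a skewed
# training set so each group is equally represented before training.

import random
from collections import Counter, defaultdict

def rebalance(examples, per_group, seed=0):
    """Sample the same number of examples from each group (with replacement if a group is short)."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for ex in examples:
        by_group[ex["group"]].append(ex)

    balanced = []
    for group, items in by_group.items():
        if len(items) >= per_group:
            balanced.extend(rng.sample(items, per_group))
        else:
            # Not enough examples: draw with replacement, a crude stand-in
            # for actually gathering more images of under-represented groups.
            balanced.extend(rng.choices(items, k=per_group))
    return balanced

# Hypothetical, heavily skewed training set like the one described above.
training_set = (
    [{"group": "white", "image": f"img_{i}.jpg"} for i in range(900)]
    + [{"group": "other", "image": f"img_{i}.jpg"} for i in range(900, 1000)]
)

print("Before:", Counter(ex["group"] for ex in training_set))
print("After: ", Counter(ex["group"] for ex in rebalance(training_set, per_group=500)))
```

The subjective judgment the post describes lives in the step before this code runs: deciding which people, and how many from each group, belong in the training images in the first place.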
For various reasons, however, it would be difficult to replicate that process in the contexts of employment, college admissions, and other socially and economically vital spheres. I’ll be exploring this topic in greater detail in a forthcoming law practice article that should be appearing this winter. Stay tuned!