
An interesting pair of stories popped up over the past month covering how the use of AI could affect bias in our society. This is a fascinating topic from a “law and AI” standpoint due to the sheer number of laws in place worldwide that prohibit certain forms of bias and discrimination in a variety of settings, ranging from employment to hotel accommodations to the awarding of government contracts.
At first blush, one might think that having an automated system make decisions would reduce the risk of bias, or at least those forms of bias that the law prohibits. After all, such a system would not be susceptible to many of the most obvious types of biases and prejudices that afflict human decision-makers. A machine would not have a financial interest in the outcome of any decision (at least not yet), nor would it be susceptible to the dark impulses of racism and sexism. A machine likewise would presumably be less susceptible to, if not immune from, the more subtle and sometimes even unconscious manifestations of bias that emotion-driven humans exhibit.
Those advantages led Sharon Florentine to pen an article published last month in CIO with a bold headline: “How artificial intelligence can eliminate bias in hiring.” That title was probably clickbait to a certain extent because the article itself was fairly measured in its assessment of the potential impact of AI on workplace discrimination. The thesis of the article is that AI systems could be used indirectly to reduce bias by using machine learning to “be an objective observer to screen for bias patterns.” In other words, AI systems could act as something of a bias detector, raising alerts when a person or company’s decision-making patterns display signs of bias or prejudice.
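Florentine’s article doesn’t describe any particular implementation, but the “bias detector” idea is easy to sketch in its most basic form. The snippet below is my own hypothetical illustration, not anything from the article: a simple statistical screen that flags groups whose selection rate falls well below that of the best-performing group, loosely modeled on the “four-fifths rule” used as a rough first-pass test in US employment-discrimination analysis. A real system would presumably layer machine learning over far richer data than this.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute each group's hire rate from (group, hired) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in decisions:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (a rough version of the 'four-fifths rule')."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: rate for group, rate in rates.items()
            if rate < threshold * best}

# Hypothetical screening data: (applicant group, was hired)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True), ("B", False)]
print(flag_disparate_impact(decisions))  # {'B': 0.25}
```

Even this crude version hints at why the idea is appealing to lawyers: it surfaces a pattern for a human to investigate rather than making the hiring decision itself.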
Kristian Hammond over at TechCrunch, on the other hand, wrote an article explaining how AI systems can actually generate or reinforce bias. He goes over five potential sources of bias in AI systems:
- “Data-driven bias.” This occurs when an AI system that learns from a “training set” of data is fed a skewed or unrepresentative set. Think of the Beauty.ai “pageant.”
- “Bias from interaction.” This occurs when a machine that learns from its interactions with users ends up incorporating those users’ biases. Tay the Racist Chatbot is an obvious example of this.
- “Emergent bias.” Think of this as self-reinforcing bias. It’s what happens when Facebook’s news feed algorithms recognize that a particular user likes reading articles from a particular political viewpoint and, because they are programmed to predict what that user might want to read next, end up giving the user more and more stories from that viewpoint. It seems to me that this is pretty much an extension of the first two types of bias. (A toy simulation of this feedback loop appears after the list.)
- “Similarity bias.” Hammond’s description makes this sound very similar to emergent bias, using the example of Google News, which will often turn up similar stories in response to a user search query. This can lead to many stories being presented that are written from the same point of view while excluding stories written from a contrary point of view.
- “Conflicting goals bias.” I honestly have no idea what this one is about. The example Hammond provides does not give me a clear sense of what this type of bias is supposed to be.
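Hammond’s piece stays at the conceptual level, but the feedback loop behind “emergent bias” is easy to see in a toy simulation. The sketch below is my own illustration rather than any real news-feed algorithm: a hypothetical feed that boosts whatever the user clicks turns a mild 60/40 preference into a heavily one-sided stream of stories.

```python
import random

# Toy simulation of a self-reinforcing ("emergent") feedback loop.
# This is an illustrative sketch, not Facebook's actual algorithm.
random.seed(0)

feed_weights = {"viewpoint_A": 1.0, "viewpoint_B": 1.0}       # how often each is shown
click_probability = {"viewpoint_A": 0.6, "viewpoint_B": 0.4}  # the user's mild lean

for _ in range(400):
    # The feed picks a story in proportion to its current weights.
    shown = random.choices(list(feed_weights),
                           weights=list(feed_weights.values()))[0]
    # The user clicks with probability equal to their preference for it...
    if random.random() < click_probability[shown]:
        # ...and the feed responds by showing that viewpoint more often.
        feed_weights[shown] *= 1.05

total = sum(feed_weights.values())
print({k: round(v / total, 3) for k, v in feed_weights.items()})
# After 400 rounds the feed is heavily skewed toward one viewpoint --
# usually the one the user already mildly preferred.
```

The point of the sketch is simply that nothing in the loop is “prejudiced” in the human sense; the one-sidedness emerges from an innocuous-looking engagement objective.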
Hammond ended on a positive note, observing that awareness of these potential sources of bias will allow us to design around them: “Perhaps we will never be able to create systems and tools that are perfectly objective, but at least they will be less biased than we are.”
I have a feeling Hammond’s piece was meant to be much longer but ultimately was cut down for readability. I’d be interested to see a longer exploration of this subject because of the obvious legal implications of AI-generated bias…especially given that I will be writing a paper on the subject for this year’s WeRobot conference.