The Intelligence is Artificial. The Bias Isn’t.

In 2002, the Wilmington, Delaware police department made national news when it adopted a new technique: "jump out squads." Officers would drive around the city in vans, jump out in high-crime areas, and take pictures of young people. The point of these impromptu photo sessions was to build a database of future criminals.

If this plan sounds offensive, imagine if it were aided by facial recognition technology or other forms of artificial intelligence. 

Now, seventeen years after the Wilmington Police used vans and Polaroids, police have artificial intelligence at their disposal. Police departments use AI in a variety of ways and for a variety of purposes. Crime forecasting, also known as predictive policing, has been used by police in New York, Los Angeles, and Chicago. Video and image analysis are used by many departments. While AI might make law enforcement easier, the legal profession needs to keep a careful eye on these tools to make sure they don't compound the disparities that already exist in criminal justice and other areas of the legal system.

AI and Bias: Or, How AI Misses the Picture

Facial recognition and other types of AI may seem innocuous. After all, every human has the same basic body and facial structure. But when AI technologies are used to classify people of different races, trouble often follows.

Examples of racial bias in AI abound. Word embedding technologies have associated European-American names with more pleasant words than African-American names. Camera software has assumed that Asian subjects are blinking. Image-labeling algorithms have labeled Indian brides as performers and classified African Americans as gorillas. Facial recognition platforms have failed to recognize people with darker skin tones. Clearly, AI struggles to recognize and respect people of different races and cultures.
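For readers curious what the word-embedding problem looks like up close, here is a minimal sketch of an embedding association test in Python, in the spirit of the published audits. The model name and the name and attribute lists are illustrative assumptions, not the setup of any particular study.

```python
# A minimal sketch of a word-embedding association test. The word lists below
# are illustrative; published audits (e.g., the WEAT studies) use larger,
# carefully constructed lists.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")   # pretrained GloVe word vectors

pleasant = ["joy", "love", "peace", "friend", "honest"]
names_a = ["emily", "matthew", "allison"]     # stereotypically European-American names
names_b = ["ebony", "jamal", "latoya"]        # stereotypically African-American names

def mean_pleasant_similarity(names):
    """Average cosine similarity between each name and the 'pleasant' words."""
    pairs = [(n, p) for n in names if n in model for p in pleasant]
    return sum(model.similarity(n, p) for n, p in pairs) / len(pairs)

print("European-American names:", mean_pleasant_similarity(names_a))
print("African-American names: ", mean_pleasant_similarity(names_b))
# Audits of off-the-shelf embeddings report a stronger "pleasant" association
# for the first group, which is the skew described above.
```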

Though race is a persistent problem, AI's bias shows up in other areas as well. Résumé-screening technologies have been shown to rank women's résumés lower than men's. Many complain that virtual assistants such as Alexa, Siri, and Cortana are programmed to behave in stereotypically feminine ways. Ad-targeting technologies have privileged straight users over gay ones. In China, AI has been used to identify and track Muslim minorities. Though AI has many helpful applications, its ability to discriminate on the basis of race, sex, color, religion, sexual orientation, and other factors cannot be ignored.

Why AI’s Biases Matter

The fact that AI marginalizes some groups is bad enough. But the real-world implications of biased AI go much further and cause much more harm. AI can deepen the already troubling racial imbalances in the justice system.

Already, there is evidence that AI furthers the historical discrimination against African Americans, Latinx people, and other marginalized groups in the criminal justice system. Facial recognition software designed to identify criminal suspects is so flawed that it has matched members of Congress with criminal mugshots. Because facial recognition is already poor at classifying darker-skinned faces, the risk for people of color is far higher.

Beyond facial recognition, AI also shapes decisions about supervision and release. Many courts use risk-assessment technology to evaluate a defendant's risk of flight, risk of reoffending, and other factors. ProPublica highlighted the problems with this technology by comparing two defendants, both charged with petty theft. The first, a Black woman with a minor juvenile record, was given a risk score of 8. Meanwhile, a white man who had committed two armed robberies, one attempted armed robbery, and one grand theft received a risk score of 3. ProPublica also found that the software falsely flagged Black defendants as future reoffenders at nearly twice the rate of white defendants. Because bias in AI can cost a person her freedom, bias in its development and design matters.
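The core of ProPublica's finding is a gap in false-positive rates: people who did not go on to reoffend were labeled high risk far more often if they were Black. The sketch below shows how that metric is computed; the records are invented toy data, not the actual COMPAS dataset.

```python
# A minimal sketch of the disparity ProPublica measured: the false-positive
# rate (people labeled "high risk" who did not in fact reoffend), by group.
# These records are invented toy data chosen to show a two-to-one gap.
records = [
    # (group, labeled_high_risk, actually_reoffended)
    ("Black", True, False),
    ("Black", False, False),
    ("Black", True, False),
    ("Black", False, False),
    ("Black", True, True),
    ("white", True, False),
    ("white", False, False),
    ("white", False, False),
    ("white", False, False),
    ("white", True, True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in a group who were labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("Black", "white"):
    print(group, false_positive_rate(group))
# Prints 0.5 for the first group and 0.25 for the second, the same kind of
# near two-to-one gap ProPublica reported (roughly 45% vs. 23%).
```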

Fixing AI’s Bias Problem

Though bias in AI causes a multitude of problems, there are, luckily, solutions. Those solutions must involve both the government and the private sector.

To overcome bias, tech companies must build workforces committed to diversity. The first step is diversifying the workforce itself. The tech sector's lack of diversity is well documented. I don't believe that many programmers, let alone the majority, intend to be biased. But we all harbor implicit biases. To paraphrase the old programming adage: bias in, bias out. So it's unsurprising that companies staffed primarily by white men would fail to recognize the ways their software causes problems for women, people of color, and other groups. Putting more diverse people in the room will lead not only to less bias, but also to better products and more innovation.

Government also has a role to play here. Many courts rely on AI to make risk assessments. Because AI can cost a person her freedom, courts must ensure that any software they use has been reviewed for potential biases; failing to do so undermines their mission of providing true justice for all. Police departments should undertake similar reviews. Judges and police officers should be regularly briefed on the latest developments in AI, including any potentially biased software or applications.

Congress and state legislatures have a role to play as well. Recently, Senator Cory Booker and Representative Yvette Clarke introduced legislation that would require the FTC to make companies evaluate their algorithms for possible biases. As drafted, however, the bill simply directs companies to remedy any problems they find. Despite its gentle approach, the bill is a good start. Perhaps future laws will allow Congress to penalize repeat offenders, deny research funding, or create a private cause of action. The legislation might also prompt action at the state and local levels.

AI is not perfect, but it is apparently here to stay. That's not necessarily a bad thing. New uses for AI seem to be discovered every day. As these life-changing advances arrive, the hope is that bias will not overshadow the good that AI can do.


Nareissa Smith is a graduate of Spelman College and Howard University School of Law.  After completing two judicial clerkships, Nareissa worked as a law professor for over ten years.  Her courses included Constitutional Law, Criminal Procedure, and Critical Race Theory.  Now, Nareissa works as a freelance journalist.  You can reach her at nareissa.smith@gmail.com or contact her via Twitter (@NareissasNotes).
