
Omaha World-Herald, August 29, 2021
Thomas Freeman, Dr. Aaron McKain, Libby Otto

We live today in a world powered by seemingly limitless technology. Decisions once made by humans — doctors, teachers, bankers, and employers — are now made by algorithms: sequences of automated instructions that analyze information and, allegedly, predict human behavior.

In 2021, the harms posed by algorithmic decision-making are no longer theoretical: Academic evidence has shown that Americans are being (wrongfully) denied access to education, health care, jobs, housing and even their freedom based on faulty computer predictions (using data that has been, secretly or overtly, collected on us as both individual citizens and as members of legally protected categories of identity).

Even scarier — as reported by the Omaha World-Herald this month — algorithms are becoming widespread in the criminal justice system: Michael Williams, a 65-year-old grandfather, was falsely accused of murder (and held in jail for almost a year) based on ShotSpotter technology known to misidentify noise as gunshots as much as 75% of the time.

Williams’ case is an alarming example of algorithmic error and manipulation. Being wrongfully accused, arrested, and jailed by biased or inaccurate machines — which once seemed like science fiction — has quickly become a terrifying Black Mirror reality for many U.S. citizens. And since none of us are immune from algorithmic injustice, it is time for us all to take a stand.

This is why, since 2019, the Institute for Digital Humanity — a student-run non-profit in Minneapolis — has emerged as a national leader in cross-cultural, bipartisan digital ethics education and reform, bringing together diverse coalitions of universities, advocates and civic leaders to decide when and how algorithms should (and should not) be used. The fact that the IDH has united Christians, the ACLU of Minnesota, the Anti-Defamation League, BIPoC artists, Republicans and Democrats on algorithmic justice proves this is the civil rights issue of our time.

And it is time for all of us to come off the sidelines and join the fight for post-digital justice, ethics, and humanity.

Let’s start with some quick background. Algorithms are increasingly used in legal proceedings: identifying suspects’ faces, predicting people’s “criminality,” surveilling political protesters, and even determining parole, probation and criminal sentencing. The problem is that algorithms are not infallible: Every algorithm has known error rates. (Despite fancy branding by Silicon Valley, algorithms are just statistical correlations.) And while imperfect correlations may be fine for Amazon or Netflix recommending books and movies, in the criminal justice system algorithmic sloppiness has catastrophic costs for the humans caught in these math equations.

If you think Michael Williams’ experience with ShotSpotter was an anomaly, let us remind you of Nijeer Parks (11 days in jail for a false facial recognition match) and Robert Williams (arrested in front of his wife and daughter via false facial recognition) as just two more public examples of the — literally — countless victims.

Each of these men was falsely accused of committing a crime, in whole or in part, because of algorithmic processes with known biases and error rates. That these men were also African American is not a coincidence. Algorithmic errors most often and most harshly impact people of color. (Facial recognition technology, for example, is known to have a higher error rate for darker faces.) And even beyond error rates, algorithmic bias is rampant. Algorithms that claim to “predict” crime are simply using historical data (about a suspect’s identity or location) to make guesses about the future. Which means the criminal justice system’s problematic historical treatment of citizens of color — ironically and tragically — becomes today’s “encoded evidence” of criminality (which is why legal scholar Margaret Hu has dubbed this high-tech system of racial discrimination “Algorithmic Jim Crow”).

Sadly, in all of the cases listed above, it seems that law enforcement was willing to trust the judgment of a computer program more than the word of a Black man. Even worse, lawyers defending suspects who are “algorithmically accused” often cannot challenge the robot’s assessment. This is known as the “black box” problem: Some decisions algorithms make are so complex that even their programmers can’t explain them. And the companies that design these “proprietary” law enforcement algorithms are often unwilling to reveal how they work.

Imagine being charged with a crime, deprived of your freedom, and then denied the ability to challenge the algorithm that accused you. This is a clear threat to the justice system’s principles of equality, due process, and presumption of innocence.

No one is calling for the elimination of algorithms. They can be useful tools in many ways. But with basic constitutional rights being programmed away and innocent citizens being imprisoned, there is a desperate need for guardrails around how and when this technology is used. If you don’t stand up now, you could be next.

Professor Thomas Freeman (Creighton University), Dr. Aaron McKain (North Central University), and Ph.D. student Elizabeth Otto (University of Illinois) mentor the students at the Institute for Digital Humanity: www.institutefordigitalhumanity.org.
