How Bias Can Destroy Lives

Jarod Davis
3 min read · Feb 25, 2021

“A model, after all, is nothing more than an abstract representation of some process, be it a baseball game, an oil company’s supply chain, a foreign government’s actions, or a movie theater’s attendance.” This is how it’s described in Cathy O’Neil’s book, Weapons of Math Destruction.

At the end of the day, models are exactly that: a way to track the variables in the world that we value and how they relate to other variables that we value. They can have beneficial effects as well as destructive ones. On the good side of this dichotomy, we have things like sports models, where athletes can gauge their own performance, analyze opposing teams, find weak points, and train to exploit them. A farmer could use a model to figure out which chemicals, applied at what time, fertilize the ground the most with the least environmental impact. That same farmer could use a model to maximize crop yield and feed the most people. Under circumstances like these, statistical models seem like one of the greatest things since sliced bread. Unfortunately, not all circumstances look like these.

In her book, O’Neil discusses multiple examples of models used in destructive ways, hence the term she coined: “weapons of math destruction.” After working in the financial industry in New York through the Great Recession, she saw firsthand just how badly models can go wrong, and how bias can turn a well-intended model into a disaster. The model that stuck out to me the most was the one behind the LSI-R survey, a questionnaire given to convicts to help judges weigh the likelihood of reoffending. It asks personal questions meant to establish what type of person someone is: the number of previous convictions, the role they played in them, and so on. But it also asks who the convict knows, how many of those people have records, when the convict first had a run-in with the police, and so on. Questions like these are troubling when you consider where most run-ins with the police happen. The distribution is not even: minorities in the inner city are far more likely to be questioned and stopped by police. When these answers are fed into the model, the model will of course say the convict should be imprisoned for longer, based on who they are and not on what they did.
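To make the problem concrete, here is a toy sketch of how a survey-style risk score can penalize circumstances rather than conduct. The feature names, weights, and scoring formula below are all invented for illustration; the actual LSI-R instrument is proprietary and far more complex than this:

```python
# Hypothetical risk score loosely inspired by surveys like the LSI-R.
# All features and weights here are made up for illustration only.

def risk_score(answers):
    """Return a toy 'reoffense risk' score from survey answers."""
    weights = {
        "prior_convictions": 2.0,         # about what the person did
        "friends_with_records": 1.5,      # about who the person knows
        "age_at_first_police_stop": -0.1, # earlier first stop -> higher score
    }
    return sum(w * answers[feature] for feature, w in weights.items())

# Two people with identical criminal conduct (one prior conviction each),
# but different neighborhoods and social circles:
suburban = {
    "prior_convictions": 1,
    "friends_with_records": 0,
    "age_at_first_police_stop": 30,
}
inner_city = {
    "prior_convictions": 1,
    "friends_with_records": 4,
    "age_at_first_police_stop": 14,
}

print(risk_score(suburban))    # 2.0 + 0.0 - 3.0 = -1.0
print(risk_score(inner_city))  # 2.0 + 6.0 - 1.4 =  6.6
```

Even though both people have the same record of actual offenses, the score driven by proxy features (social network, age of first police contact) rates the inner-city respondent as far riskier, which is exactly the kind of who-you-are-not-what-you-did bias O’Neil describes.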

Transparency is a factor as well. Models used by sports teams help because the players know what they are being evaluated on and why. That allows back-and-forth communication among those who make the model, those who use it, and those who are affected by it. Keeping a model entirely secret, and shutting the end users (the victims) out of interacting with it and providing feedback, makes it hard to change the model for the better. If the model never learns the right way to go about things because the people behind it don’t want to change it, how can anyone expect improved results? It’s the definition of insanity, and it’s quite absurd. In the financial industry, developers constantly update and adjust their models to maximize income and profits, so why does that stop when the quality of people’s lives is on the line?
