Guilty! AI Is Found to Perpetuate Biases in Jailing

MIT Technology Review looks at how technology is in the dark ages when it comes to criminal justice.

In an effort to take human bias out of law enforcement, the criminal justice system has come to rely increasingly on technology — specifically artificial intelligence. But the jury is still out on whether the algorithms solve the problem or compound it.

The subject was discussed at the Data for Black Lives conference and profiled Monday by MIT Technology Review, particularly because incarceration rates in the United States are higher than anywhere else in the world. By the end of 2016, nearly 2.2 million adults were behind bars in prisons or jails, with an additional 4.5 million in other correctional facilities.

So there is a push to use more technology, such as predictive algorithms that shape policing strategy and face recognition systems that help identify suspects. But the technology is far from perfect.

An even bigger concern involves risk assessment tools, which are “designed to do one thing: take in the details of a defendant’s profile and spit out a recidivism score—a single number estimating the likelihood that he or she will reoffend,” as MIT Technology Review explains.
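To make that mechanism concrete, here is a minimal, purely illustrative sketch of the pattern these tools follow: a model is fit to historical records, then a defendant’s profile goes in and a single score comes out. The input fields, data and model here are hypothetical; real tools are typically proprietary and draw on far more information.

```python
# Illustrative sketch only: a toy recidivism-score model.
# The fields and records below are hypothetical, not from any real tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [prior_arrests, age_at_first_arrest, employed]
X_history = np.array([
    [3, 17, 0],
    [0, 30, 1],
    [5, 16, 0],
    [1, 25, 1],
])
# 1 = was re-arrested within two years, 0 = was not
y_history = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_history, y_history)

# A new defendant's profile goes in; a single "recidivism score" comes out.
defendant = np.array([[2, 19, 0]])
score = model.predict_proba(defendant)[0, 1]
print(f"Estimated likelihood of reoffending: {score:.0%}")
```

The point of the sketch is not the model itself but the shape of the pipeline: whatever patterns sit in the historical records are what the score reflects.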

Those scores help determine how defendants are treated throughout the criminal justice system, and in theory they are more just than a judge’s arbitrary instincts. The issue, however, is that the tools are trained on historical data, and the correlations they draw often lean toward institutional biases along class and racial lines.

“Now populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores,” wrote MIT Technology Review’s Karen Hao. “As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.”
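That “vicious cycle” can be sketched in a few lines. The simulation below is entirely hypothetical: two groups offend at the same true rate, but the more heavily policed group generates more arrests, which inflates its naive “risk score,” which in turn draws even more enforcement in the next round.

```python
# Hypothetical illustration of the feedback loop: enforcement follows the
# scores, so the data used to compute the next round of scores grows more skewed.
import random

random.seed(0)

# Two groups with the same true rate of offending, but group A starts out
# more heavily policed, so more of its offences show up as arrests.
true_offence_rate = {"A": 0.10, "B": 0.10}
policing_intensity = {"A": 0.8, "B": 0.4}   # chance an offence leads to arrest

for round_num in range(1, 4):
    arrests = {
        g: sum(
            random.random() < true_offence_rate[g] * policing_intensity[g]
            for _ in range(1000)
        )
        for g in ("A", "B")
    }
    # A naive "risk score": arrest counts stand in for actual risk.
    scores = {g: arrests[g] / 1000 for g in ("A", "B")}
    print(f"Round {round_num}: arrests={arrests}, scores={scores}")

    # Enforcement shifts toward the higher-scoring group, skewing the next round.
    higher = max(scores, key=scores.get)
    policing_intensity[higher] = min(1.0, policing_intensity[higher] + 0.1)
```

Even though both groups offend identically, the measured gap between them widens with each pass, which is the dynamic Hao describes.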
