AI Predicts Human Rights Trial Outcomes With 79% Accuracy

November 24, 2016 5:00 am
Audience room of the European Court of Human Rights in Strasbourg, eastern France. (Patrick Hertzog/AFP/Getty Images)


Artificial intelligence can now predict the outcomes of human rights cases with 79% accuracy, part of a growing trend of computer-driven tech being applied in novel ways. The accuracy is impressive, but many worry it could lead to eliminating human judgment from the rule of law.

By scanning court documents, the algorithm was able to anticipate judicial decisions handed down by the European Court of Human Rights (ECHR) with relative precision. The results were part of a study by an international group of researchers published in PeerJ Computer Science, an academic journal.

Led by Nikolaos Aletras, a research associate in University College London’s Department of Computer Science, the team argued that the machine-generated analysis offers a look at important parts of the judicial system, such as how ECHR judges split over the interpretation of the law. In their paper’s conclusion, the team makes the case for using the algorithm to improve the court’s efficiency in the future, given the backlog of ECHR cases awaiting trial. They argue that their artificial intelligence parallels the decision making of most judges, and that those patterns of decision making correlate with the evidence found in the court documents.
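
The paper describes a purely text-driven approach: a model is trained on the language of past judgments and their recorded outcomes, then asked to label new cases. As a rough illustration of that general idea only, and not the researchers’ actual pipeline, here is a minimal Python sketch built on a bag-of-words classifier; the case texts and labels are invented placeholders.

# Minimal sketch of the idea behind the study: predict "violation" vs.
# "no violation" from the text of a case. This is NOT the authors' pipeline;
# the documents and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training data: text of past judgments and their outcomes.
case_texts = [
    "The applicant was detained for three years without trial ...",
    "The applicant complained about the length of civil proceedings ...",
    "The court found the domestic remedies adequate and effective ...",
    "The applicant had access to counsel and a public hearing ...",
]
outcomes = ["violation", "violation", "no violation", "no violation"]

# N-gram features feeding a linear support vector machine, broadly the kind
# of bag-of-words setup the paper reports.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LinearSVC(),
)
model.fit(case_texts, outcomes)

# Label an unseen (made-up) case summary; prints one of the two outcomes.
new_case = ["The applicant was held incommunicado without judicial review ..."]
print(model.predict(new_case))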

That said, many are concerned that studies like this will pave the way for an algorithm-based judicial system absent the judgment of actual humans, à la the fictional Minority Report, Philip K. Dick’s sci-fi story (adapted into a film and television show) about a future where crimes are predicted and culprits are caught before they can commit them. But the truth is, aspects of that plot are a reality today. Algorithms are already applied to policing and sentencing in the United States.

Scene from the 2004 film adaptation of ‘Minority Report’ starring, from left, Tom Cruise, Neal McDonough, and Colin Farrell (20th Century Fox/DreamWorks/Everett Collection)


Applying a wealth of historical crime statistics, police departments in New York, Los Angeles, Philadelphia, Miami, and other cities around the nation use mapping software that highlights areas of high risk, some as small as 500 feet by 500 feet. While the maps do not identify specific individuals, a few mapping systems, like the one provided by PredPol, generate predictions about when and where a crime is likely to occur. PredPol says on its website that the software is designed to help law enforcement better allocate its resources.
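
PredPol’s actual model is proprietary, so the following is only a toy illustration of the grid idea described above: bin past incidents into 500-foot-square cells and flag the busiest ones. The incident coordinates are invented, and a real system would also weigh how recent each incident is.

# Toy illustration of grid-based hotspot mapping, NOT PredPol's model:
# bin past incidents into 500 ft x 500 ft cells and flag the busiest cells.
# All incident coordinates below are invented.
from collections import Counter

CELL_SIZE_FT = 500

# Hypothetical incident locations, in feet from a local origin (x, y).
incidents = [
    (120, 80), (430, 450), (460, 490), (350, 400),
    (1210, 90), (2040, 1980), (410, 470), (90, 60),
]

def cell_of(x_ft, y_ft):
    """Map a point to the index of its 500 x 500 ft grid cell."""
    return (int(x_ft // CELL_SIZE_FT), int(y_ft // CELL_SIZE_FT))

counts = Counter(cell_of(x, y) for x, y in incidents)

# The top cells are the "high risk" boxes a department might patrol more.
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} past incidents")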

As much as historical data can be a saving grace, the key flaw in predictive crime tools is their reliance on information that may be incorrect. The predictions are only as good as the data they rely on, which becomes a serious issue if that data reflects biased policing practices. This shortcoming extends beyond policing to sentencing. Counties across the United States, such as Florida’s Broward County, use an algorithm to assess criminal offenders’ likelihood of recidivism and make sentencing recommendations based on it.
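
The scoring tool Broward County uses is likewise proprietary, so the sketch below is only a generic illustration of a score-based risk model: a toy logistic regression with invented features and labels. The bias problem described above carries straight through; if a feature such as prior arrests reflects biased policing, the learned scores inherit that bias.

# Toy sketch of a score-based recidivism risk model, NOT the tool Broward
# County actually uses. Features, labels, and data are all invented.
# If "prior arrests" reflects biased policing, the scores inherit that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per offender: [age, prior_arrests, charge_severity]
X = np.array([
    [19, 4, 2],
    [45, 0, 1],
    [23, 7, 3],
    [37, 1, 1],
    [29, 3, 2],
    [52, 0, 2],
])
# Hypothetical labels: 1 = re-offended within two years, 0 = did not.
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# A "risk score" here is just the predicted probability of re-offending.
new_offender = np.array([[26, 2, 2]])
risk = model.predict_proba(new_offender)[0, 1]
print(f"estimated risk score: {risk:.2f}")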

The stakes for these algorithms become greater still when they’re applied to aspects of decision making beyond the criminal justice system. Increasingly, tasks such as reviewing job candidates are at least being augmented, if not replaced entirely, by artificial intelligence.

A model of the human brain constructed of wires and ports. (Getty Images)


Cathy O’Neil, a data scientist writing for The Guardian, explored this notion in her recent essay, “How algorithms rule our working lives.” Here’s the crux of her argument:

“Their popularity relies on the notion they are objective, but the algorithms that power the data economy are based on choices made by fallible human beings. And, while some of them were made with good intentions, the algorithms encode human prejudice, misunderstanding, and bias into automatic systems that increasingly manage our lives. Like gods, these mathematical models are opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, are beyond dispute or appeal.”

This flaw may ensure AI never replaces human judgment outright. If anything, it makes the latter more valuable, according to professor Ajay Agrawal of the University of Toronto. Agrawal said in a speech at a recent conference that while “machine intelligence is a substitute for human prediction,” it has greater potential serving as “a complement to human judgment, so the value of human judgment increases.”

To hear more of his take, watch Agrawal’s speech in the video below. Read the full study on artificial intelligence predicting human rights cases here. You can also read O’Neil’s full essay for The Guardian here. And if you’re curious about the confluence of artificial intelligence and criminal justice, you may find this ProPublica investigation intriguing.
