Government Study Confirms Face Recognition Technology Is Racist

Face recognition algorithms produce disproportionately more false positives for non-Caucasian faces

Like most aspects of society, face recognition technology works fine — if you're white. (Getty)

Surprise: America has a racism problem. Now it turns out that problem extends to the country’s technology as well. According to MIT Technology Review, a new study from the US National Institute of Standards and Technology (NIST) reveals that face recognition algorithms developed in the United States perform worse on nonwhite faces.

The study tested 189 face recognition algorithms submitted by 99 developers on two tasks: “one-to-one” matching, which checks whether a photo of someone matches another photo of the same person in a database and is used to unlock smartphones or verify passports, and “one-to-many” matching, which checks whether a photo of someone matches any photo in a database and is often used by police departments to identify suspects in an investigation.
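In rough terms, one-to-one matching is verification (is this photo the same person as that photo?) while one-to-many matching is identification (does this photo match anyone in a database?). The following minimal Python sketch illustrates the distinction; the cosine-similarity scoring, the threshold value and every name in it are illustrative assumptions, not NIST’s test methodology or any vendor’s actual code.

```python
# Minimal sketch of the two matching modes described above.
# Assumes faces are already encoded as fixed-length embedding vectors
# (e.g., by a neural network); all names and the threshold are illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one_match(probe: np.ndarray, enrolled: np.ndarray,
                     threshold: float = 0.6) -> bool:
    """Verification: does the probe photo match one claimed identity?
    (The smartphone-unlock / passport-check case.)"""
    return cosine_similarity(probe, enrolled) >= threshold

def one_to_many_match(probe: np.ndarray, gallery: dict[str, np.ndarray],
                      threshold: float = 0.6) -> list[str]:
    """Identification: which identities in a database, if any, match the
    probe? (The police-investigation case.) Any returned identity that
    is not actually the probe's identity is a false positive."""
    return [name for name, embedding in gallery.items()
            if cosine_similarity(probe, embedding) >= threshold]
```

The one-to-many case is where false positives matter most in practice: every innocent person whose database entry clears the threshold is a potential wrongful suspect.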

Overall, NIST found that the algorithms performed worse on nonwhite faces. In one-to-one matching, systems were more likely to return a false positive (a match between two photos of different people) for Asian and African-American faces than for Caucasian faces, while one-to-many matching tests produced the highest false positive rates for African-American women. Across the board, however, Native Americans suffered the highest false positive rates.
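For readers who want the metric concretely: a group’s false positive rate is the fraction of different-person comparisons that the system wrongly declares a match. Here is a hedged Python sketch of how such a rate might be computed per demographic group from labeled comparison trials; the data layout and group labels are made up for illustration, not taken from the NIST report.

```python
from collections import defaultdict

def false_positive_rate_by_group(trials):
    """Each trial is (group, same_person, system_said_match).
    FPR = wrongly matched different-person pairs / all different-person pairs."""
    false_pos = defaultdict(int)
    negatives = defaultdict(int)
    for group, same_person, said_match in trials:
        if not same_person:            # a different-person (impostor) pair
            negatives[group] += 1
            if said_match:             # system wrongly declared a match
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Toy example: a disparity appears when one group's rate exceeds another's.
trials = [
    ("A", False, True), ("A", False, False),   # group A: FPR 0.5
    ("B", False, False), ("B", False, False),  # group B: FPR 0.0
]
print(false_positive_rate_by_group(trials))    # {'A': 0.5, 'B': 0.0}
```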

These findings carry huge risks, as facial recognition technology is increasingly employed in law enforcement, border control and other high-stakes situations. While previous studies have suggested similar results, NIST’s largest-of-its-kind report confirms those earlier findings, calling the technology’s widespread use into question.
