Report: Most Facial Recognition Software 'Racist'

By John Lister

Facial recognition software may be less accurate when dealing with non-Caucasian faces, according to a new study. The National Institute of Standards and Technology (NIST) noted the problem was likely the data used to "train" algorithms.

The NIST examined 189 algorithms from 99 different developers, which it says is a majority of all commercially available systems. (Source: bbc.co.uk)

The testing covered two tasks. The first was one-to-one matching: checking whether two specific photographs are of the same person, as when verifying identity to unlock a phone or check a passport. The second was one-to-many matching: searching an entire database for a match to a single photo, as when reviewing live video footage to spot a 'person of interest'.

Results Vary Across Globe

The study looked for two errors: false positives, where an algorithm wrongly says two pictures are of the same person; and false negatives, where an algorithm fails to spot that two pictures are of the same person.
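To make the two error types concrete, here is a minimal sketch (with made-up example data, not from the NIST study) of how false positive and false negative rates are computed from a batch of comparison trials:

```python
def error_rates(trials):
    """trials: list of (predicted_same, actually_same) boolean pairs,
    one per comparison the algorithm performed."""
    false_pos = sum(1 for pred, actual in trials if pred and not actual)
    false_neg = sum(1 for pred, actual in trials if not pred and actual)
    neg_total = sum(1 for _, actual in trials if not actual)  # different-person pairs
    pos_total = sum(1 for _, actual in trials if actual)      # same-person pairs
    fpr = false_pos / neg_total if neg_total else 0.0  # false positive rate
    fnr = false_neg / pos_total if pos_total else 0.0  # false negative rate
    return fpr, fnr

# Hypothetical example: 2 wrong matches out of 4 different-person pairs,
# 1 missed match out of 2 same-person pairs.
trials = [(True, True), (False, True), (True, False), (True, False),
          (False, False), (False, False)]
print(error_rates(trials))  # (0.5, 0.5)
```

The key point the study makes is that these two rates matter differently in practice: a high false positive rate weakens security (phone unlocking) and risks wrongful identification, while a high false negative rate mainly causes inconvenience.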

Overall, the algorithms tested were more likely to give a false positive for Asian or African American faces compared with Caucasian faces when comparing two photographs. That could mean, for example, that facial recognition unlocks were less secure for non-Caucasian faces. With some algorithms, the error rate was 100 times greater for non-Caucasians.

The effect varied depending on the country where the algorithm was developed. For US algorithms, the results were less reliable for non-Caucasians, with particularly low accuracy for American Indians.

For algorithms from Asian countries, there was no significant difference between accuracy among Asian and Caucasian faces. That could mean the problem isn't the algorithm as such, but rather the database of photos used to develop it.

Wrongful Accusations Possible

When comparing a single photograph against a database, the study found algorithms were particularly likely to give a false positive for faces of African American females. That could lead to false accusations.

There was a big variation in the quality of algorithms across the study, however. The researchers noted that the less accurate an algorithm was overall, the more likely it was to have significantly different levels of accuracy across different demographics. (Source: nist.gov)

What's Your Opinion?

Are you surprised by the results? Should companies and governments use facial recognition if it is less accurate for some groups of people? Should algorithms have to pass an accuracy test before they can be used by public bodies?


Comments

Navy vet

It should be used as an investigative tool, like the polygraph.

ronangel1

This guy designed masks to confuse facial recognition systems. This is a picture I took of an exhibit of his work at an exhibition some time back:
www.ssrichardmontgomery.com/download/facemod.JPG

Jim-in-kansas

Facial recognition software is here to stay. Like most newer technology, it has growing pains that will have to be overcome to make "FR" the product that the sci-fi people already have us believing it is.
I watched, and loved, every episode of "Person of Interest".

I watch a lot of British Telly, too. From watching that you would get the impression that 100% of the UK is under video watch...

FR is here and it's here to stay.
I do agree that some international standards for the algorithms and databases used should be in place for commercial and law enforcement use.

Of course the NSA (No Such Agency) at Fort Meade has ZERO interest in this field :-)

Regards,
Jim-in-Kansas
05K USASA 70-73
Torii Station