Federal Study of Top Facial Recognition Algorithms Finds Empirical Evidence of Bias
- Posted by Makayla Shaffer
- On December 23, 2019
A new federal study has found that many of the world’s top facial recognition algorithms are biased along lines of age, race, and ethnicity. The study, conducted by the National Institute of Standards and Technology (NIST), revealed that algorithms currently on the market can misidentify members of certain demographic groups up to 100 times more often than others. The researchers tested 189 algorithms from 99 organizations, which together account for a majority of the facial recognition systems in use today. NIST says it found empirical evidence that demographic factors such as age, gender, and race affect the accuracy of the majority of the algorithms tested.
These findings add to the evidence that even the most advanced facial recognition algorithms in the world are not ready to be used in critical settings like law enforcement and national security. Lawmakers called the NIST study “shocking” and urged the US government to reconsider its plans to use the technology to secure its borders. Because NIST’s study relied on voluntary submissions of algorithms for testing, some companies’ algorithms are missing, including Amazon’s, which sells its facial recognition software to local police and federal investigators.
Experts say it is possible to reduce bias in these algorithms by training them on more diverse data. For example, the researchers found that algorithms developed in Asian countries showed a smaller gap in error rates between white and Asian faces than algorithms developed in the United States. However, fixing bias won’t solve all of the problems surrounding facial recognition if the technology continues to be used in ways that do not respect people’s security or privacy.
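To make the kind of disparity NIST measured more concrete, the sketch below shows one common way a per-group error metric, the false match rate on non-mated image pairs, can be compared across demographic groups. This is a minimal, hypothetical illustration: the group labels, similarity threshold, and toy data are invented for the example and are not drawn from NIST’s benchmark or protocols.

```python
# Minimal sketch: comparing false match rates across demographic groups.
# All names here (group labels, THRESHOLD, the toy data) are hypothetical;
# NIST's actual evaluation uses far larger datasets and standardized tests.
from collections import defaultdict

THRESHOLD = 0.8  # assumed similarity cutoff for declaring a "match"

def false_match_rate_by_group(comparisons):
    """comparisons: iterable of (group, similarity_score, is_same_person)."""
    trials = defaultdict(int)  # non-mated comparisons seen per group
    errors = defaultdict(int)  # non-mated pairs wrongly declared a match
    for group, score, same_person in comparisons:
        if not same_person:          # only non-mated pairs can false-match
            trials[group] += 1
            if score >= THRESHOLD:   # algorithm incorrectly says "match"
                errors[group] += 1
    return {g: errors[g] / trials[g] for g in trials if trials[g]}

# Toy data: (demographic group, similarity score, ground-truth same person?)
data = [
    ("group_a", 0.85, False), ("group_a", 0.40, False),
    ("group_b", 0.90, False), ("group_b", 0.88, False),
    ("group_b", 0.30, False), ("group_a", 0.95, True),
]
print(false_match_rate_by_group(data))
# e.g. {'group_a': 0.5, 'group_b': 0.666...}; the ratio between groups'
# rates is the sort of disparity (up to 100x) the NIST study reports.
```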