New Deepfake Detection Algorithms Address Bias in Facial Recognition

University at Buffalo researchers have developed new deepfake detection algorithms that aim to reduce bias and improve accuracy. Created by computer scientist Siwei Lyu and his team, the algorithms specifically address racial and gender disparities in deepfake detection: Lyu's earlier detection algorithms were biased toward incorrectly classifying faces with darker skin tones as fake. To tackle this bias, the team developed two machine learning methods, one that makes algorithms aware of demographics and another that keeps them blind to demographics. Their methods reduced disparities across races and genders while also improving overall accuracy. The research was supported in part by the U.S. Defense Advanced Research Projects Agency. These new algorithms have the potential to minimize errors and increase fairness in deepfake detection.
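The article does not detail how the demographic-aware method works, but one common way to make a detector "aware of demographics" during training is to balance the loss across demographic groups so that errors on smaller groups are not swamped by larger ones. The sketch below is purely illustrative and assumes this group-balanced loss formulation; the function name, inputs, and approach are assumptions, not the UB team's actual method.

```python
import numpy as np

def group_balanced_loss(probs, labels, groups):
    """Illustrative sketch (not the published method): binary cross-entropy
    averaged within each demographic group first, then averaged across
    groups, so no single group's errors dominate the training signal.

    probs  : predicted probability that each face is fake, shape (n,)
    labels : ground truth, 1 = fake, 0 = real, shape (n,)
    groups : demographic group id for each sample, shape (n,)
    """
    eps = 1e-12  # guard against log(0)
    bce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
    # Mean loss per group, then the unweighted mean over groups.
    group_means = [bce[groups == g].mean() for g in np.unique(groups)]
    return float(np.mean(group_means))
```

With an ordinary per-sample average, a group making up 90% of the data contributes 90% of the loss; averaging per group first gives each group equal weight, which is one simple way a detector can trade a little overall accuracy for smaller cross-group disparities.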