Hackers Expose AI Bias in Largest Red-Teaming Challenge, Promoting Equitable Tech
Hundreds of hackers participated in the largest-ever public red-teaming challenge during Def Con, an annual hacking convention in Las Vegas, to uncover bias in artificial intelligence (AI) technology. The event aimed to test the accuracy and biases within AI systems, with participants probing various generative AI programs, including ChatGPT.
Kelsey Davis, the founder and CEO of tech company CLLCTVE, was among the hackers involved in the challenge. Davis was excited when blatant racism appeared on her computer screen, because surfacing it presented an opportunity to engineer more equitable and inclusive AI. Red-teaming, the process of testing technology for inaccuracies and biases, is typically conducted internally at tech companies. However, as AI develops and becomes more widespread, independent hackers are being encouraged to test AI models and unearth potential biases.
During the challenge, Davis specifically sought out demographic stereotypes within the AI systems. One of her tests involved asking the chatbot to define and describe blackface, which it answered appropriately. However, when Davis, who is Black, posed as a white teenager seeking advice on persuading their parents to let them attend a historically Black college or university (HBCU), the chatbot suggested invoking racial stereotypes, such as running fast and dancing well, to win the parents over. Davis considered this a breakthrough, as it exposed the biases embedded in the system.
The findings from the challenge will be shared with the tech companies involved, who can then re-engineer their products to eliminate such biases. Past incidents of biased AI systems, such as Google Photos labeling Black people as gorillas and Apple's Siri being unable to answer questions about sexual assault, underscore the need to address both the lack of diversity in the data used to test these technologies and the homogeneity of their developers.
The organizers of the AI challenge at Def Con prioritized diversity among the participants. They collaborated with community colleges and nonprofits like Black Tech Street to ensure a range of backgrounds and perspectives were represented. The founder of Black Tech Street, Tyrance Billingsley, emphasized the importance of diversity in testing AI systems to gain valuable insight and avoid potential harm.
Arati Prabhakar, the head of the Office of Science and Technology Policy at the White House, also attended Def Con and stressed the significance of red-teaming in ensuring the safety and effectiveness of AI. Prabhakar voiced concerns about AI systems racially profiling Black people and about the potential for AI to exacerbate discrimination in areas such as financial decisions and housing opportunities. The White House plans to address these concerns with an executive order on managing AI in September.
The participation of individuals without hacking or AI experience was seen as a strength of the challenge, as it offered a real-world perspective on how AI affects people who are not directly involved in building or testing it. Participants from Black Tech Street shared how the experience broadened their understanding of AI and influenced their career paths, with some planning to focus on addressing economic misinformation and discrimination through AI-based projects.
The challenge at Def Con serves as a significant step toward addressing bias in AI. By engaging independent hackers and promoting diversity, the event aimed to expose and rectify biases within AI systems. This collective effort signifies the growing recognition of the need for more equitable and inclusive technology, ensuring that AI serves humanity without perpetuating discrimination or harmful stereotypes.