AI Chatbots Display Racial Bias Towards African Americans: Research Findings, US

New research reveals that leading AI chatbots harbor racist biases against speakers of African American dialects, even after multiple rounds of anti-racism training. A team of researchers from the Allen Institute for AI, Stanford University, and the University of Chicago conducted extensive testing on several large language models (LLMs) and found that the models persistently apply racist stereotypes. The chatbots displayed prejudice when recommending jobs for speakers of African American dialects, often suggesting positions that do not require higher education.

The findings also showed that larger language models exhibit more pronounced racism than their smaller counterparts, possibly because of the vast amount of training data they are exposed to. Stanford University assistant professor Roxana Daneshjou emphasized the real-world repercussions of such biases, stating that these inaccuracies could exacerbate health disparities.

The latest revelations underscore the pressing need for continued efforts to address racial biases in AI technology.