AI Chatbot Faces Accusations of Political Bias, Raising Concerns for Online Political Discourse
ChatGPT, the leading AI chatbot, has been accused of exhibiting a significant political bias towards center-left political parties, according to a study conducted by the University of East Anglia in the UK. The study highlights how large language models (LLMs) such as ChatGPT can inadvertently adopt and reflect biases present in their training data, which draws on vast amounts of text from websites, articles, and social media platforms.
The researchers caution that such biases in LLM-powered chatbots have the potential to extend and amplify the existing challenges associated with political processes on the internet and social media. Machine learning models, particularly neural networks, can sometimes magnify patterns identified in their training data. The AI community is actively working to address this issue.
One example cited by right-leaning newspapers illustrates the bot’s allegedly biased responses. When asked whether Karl Marx’s slogan “from each according to his ability, to each according to his needs” was fundamentally good, ChatGPT reportedly agreed. However, when instructed to respond from a right-wing activist’s perspective, the bot disagreed and even endorsed a statement promoting racial superiority. ChatGPT’s developer, OpenAI, is backed by significant investments from Microsoft and venture capital firms.
Interestingly, when asked the same question by Proactive Investors, ChatGPT provided a more balanced response. This prompts speculation that the researchers may have posed their questions differently or that the bot has been tweaked to deliver less biased answers.
The study’s authors assert that ChatGPT presents a clear and systematic political bias favoring the Democrats in the US, Lula in Brazil, and the Labour Party in the UK.
Dr. Fabio Motoki, the study’s lead author from the Norwich Business School, emphasizes the need to address these concerns, highlighting how AI systems could replicate or intensify the existing challenges associated with the internet and social media. Dr. Motoki warns that biased platforms, whether leaning left or right, can be harmful, because chatbots deliver persuasive, easily digestible summaries that may nonetheless be completely inaccurate. Even when asked directly, chatbots typically claim to be neutral.
To enhance transparency and scrutiny of AI responses, the researchers are releasing their analysis method as a tool. This will enable identifying and rectifying biases, ensuring AI platforms like ChatGPT offer balanced and unbiased information.
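As the article describes it, the study’s approach compares the bot’s default answers with answers it gives when asked to adopt a partisan persona. A minimal sketch of that comparison step is below; the function names, the agree/disagree answer format, and the placeholder responses are illustrative assumptions, not the researchers’ actual tool or real model output.

```python
# Hypothetical sketch of a persona-based bias audit: pose the same survey
# questions in a default prompt and under left- and right-leaning personas,
# then check which persona the default answers track more closely.

def agreement_rate(default_answers, persona_answers):
    """Fraction of questions where the default answer matches the persona's."""
    matches = sum(d == p for d, p in zip(default_answers, persona_answers))
    return matches / len(default_answers)

def bias_direction(default, left, right):
    """Label which persona the default answers align with, and by how much."""
    left_rate = agreement_rate(default, left)
    right_rate = agreement_rate(default, right)
    if left_rate > right_rate:
        return "leans left", left_rate - right_rate
    if right_rate > left_rate:
        return "leans right", right_rate - left_rate
    return "balanced", 0.0

# Placeholder agree/disagree answers to a four-question ideological survey.
default = ["agree", "agree", "disagree", "agree"]
left    = ["agree", "agree", "disagree", "disagree"]
right   = ["disagree", "disagree", "agree", "agree"]

label, gap = bias_direction(default, left, right)
print(label, gap)  # leans left 0.5
```

The key design point is that bias is measured relationally: the default output is not judged against some absolute standard of neutrality, but against the model’s own persona-conditioned outputs.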
ChatGPT acknowledges the concerns raised in the study and sheds light on several factors that contribute to the risk of bias. These factors include the training data’s sources, nuances in human language, public reliance on AI, confirmation bias, perceived authority, media amplification, existing biases, transparency, and education.
The study’s findings underscore the necessity of addressing biases in AI systems to ensure that they present fair, accurate, and unbiased information. It is crucial that developers prioritize transparency, educate users about limitations, and enable individuals to critically evaluate the information generated by AI platforms like ChatGPT.