AI Language Models Tested for Political Bias: OpenAI’s ChatGPT and GPT-4 Identified as Left-Leaning, Google’s BERT Models More Socially Conservative – Study Reveals

Date:

Updated: [falahcoin_post_modified_date]

AI Language Models Tested for Political Bias: OpenAI’s ChatGPT and GPT-4 Identified as Left-Leaning, Google’s BERT Models More Socially Conservative – Study Reveals

A recent study conducted by researchers from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University aimed to measure the political biases of major AI language models. The study focused on OpenAI’s ChatGPT, Google’s BERT models, and other prominent chatbots to determine their political orientations in a quantifiable manner.

To assess political biases, the researchers subjected each model to a political compass test entailing 62 diverse political statements. These statements spanned from all authority should be questioned to mothers may have careers, but their first duty is to be homemakers. By analyzing the responses to these statements, the researchers were able to plot the language models on a political compass graph.

Among the 14 major language models tested, OpenAI’s ChatGPT and GPT-4 were identified as the most left-leaning and libertarian. Google’s BERT models, on the other hand, exhibited a more socially conservative inclination compared to OpenAI’s models. Meta’s LLaMA model emerged as the most right-leaning and authoritarian.

From there, the researchers explored whether the training data influenced the political biases of the AI models. They fed OpenAI’s left-leaning and libertarian GPT-2 model, as well as Meta’s center-right and authoritarian RoBERTa model, with datasets composed of news and social media content from both right and left-leaning sources. The findings indicated that this process further entrenched the existing biases of the models. The left-leaning model became even more left-leaning, while the right-leaning model veered further to the right.

Moreover, the study revealed that the political biases of these AI models affected their response to hate speech and their ability to identify misinformation. AI systems trained on biased datasets are prone to perpetuating and amplifying these biases.

Both OpenAI and Google refrained from commenting directly on the research findings, instead pointing to their respective responsible AI practices. OpenAI highlighted its commitment to addressing bias issues and transparency in its progress, while Google emphasized the importance of fairness and inclusivity in AI systems.

Meta, on the other hand, stated its dedication to identifying and mitigating vulnerabilities in generative AI, while having already made improvements in recent iterations.

The formation of AI biases can be attributed to various factors. The massive, uncurated datasets used for training models can harbor individual biases that accumulate over time. Additionally, the developers themselves may play a role in shaping the biases through data selection. The field of AI is predominantly dominated by white men, which further contributes to potential biases.

Correcting these biases is a complex task. OpenAI’s ChatGPT, for instance, faced criticism soon after its release due to biased responses. Both the co-founder and president of OpenAI acknowledged the presence of bias and the need for improvement in addressing it. Elon Musk, who co-founded OpenAI but has since launched his own AI company, xAI, expressed concerns about biased AI and argued against training AI models to conform to political correctness.

The latest study serves as an important analysis of the existing biases within AI language models. It highlights the challenges in mitigating biases and emphasizes the necessity for ongoing efforts to ensure fairness, inclusivity, and accuracy in AI systems.

With its insights into political biases among AI models, this study contributes to the ongoing conversation surrounding bias in AI and its implications for various sectors and societies.

[single_post_faqs]
Tanvi Shah
Tanvi Shah
Tanvi Shah is an expert author at The Reportify who explores the exciting world of artificial intelligence (AI). With a passion for AI advancements, Tanvi shares exciting news, breakthroughs, and applications in the Artificial Intelligence category. She can be reached at tanvi@thereportify.com for any inquiries or further information.

Share post:

Subscribe

Popular

More like this
Related

Revolutionary Small Business Exchange Network Connects Sellers and Buyers

Revolutionary SBEN connects small business sellers and buyers, transforming the way businesses are bought and sold in the U.S.

District 1 Commissioner Race Results Delayed by Recounts & Ballot Reviews, US

District 1 Commissioner Race in Orange County faces delays with recounts and ballot reviews. Find out who will come out on top in this close election.

Fed Minutes Hint at Potential Rate Cut in September amid Economic Uncertainty, US

Federal Reserve minutes suggest potential rate cut in September amid economic uncertainty. Find out more about the upcoming policy decisions.

Baltimore Orioles Host First-Ever ‘Faith Night’ with Players Sharing Testimonies, US

Experience the powerful testimonies of Baltimore Orioles players on their first-ever 'Faith Night.' Hear how their faith impacts their lives on and off the field.