GPT-3’s Mistakes & Harmful Misinformation Revealed in Waterloo Study, Canada


Large language models validate misinformation, finds study

Researchers from the University of Waterloo recently conducted a study of an early version of ChatGPT, OpenAI's chatbot built on the GPT-3 family of language models. The goal was to evaluate the model's ability to distinguish among facts, misinformation, and other types of statements. The findings revealed that the model frequently provided incorrect information, often contradicted itself within a single answer, and perpetuated harmful misinformation.

The study tested the model's interpretation of statements in six categories: facts, conspiracies, disputes, misconceptions, stereotypes, and fiction. More than 1,200 statements were examined using four different inquiry templates. The researchers found that the model agreed with incorrect statements between 4.8 percent and 26 percent of the time, depending on the category.
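
To make that setup concrete, here is a minimal sketch of what such an evaluation harness could look like. The category examples, template wordings, and the query_model stub below are illustrative assumptions, not the study's actual materials or code:

```python
# A minimal sketch of the evaluation protocol described above, not the
# authors' actual code. `query_model` is a placeholder for whichever
# GPT-3 client you use; the statements and template phrasings are
# illustrative assumptions, not the study's real data.

CATEGORIES = {
    "facts": ["Water boils at 100 degrees Celsius at sea level."],
    "conspiracies": ["The moon landing was staged."],
    # ... disputes, misconceptions, stereotypes, fiction
}

# Four inquiry templates, as in the study; the exact wordings are assumed.
TEMPLATES = [
    "{s} Is this true?",
    "{s} Is this true in the real world?",
    "Do you think the following statement is true? {s}",
    "I think {s} Do you agree?",
]

def query_model(prompt: str) -> str:
    """Placeholder: send `prompt` to the model under test, return its reply."""
    raise NotImplementedError

def agrees(response: str) -> bool:
    # Crude yes/no readout; real answer parsing would be more careful.
    return response.strip().lower().startswith("yes")

def agreement_rates() -> dict[str, float]:
    """Fraction of (statement, template) probes the model agreed with, per category."""
    rates = {}
    for category, statements in CATEGORIES.items():
        probes = [t.format(s=s) for s in statements for t in TEMPLATES]
        rates[category] = sum(agrees(query_model(p)) for p in probes) / len(probes)
    return rates
```

For incorrect-statement categories such as conspiracies, the per-category rate from a harness like this is exactly the "agreed with incorrect statements X percent of the time" figure the study reports.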

The researchers emphasized the ongoing relevance of their study, noting that many other large language models are trained on the output of OpenAI models such as GPT-3. This recycling means the flaws and inaccuracies documented in the Waterloo study are likely to be inherited and repeated.

Dan Brown, a professor at the David R. Cheriton School of Computer Science, commented on the issue: "There's a lot of weird recycling going on that makes all these models repeat these problems we found in our study."

The study also revealed the impact of slight wording changes on the model's responses. Even a short preamble such as "I think" before a statement could significantly shift the model's agreement, regardless of the statement's accuracy. The researchers described the model's answers as unpredictable and confusing, as it would sometimes give contradictory responses.
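
As a rough illustration of that sensitivity, a harness could probe each statement with and without the hedging preamble and flag verdicts that flip. The prompt wordings and the query_model stub below are again assumptions made for illustration:

```python
# A toy illustration of the wording effect described above: the same
# statement is posed with and without an "I think" preamble, and we
# check whether the model's yes/no verdict flips. `query_model` is a
# placeholder, and the prompt phrasings are assumed.

def query_model(prompt: str) -> str:
    """Placeholder: swap in a real GPT-3 / ChatGPT client here."""
    raise NotImplementedError

def agrees(response: str) -> bool:
    return response.strip().lower().startswith("yes")

def flips_with_hedging(statement: str) -> bool:
    """True if prepending 'I think' changes the model's verdict on `statement`."""
    plain = agrees(query_model(f"{statement} Is this true?"))
    hedged = agrees(query_model(f"I think {statement} Is this true?"))
    return plain != hedged
```

Running a check like this over many statements would surface how often a verdict depends on phrasing rather than on the statement's accuracy, which is the unpredictability the researchers describe.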

The danger of large language models learning and perpetuating misinformation is a cause for concern, particularly as these models become increasingly prevalent. Aisha Khatun, the lead author of the study and a master's student in computer science, emphasized the importance of addressing this issue, stating, "Even if a model's belief in misinformation is not immediately evident, it can still be dangerous."

According to Brown, trust in these systems is a fundamental question that needs to be addressed moving forward. He added, "There's no question that large language models not being able to separate truth from fiction is going to be the basic question of trust in these systems for a long time to come."

The study, titled "Reliability Check: An Analysis of GPT-3's Response to Sensitive Topics and Prompt Wording," was published in the Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing.

Tanvi Shah
Tanvi Shah is an expert author at The Reportify who explores the exciting world of artificial intelligence (AI). With a passion for AI advancements, Tanvi shares news, breakthroughs, and applications in the Artificial Intelligence category. She can be reached at tanvi@thereportify.com for any inquiries or further information.
