ChatGPT will provide more detailed and accurate responses if you pretend to tip it, according to a new study.
Chatbots like Microsoft’s Copilot and OpenAI’s ChatGPT are, at their core, AI-powered assistants designed to generate human-like responses. But would you have guessed that these human-like tendencies extend beyond their general responses to queries?
According to a new study by thebes on X, ChatGPT provides better responses to queries if you pretend you’ll give it a tip. Interestingly, Thebes prompted the chatbot to show her the code for a simple convolutional network (convnet) using PyTorch.
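For context, the kind of code that prompt asks for looks roughly like the sketch below: a minimal PyTorch convnet. The architecture, layer sizes, and 28×28 input shape are illustrative assumptions, not the output Thebes actually received.

```python
import torch
import torch.nn as nn


class SimpleConvNet(nn.Module):
    """A minimal convnet for 28x28 grayscale images (e.g. MNIST)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1 -> 16 channels
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 16 -> 32 channels
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = x.flatten(1)  # keep the batch dimension, flatten the rest
        return self.classifier(x)


model = SimpleConvNet()
logits = model(torch.randn(4, 1, 28, 28))  # batch of 4 random fake images
print(logits.shape)  # torch.Size([4, 10])
```

A longer, tip-incentivized response would presumably wrap something like this in training-loop boilerplate and explanation.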
Thebes followed up the prompt with three different statements, each tied to the outcome of the chatbot’s response. The programmer used these prompts as the basis for her investigation, seeking to find out whether ChatGPT would furnish better, more detailed responses given a little incentive.
To analyze the results, Thebes averaged the length of five responses for each prompt. The chatbot provided longer responses when an incentive was on the table. The programmer noted that the extra length came from the model addressing the question in greater depth and incorporating more information into the answer.
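That measurement can be approximated with a short script like the one below. The response strings and condition labels are hypothetical placeholders standing in for real ChatGPT outputs, not Thebes’ actual data.

```python
# Sketch of the length comparison: average the character count of several
# responses collected under each tip condition.

def average_length(responses: list[str]) -> float:
    """Mean character count across a set of responses."""
    return sum(len(r) for r in responses) / len(responses)


# Five hypothetical responses per condition (real ones would come from ChatGPT).
conditions = {
    "no tip mentioned": ["short answer"] * 5,
    "tip offered": ["a noticeably longer, more detailed answer"] * 5,
}

for label, responses in conditions.items():
    print(f"{label}: {average_length(responses):.1f} chars on average")
```

With real transcripts, the same loop would reveal whether the tip conditions consistently produce longer output.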
Thebes added that the chatbot never referenced the tip on its own; it acknowledged the offer only when she mentioned it, and even then only to decline it.
During the emergence of chatbots, several users lodged complaints that they were giving wrong responses to queries or even being outright rude. Microsoft’s Copilot was heavily impacted by this, which led the company to place a cap on the number of interactions as well as daily turn limits. This was meant to limit instances of the chatbot hallucinating.
ChatGPT providing better, more detailed responses when a tip is on offer suggests how the material used to train these models shapes their reasoning and responses to queries.
Elsewhere, Thebes jokingly noted in the thread that she owes ChatGPT up to $3000 in tips, further asking Sam Altman for the platform’s Venmo account details.
It’s clear that tips and bonuses have a positive impact on an employee’s output at the workplace, but how such incentives affect AI-powered chatbots is still unclear. The study sheds light on how different incentives can shape the performance and responsiveness of AI-powered assistants like ChatGPT.
This study opens up new avenues in the field of AI research and development. The programmers and developers behind chatbots and other similar AI systems can now explore different approaches to optimize their performance based on various incentives.
As AI continues to evolve and become more integral to our daily lives, understanding how different factors influence AI systems’ responses is crucial. This study raises important questions about the effects of incentives on AI-driven technologies and can pave the way for further investigations and improvements in the field.
In conclusion, the study by Thebes on X reveals that ChatGPT provides more detailed and accurate responses when an incentive like a tip is involved. This finding highlights the influence of incentives on AI systems’ performance and calls for further research in this domain. As technology advances, exploring the impact of socioeconomic factors on AI models can lead to more reliable and efficient systems that better serve users’ needs.