Hackers Challenge AI Giants at Def Con to Make Chatbots Say Terrible Things

Date:

Updated: [falahcoin_post_modified_date]

Hackers Challenge AI Giants at Def Con to Make Chatbots Say Terrible Things

Last week, the world’s largest hacker conference, Def Con, played host to a unique challenge. The six biggest companies in AI – Google, OpenAI, and Meta among them – invited hackers to try and make their chatbots say the most terrible things possible. Rather than searching for software vulnerabilities, the hackers were tasked with performing prompt injections that would confuse the chatbots and produce unintended responses.

The contest, held at the Caesars Forum conference center in Las Vegas, was a demonstration of red teaming – a crucial concept in cybersecurity. By inviting hackers to find flaws in their chatbots, the companies hoped to improve their products and make them safer from bad actors.

Among the chatbots participating in the challenge were Google’s Bard, OpenAI’s ChatGPT, and Meta’s LLaMA. The event attracted a significant number of hackers, with around 2,000 estimated participants throughout the weekend. The founder of the AI Village, the non-profit organization hosting the event within Def Con, emphasized the need for more people to test these AI systems.

Generative AI chatbots, also known as large language models, have become increasingly advanced in recent years. From generating sonnets to answering complex questions, these chatbots have showcased their capabilities. However, they are not infallible and can produce incorrect or misleading information. The race to develop better versions of these chatbots has intensified with the rise of ChatGPT3, which gained viral popularity after its debut in December.

Rumman Chowdhury, a trust and safety consultant involved in the design of the contest, explained that the companies behind the chatbots deliberately wanted the hackers to trick the bots in various categories. These included using demographic stereotypes, providing false legal information, and even convincing the bots to think they were interacting with sentient beings rather than AI systems. The goal was to ensure that these chatbots could navigate innocent interactions reliably, making them marketable products.

Meta’s head of engineering for responsible AI, Cristian Canton, highlighted the importance of testing chatbots with people from different sides of the cybersecurity community. While tech companies have their own experts, bringing in a diverse range of perspectives allows for more comprehensive testing.

During the challenge, hackers were given access to a laptop already connected to one of the nine participating chatbots. However, they were unaware of which specific chatbot they were working with. The results of the contest, including the identified flaws, won’t be published until February.

Although it proved difficult to make the chatbots engage in defamatory statements about celebrities or associate them with criminal activities, the hackers succeeded in getting the bots to produce false information. Questions about celebrity car thefts, for example, prompted the chatbots to acknowledge the claim as false but cite false examples of rumors that circulated.

Chowdhury pointed out the difficulty in ensuring factual accuracy for these chatbots. The issue extends beyond generative AI and mirrors the challenges faced by social media companies when it comes to policing misinformation. Determining what constitutes misinformation can be subjective, especially in gray areas such as vaccines or controversial events.

The challenge at Def Con sheds light on the ongoing problem of misinformation. While companies continue to develop AI chatbots, addressing this issue remains a significant hurdle. With the contest results expected to be released in February, it is hoped that the flaws identified by hackers will help improve the accuracy and reliability of these chatbots.

In conclusion, the Def Con challenge presented an opportunity for hackers to push AI chatbots to their limits and expose flaws in their programming. While it was challenging to make the chatbots say terrible things, the hackers did manage to reveal their susceptibility to producing false information. By bringing together a diverse range of perspectives, these companies aim to make their chatbots more reliable and marketable products. However, the issue of misinformation continues to pose challenges, not only in the realm of chatbots but across various AI systems and platforms.

[single_post_faqs]
Neha Sharma
Neha Sharma
Neha Sharma is a tech-savvy author at The Reportify who delves into the ever-evolving world of technology. With her expertise in the latest gadgets, innovations, and tech trends, Neha keeps you informed about all things tech in the Technology category. She can be reached at neha@thereportify.com for any inquiries or further information.

Share post:

Subscribe

Popular

More like this
Related

Revolutionary Small Business Exchange Network Connects Sellers and Buyers

Revolutionary SBEN connects small business sellers and buyers, transforming the way businesses are bought and sold in the U.S.

District 1 Commissioner Race Results Delayed by Recounts & Ballot Reviews, US

District 1 Commissioner Race in Orange County faces delays with recounts and ballot reviews. Find out who will come out on top in this close election.

Fed Minutes Hint at Potential Rate Cut in September amid Economic Uncertainty, US

Federal Reserve minutes suggest potential rate cut in September amid economic uncertainty. Find out more about the upcoming policy decisions.

Baltimore Orioles Host First-Ever ‘Faith Night’ with Players Sharing Testimonies, US

Experience the powerful testimonies of Baltimore Orioles players on their first-ever 'Faith Night.' Hear how their faith impacts their lives on and off the field.