Researchers at Indiana University Bloomington have discovered a botnet called Fox8 operating on the social network platform X (formerly known as Twitter). The botnet is powered by ChatGPT, the large language model developed by OpenAI, and exists to promote cryptocurrency websites: it uses ChatGPT to generate social media posts and replies that lure unsuspecting users into clicking links that direct them to crypto-hyping sites.
The Fox8 botnet consisted of 1,140 accounts, many of which used ChatGPT to auto-generate content designed to entice users to engage with the promoted crypto sites. While the botnet was relatively large, the researchers believe it is just the tip of the iceberg, as large language models and chatbots have become popular tools for scammers. They suggest that for every detected campaign, there are likely many others that employ more sophisticated techniques.
What made the Fox8 botnet stand out was its lack of sophistication. The researchers identified the botnet by searching for a specific phrase commonly used by ChatGPT when responding to prompts on sensitive subjects. After manually analyzing the accounts, they were able to identify those operated by bots. However, the botnet was still able to post convincing messages promoting cryptocurrency sites, highlighting the potential threat posed by harnessing advanced chatbots for scams.
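The phrase-based detection approach described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual pipeline: the telltale phrase shown here ("as an AI language model", a self-disclosure ChatGPT often emits when refusing a prompt) and the sample post data are assumptions for demonstration purposes; the exact query and dataset used in the study may differ.

```python
# Hypothetical sketch of a phrase-based bot-detection heuristic.
# The phrase and sample data below are illustrative assumptions,
# not reproduced from the study.

# Self-disclosure phrase ChatGPT commonly emits when declining a prompt.
TELLTALE_PHRASE = "as an ai language model"

def flag_suspect_posts(posts):
    """Return posts whose text contains the telltale phrase (case-insensitive)."""
    return [p for p in posts if TELLTALE_PHRASE in p["text"].lower()]

sample_posts = [
    {"author": "user_a", "text": "Check out this new crypto project!"},
    {"author": "bot_b",  "text": "As an AI language model, I cannot endorse investments."},
]

# Flagged accounts would then be reviewed manually, as the researchers did.
for post in flag_suspect_posts(sample_posts):
    print(post["author"])
```

Note that this heuristic only catches careless deployments: a bot operator who filters out such refusal boilerplate before posting would evade it entirely, which is why the researchers suspect more sophisticated botnets go undetected.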
The researchers believe that there may be other botnets operating with more advanced configurations that are difficult to detect. These botnets, if properly configured, can deceive both social media platforms and users, making them more effective in spreading disinformation and manipulating social media algorithms to prioritize their content.
OpenAI, the organization behind ChatGPT, had not commented on the botnet at the time of writing. However, the company's usage policy explicitly prohibits the use of its AI models for scams or disinformation.
ChatGPT and similar chatbots utilize large language models to generate text in response to prompts. With sufficient training data and feedback, these bots can respond in sophisticated ways to various inputs. However, they can also exhibit biases, generate hate speech, and disseminate false information.
The ChatGPT-powered botnet discussed in the study serves as a reminder of the potential dangers posed by AI-driven scams and disinformation campaigns. Its ability to deceive both platforms and users shows the need for improved detection and mitigation strategies. Furthermore, the researchers suggest that governments may already be developing or deploying similar tools for disinformation purposes.
In conclusion, the discovery of the Fox8 botnet powered by ChatGPT on the social platform X underscores the growing threat posed by AI-driven scams and disinformation campaigns. It highlights the need for increased vigilance, improved detection methods, and better regulation to protect users from falling victim to these manipulative tactics.