ChatGPT Hallucinations Open Developers to Supply Chain Malware Attacks

Researchers have found that artificial intelligence (AI) could inadvertently enable malware attacks via the ChatGPT chatbot. ChatGPT is known to generate plausible but fictional information, a behavior called hallucination that occurs because the model produces answers from its training data even when that data is biased, outdated, or insufficient.

Threat actors can exploit these "AI package hallucinations" to seed malicious code. An attacker asks ChatGPT for coding help and watches for it to recommend a package that is unpublished or does not exist. The attacker then publishes a malicious package under that name and waits for ChatGPT to make the same recommendation to legitimate developers, who may install it unintentionally. The malicious code can then make its way into a legitimate application or code repository, putting the software supply chain at risk.

To catch bad code before it is incorporated into an application or published, developers should validate any library ChatGPT recommends before downloading it. Checking the package's creation date, its number of downloads and comments, its ratings, and any attached notes can help flag suspicious packages; some of these checks can even be automated, as the sketch below shows.

ChatGPT is popular, and some experts argue that its security risk is overhyped. However, millions of people have embraced ChatGPT at work, creating a major opportunity for attackers.
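As a minimal sketch of what that vetting could look like, the following Python script queries PyPI's public JSON metadata API for each suggested package name and flags a few of the red flags mentioned above: a name that does not exist at all and is therefore free for an attacker to claim, a very recent creation date, a missing description, and no linked source repository. The threshold and the specific checks are illustrative assumptions, not the researchers' methodology; download counts and ratings live in separate services (for example pypistats.org) and are left out here.

```python
import json
import sys
from datetime import datetime, timezone
from urllib.error import HTTPError
from urllib.request import urlopen

# PyPI's public JSON metadata endpoint (a real API; everything below that
# interprets it is an illustrative heuristic, not an official vetting tool).
PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def vet_package(name: str, min_age_days: int = 90) -> list[str]:
    """Return warning strings for a package name suggested by a chatbot."""
    try:
        with urlopen(PYPI_JSON.format(name=name), timeout=10) as resp:
            data = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            # The package does not exist: exactly the kind of hallucinated
            # name an attacker could register with a malicious payload.
            return [f"{name}: not on PyPI; a hallucinated name an attacker could claim"]
        raise

    warnings = []

    # Creation date: the earliest upload time across all released files.
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if upload_times:
        age = datetime.now(timezone.utc) - min(upload_times)
        if age.days < min_age_days:
            warnings.append(f"{name}: first published only {age.days} days ago")

    info = data["info"]
    if not (info.get("description") or "").strip():
        warnings.append(f"{name}: no long description or README")
    if not info.get("project_urls"):
        warnings.append(f"{name}: no linked homepage or source repository")

    return warnings

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        for line in vet_package(pkg) or [f"{pkg}: no red flags from these basic checks"]:
            print(line)
```

Running the script with one or more package names prints any warnings; a brand-new, undocumented, or unlinked package deserves manual review before it is installed.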