ChatGPT, a large language model (LLM), is raising cybersecurity concerns over its potential to generate mutating, or polymorphic, malware that evades endpoint detection and response (EDR) systems. Cybersecurity researchers have demonstrated that an executable which calls the ChatGPT API at runtime can prompt the model to produce a different, mutated version of its malicious code on each call, making the resulting malware difficult to detect. Attackers rely on prompt engineering, the practice of modifying input prompts to bypass content filters and coax a desired output from ChatGPT. BlackMamba and ChattyCat are two proof-of-concept programs that demonstrate how ChatGPT can be used to create polymorphic malware for various purposes. Governments worldwide are grappling with how to regulate AI, but experts argue that the industry needs better explainability and observability to provide context into how these systems behave and to limit the harm the tools could cause.
ChatGPT’s Mutating Malware Outsmarts EDR for Undetected Attacks