Growing Cybersecurity Threat: ChatGPT Exploited by Cybercriminals, Stolen Accounts & Potential Malware

Date:

Updated: [falahcoin_post_modified_date]

Growing Cybersecurity Threat: ChatGPT Exploited by Cybercriminals, Stolen Accounts & Potential Malware

As artificial intelligence technology continues to advance, the potential for misuse by cybercriminals also grows. According to a survey conducted by BlackBerry Global Research, 74% of IT decision-makers are concerned about the threat posed by ChatGPT to cybersecurity. Disturbingly, 51% of respondents believe that there will be a successful cyberattack attributed to ChatGPT by 2023.

Recent reports have shed light on significant cybersecurity issues and risks associated with ChatGPT. One such report, published by cybersecurity company Group-IB, revealed that over 100,000 ChatGPT accounts were stolen between June 2022 and March 2023. The majority of these stolen credentials, more than 40,000, originated from the Asia-Pacific region, followed by the Middle East and Africa (24,925), Europe (16,951), Latin America (12,314), and North America (4,737).

The motivations behind cybercriminals targeting ChatGPT accounts are twofold. Firstly, they seek access to paid accounts that do not have the limitations imposed on free versions. However, the greater concern lies in account spying. ChatGPT retains detailed histories of interactions, including prompts and answers. This means that compromised accounts could potentially expose sensitive data to fraudsters.

Dmitry Shestakov, head of threat intelligence at Group-IB, highlighted the risks associated with integrating ChatGPT into enterprise operations. He explained, Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.

Another cybersecurity company, SlashNext, uncovered a concerning trend in the underground cybercriminal forums. The trade of jailbreak prompts has been on the rise, allowing attackers to bypass ChatGPT’s protective measures and create malicious content using the AI. This opens the door for cybercriminals to craft sophisticated phishing attacks, spam, and fraudulent content. The ability of ChatGPT to convincingly impersonate individuals or trusted entities increases the likelihood of deceiving unsuspecting users into divulging sensitive information or falling victim to scams.

The misuse of ChatGPT doesn’t stop at social engineering. It also poses a threat to the online spread of disinformation and fake news. Cybercriminals can use ChatGPT to generate and disseminate large volumes of misleading or harmful content, potentially fueling social unrest, political instability, and eroding public trust in reliable sources of information.

While ChatGPT has protocols in place to prevent the generation of prompts associated with malware or other illegal activities, security researchers have successfully bypassed these safeguards. One example is the creation of proof-of-concept malware named Black Mamba by cybersecurity company HYAS. Another researcher demonstrated how ChatGPT could be manipulated to bypass all rail-guards, resulting in advanced malware that went undetected by 69 antivirus engines.

In addition to these concerns, a new AI tool called WormGPT has surfaced in the Dark Web. It promises to be an alternative to ChatGPT for blackhat hackers, offering answers to prompts without ethical limitations. The developer of WormGPT remains secretive about the AI’s creation and training data. This tool raises further alarm about the potential for AI to be exploited for malicious purposes.

In response to these emerging threats, the usual security recommendations apply. Employees should receive comprehensive training to detect phishing emails and other social engineering attempts across various communication channels. It’s crucial to exercise caution when interacting with AI chatbots like ChatGPT and refrain from sharing confidential or sensitive information that could potentially be leaked.

In conclusion, as AI technology continues to advance, so do the risks associated with its misuse. The case of ChatGPT highlights the need for constant vigilance and robust cybersecurity practices. Efforts should be intensified to mitigate the potential harm caused by cybercriminals who exploit AI for their nefarious activities. By staying informed, implementing effective security measures, and fostering a cybersecurity-conscious culture, we can safeguard against the growing threat of AI-enabled cyberattacks.

[single_post_faqs]
Neha Sharma
Neha Sharma
Neha Sharma is a tech-savvy author at The Reportify who delves into the ever-evolving world of technology. With her expertise in the latest gadgets, innovations, and tech trends, Neha keeps you informed about all things tech in the Technology category. She can be reached at neha@thereportify.com for any inquiries or further information.

Share post:

Subscribe

Popular

More like this
Related

Revolutionary Small Business Exchange Network Connects Sellers and Buyers

Revolutionary SBEN connects small business sellers and buyers, transforming the way businesses are bought and sold in the U.S.

District 1 Commissioner Race Results Delayed by Recounts & Ballot Reviews, US

District 1 Commissioner Race in Orange County faces delays with recounts and ballot reviews. Find out who will come out on top in this close election.

Fed Minutes Hint at Potential Rate Cut in September amid Economic Uncertainty, US

Federal Reserve minutes suggest potential rate cut in September amid economic uncertainty. Find out more about the upcoming policy decisions.

Baltimore Orioles Host First-Ever ‘Faith Night’ with Players Sharing Testimonies, US

Experience the powerful testimonies of Baltimore Orioles players on their first-ever 'Faith Night.' Hear how their faith impacts their lives on and off the field.