Phishing Attacks Skyrocket with the Rise of Generative AI

In recent years, generative AI has rapidly transformed various aspects of everyday life. Unfortunately, along with its mainstream success, there has been a sharp increase in phishing scams utilizing this technology. A new report from cybersecurity firm SlashNext reveals that phishing emails have skyrocketed by a staggering 1,265% since the launch of ChatGPT.

Cybercriminals are taking advantage of generative AI tools like WormGPT, Dark Bart, and FraudGPT, which are circulating on the dark web. They are also finding inventive ways to exploit OpenAI’s flagship chatbot through jailbreaking. The release of ChatGPT in November 2022 triggered a significant surge in phishing attacks, a surge driven by the overall success of cybercrime during that period.

Phishing attacks typically arrive as deceptive emails, texts, or social media messages that appear to come from reputable sources. They can also direct victims to malicious websites that trick them into unknowingly authorizing transactions from their crypto wallets, draining their funds.

According to SlashNext’s report, in the last quarter of 2022 alone, an average of 31,000 phishing attacks were sent daily, reflecting a 967% surge in credential phishing. Business email compromise (BEC) attacks accounted for 68% of all phishing attacks, while 39% of mobile-based attacks were carried out through SMS phishing (smishing).

While there has been ongoing debate over the true impact of generative AI on cybercriminal activity, research shows that threat actors are using tools like ChatGPT to craft sophisticated, targeted phishing messages. Patrick Harr, CEO of SlashNext, notes that phishing attacks rely chiefly on link-based strategies to extract victims’ credentials, such as usernames and passwords. A successful phishing attack can also lead to the installation of ransomware. The infamous Colonial Pipeline attack, for example, originated from a credential phishing attack that gave cybercriminals a user’s login information.

As cybercriminals increasingly utilize generative AI to target victims, Harr recommends that cybersecurity professionals adopt an offensive approach by leveraging AI to combat AI. He emphasizes the importance of integrating AI directly into security programs to continually scan messaging channels and identify potential threats. SlashNext utilizes generative AI in their own tool sets to detect and predict the occurrence of phishing attacks, aiming to proactively protect against them.
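To illustrate the kind of message-channel scanning Harr describes, here is a deliberately simplified sketch of flagging suspicious messages. This is purely hypothetical and is not SlashNext’s actual system: the keyword lists, scoring thresholds, and function names are all invented for illustration, and production tools use trained AI models rather than hand-written rules like these:

```python
import re

# Hypothetical heuristics for illustration only; real phishing
# detectors use trained models, not hand-picked word lists.
URGENCY_WORDS = {"urgent", "verify", "suspended", "immediately", "password"}
SUSPICIOUS_TLDS = (".zip", ".xyz", ".top")


def phishing_score(message: str) -> int:
    """Return a crude risk score for a message: higher = more suspicious."""
    score = 0
    text = message.lower()
    # Urgency language is a common phishing tell.
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # Raw links raise the score; links to unusual TLDs raise it further.
    for url in re.findall(r"https?://\S+", text):
        score += 1
        if url.endswith(SUSPICIOUS_TLDS):
            score += 2
    return score


def is_suspicious(message: str, threshold: int = 3) -> bool:
    """Flag a message whose score meets the (arbitrary) threshold."""
    return phishing_score(message) >= threshold
```

A real deployment would run continuously across email, SMS, and chat channels and score messages with a model rather than a word list, but the basic shape, score each message and flag the risky ones, is the same.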

However, Harr acknowledges that relying solely on instructing ChatGPT to identify cyber threats is insufficient. To effectively tackle the evolving landscape of phishing attacks, Harr suggests developing dedicated large language model applications capable of recognizing nefarious threats.

While AI developers like OpenAI, Anthropic, and Midjourney have implemented safeguards to prevent their platforms from being misused for malicious purposes such as phishing attacks and misinformation dissemination, resourceful hackers are determined to circumvent these measures.

Recently, the RAND Corporation published a concerning report suggesting that terrorists could exploit generative AI chatbots to learn about carrying out biological attacks. Although the chatbots tested did not explicitly provide instructions for building biological weapons, the researchers found that carefully crafted prompts could coax a chatbot into outlining how such an attack might be planned.

Researchers have also discovered that prompting ChatGPT in less commonly tested languages, such as Zulu and Gaelic, can bypass its safeguards and elicit explanations of how to commit crimes such as robbing a store.

In an effort to enhance security, OpenAI has issued an open call for offensive cybersecurity professionals, known as red teamers, to help identify potential vulnerabilities in its AI models.

Harr concludes that companies must reassess their security postures and adopt generative AI-based tools to effectively detect, respond to, and prevent phishing attacks. By deploying AI technologies proactively, organizations can guard against emerging threats and stay one step ahead of cybercriminals.

Neha Sharma
Neha Sharma is a tech-savvy author at The Reportify who delves into the ever-evolving world of technology. With her expertise in the latest gadgets, innovations, and tech trends, Neha keeps you informed about all things tech in the Technology category. She can be reached at neha@thereportify.com for any inquiries or further information.
