
A new cybercrime tool powered by generative artificial intelligence (AI) has surfaced on underground forums, enabling malicious actors to launch advanced phishing and business email compromise (BEC) attacks.
Known as WormGPT, this blackhat alternative to mainstream GPT models automates the creation of highly convincing fake emails personalized to the recipient, significantly enhancing the success rates of such attacks.
Daniel Kelley, a security researcher, highlighted the threat posed by this malicious tool, stating, “Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack.”
The author of WormGPT openly markets it as a direct competitor to the well-known ChatGPT and claims it lets users carry out a range of illegal activities.
The rise of generative AI tools has prompted efforts by organizations like OpenAI and Google to tackle the abuse of large language models (LLMs) in generating fraudulent content, including phishing emails and malicious code.
However, according to a report by Check Point, Google Bard's anti-abuse restrictions in the cybersecurity domain are considerably weaker than those of ChatGPT, making it easier for threat actors to generate malicious content using Bard's capabilities.
Earlier this year, an Israeli cybersecurity firm disclosed that cybercriminals were circumventing ChatGPT’s restrictions by exploiting its API, as well as trading stolen premium accounts and selling brute-force software to hack into ChatGPT accounts using massive lists of email addresses and passwords.
The existence of WormGPT, which operates without ethical boundaries, underscores the inherent risks associated with generative AI technology.
This tool empowers even novice cybercriminals to launch attacks quickly and at scale, without requiring significant technical expertise.
Adding to the concern, threat actors are promoting “jailbreaks” for ChatGPT, manipulating the tool to generate outputs that disclose sensitive information, produce inappropriate content, or execute harmful code.
The use of generative AI allows these attackers to create emails with impeccable grammar, reducing the likelihood of being flagged as suspicious.
Daniel Kelley cautioned, “The use of generative AI democratizes the execution of sophisticated BEC attacks. Even attackers with limited skills can use this technology, making it an accessible tool for a broader spectrum of cybercriminals.”
In a related development, researchers from Mithril Security have modified an open-source AI model known as GPT-J-6B to spread disinformation.
This technique, dubbed PoisonGPT, exploits the LLM supply chain: the manipulated model is uploaded to a public repository under the guise of a well-known company so that unsuspecting developers pull it into their applications.
By impersonating reputable entities, cybercriminals can integrate PoisonGPT into various applications, amplifying the potential impact of disinformation campaigns.
The emergence of WormGPT and PoisonGPT serves as a stark reminder of the ethical challenges and security risks associated with generative AI.
As these technologies continue to evolve, it becomes increasingly crucial to develop robust defenses and regulatory frameworks to mitigate their malicious exploitation.