Criminals Get Sophisticated with ChatGPT Rival – WormGPT

While generative AI has surged in popularity in 2023, with many tools released to serve the market in a variety of ways, criminals are also taking advantage of the technology, creating malicious systems like WormGPT to perpetrate cybercrimes.

The tool was created for cybercriminals to carry out online crimes. WormGPT poses a threat to unsuspecting internet users, as it is being used to harvest personal data and scale up phishing attacks while evading detection.

The creator of the unethical chatbot presented it in March before releasing it in June. WormGPT was put up for sale on hacker forums with the sole purpose of enabling cybercrime.

Wrong intentions

The launch and instant rise of ChatGPT, an artificial intelligence (AI) chatbot by OpenAI, spawned an era in which AI-powered chatbots, built on ChatGPT-like technology or on their own models, have been released regularly.

Since January, developers have released plenty of generative AI tools that mimic ChatGPT, such as AutoGPT, HealthGPT, and ChaosGPT, prompting OpenAI to seek to trademark the term GPT.

While most of the creations born out of these large language models (LLMs) aim to solve day-to-day problems and positively impact the personal and professional lives of millions of people across the globe, sophisticated technology like this can also be used to develop harmful tools. One such tool is WormGPT, a generative AI scamming tool.

According to a blog post by cybersecurity firm SlashNext, which details how the AI tool is used in business email compromise (BEC) attacks, WormGPT has already gained popularity on dark web forums.

“This tool presents itself as a Blackhat alternative to GPT models, designed specifically for malicious activities,” SlashNext said.

“Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack,” the firm added.

The SlashNext team has spent time analyzing how the WormGPT tool works. SlashNext believes the tool was developed using the GPT-J language model and supports a wide range of features, including chat memory, unlimited character support, and code formatting capabilities.

It has also been speculated that the hacking chatbot is trained on malware-related datasets, although the actual datasets were not specified.



Generative AI bots like ChatGPT and Bard have security features that prohibit them from performing actions that might be deemed illegal. However, users have tried to bypass these restrictions through manipulation and a bit of reverse psychology, a practice commonly referred to as ‘jailbreaking.’

This was demonstrated by a user who managed to dupe ChatGPT into reciting how to make napalm using the now famous ‘Grandma’ gimmick.

But unlike ChatGPT and Bard, where you would have to be extra creative to work around the safeguards, WormGPT has no safeguards at all, as it was developed to be an ultimate hacking tool.

In an example demonstrated by Kelly, a security researcher at SlashNext, WormGPT was prompted to write a BEC email for an invoice payment request.

According to SlashNext, the results were unsettling.

“WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” excerpts of the SlashNext report showed.

In essence, according to the cybersecurity firm, WormGPT works just like ChatGPT, except it has no boundaries.

The dark side of AI

Generative AI has sparked much debate over the past eight months, driven by fears of the technology replacing human jobs or its potential to cause serious harm to humanity. One such chatbot, ChaosGPT, openly outlined its intentions to destroy humanity and establish world dominance.

SlashNext has stressed the need for greater responsibility when AI tools are being developed. Yet, WormGPT’s creator published images showing users how they can command the chatbot to create malware and get advice on “how to design dangerous assaults.”

Internet users need to be cautious about the emails they receive or the people they interact with on social media platforms.

While cybercriminals once had to craft convincing content themselves to lure businesses into paying millions of dollars on fake invoices, WormGPT and tools like it make it easy for almost anyone to join the ranks of those walking away with a fortune through BEC attacks.

As a result, the onus falls on companies to invest in training their employees to recognize and defend against BEC attacks, including those launched with AI-powered tools like WormGPT.
