
AI Boosting Fraudulent Phishing Emails, Experts Warn


Chatbots are eliminating a significant barrier against fraudulent phishing emails by correcting obvious grammatical and spelling mistakes, experts say.

Since the viral AI chatbot ChatGPT launched last year, writing human-sounding phishing and fraudulent emails has become much easier, as previously reported by MetaNews.

Also Read: AI-Based Cybersecurity Assistant Unveiled by Microsoft

Such emails are an effective trap set by cybercriminals, who use them to trick recipients into clicking on malicious links or handing over personal details such as passwords, usernames, transaction codes, and card details.

“Every hacker can now use AI that deals with all misspellings and poor grammar,” said Corey Thomas, chief executive of US cybersecurity firm Rapid7.

AI chatbots are rectifying a basic flaw in many phishing attempts: poor spelling and grammar. By correcting the errors that often trip spam filters or alert human readers, chatbots make phishing emails more effective.
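To illustrate the mechanism, here is a minimal sketch of the kind of naive spelling-based heuristic a simple spam filter might apply, and which AI-polished text can now slip past. The word list, threshold, and function names are illustrative assumptions for this example, not any vendor's actual filter:

```python
# Minimal sketch: a naive spelling-based spam heuristic of the kind
# grammar-correct AI text can now slip past. The tiny word list and
# the 20% threshold are illustrative assumptions, not a real filter.
KNOWN_WORDS = {
    "please", "verify", "your", "account", "click", "the", "link",
    "below", "to", "confirm", "password", "we", "detected", "unusual",
    "activity", "on", "urgent", "action", "required",
}

def misspelling_ratio(text: str) -> float:
    """Return the fraction of words not found in the known-word list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

def looks_like_spam(text: str, threshold: float = 0.2) -> bool:
    """Flag messages whose misspelling ratio exceeds the threshold."""
    return misspelling_ratio(text) > threshold

# A typo-ridden lure trips the heuristic; the corrected version passes.
print(looks_like_spam("Plese verfy yuor acount, clck the lnk below"))        # True
print(looks_like_spam("Please verify your account, click the link below"))  # False
```

Real filters weigh many more signals, such as sender reputation and embedded URLs, but the principle holds: once a chatbot removes the spelling tell, this class of heuristic contributes nothing.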

Half of all adults in England and Wales received phishing emails last year, according to the Office for National Statistics (ONS), with people aged 25 to 44 the most likely to be targeted.

Bypassing spam filters

Record-breaker ChatGPT reached 100 million users just two months after its launch. Although it’s been put to work in a positive way by many, OpenAI’s ubiquitous tool has certainly caught the eye of bad actors, including those who conduct phishing campaigns.

“The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case. We used to say that you could identify phishing attacks because the emails look a certain way. That no longer works,” said Thomas.

According to Darktrace, a UK-based cybersecurity firm, phishing emails are increasingly being written by bots, allowing cybercriminals to overcome language barriers and send longer messages that are harder for spam filters to detect.

While the overall volume of malicious emails detected by Darktrace's monitoring system has fallen since large language models (LLMs) like ChatGPT became widely available, the linguistic complexity of those emails has increased.

Max Heinemeyer, the company’s chief product officer, believes scammers are using advanced language models like ChatGPT to draft longer and more complex phishing emails.

“Even if somebody said, ‘don’t worry about ChatGPT, it’s going to be commercialised’, well, the genie is out of the bottle,” said Heinemeyer.

Affecting social engineering

The use of advanced language models for social engineering is having an immediate impact on the threat landscape, according to the CPO.

With the help of AI, cybercriminals can easily craft highly convincing “spear-phishing” emails and other written communications that are harder to detect.

“AI allows you to craft very believable ‘spear-phishing’ emails and other written communication with very little effort, especially compared to what you have to do before,” stated Heinemeyer.

He explained that cybercriminals can use LLMs such as ChatGPT to create tailored emails with ease, crawling a target's social media profiles and feeding the data into the model.

This allows scammers to generate convincing emails that are almost indistinguishable from those written by humans, even if the scammers themselves lack fluency in English.

Europol has recently issued an international advisory regarding the potential criminal use of large language models such as ChatGPT.

“As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provides a grim outlook,” stated Europol.

Fraud and social engineering, disinformation, and cybercrime are the three main areas of concern identified by Europol’s experts.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
