Internet scams could become far more dangerous now that fraudsters have unobstructed access to ChatGPT, Techradar reports.
The widely popular AI-powered chatbot, ChatGPT, continues to make headlines. With its ability to do everything from debugging iframe code to writing complex programs, ChatGPT has established AI as the year’s tech buzzword.
For all its popularity and engagement, however, ChatGPT has raised concerns about ethics and regulation.
A recent report published by cybersecurity researchers at Norton Labs laid out three key ways threat actors could abuse ChatGPT to make internet scams more effective: deepfake content generation, phishing campaign creation, and malware creation.
“Norton Labs is anticipating scammers are also eyeing the capabilities of large language models and testing ways to enhance their cybercrimes to make them more realistic and believable,” stated the report.
The tool’s capacity to produce “high-quality misinformation or disinformation on a large scale” could help bot farms sow discord more efficiently, enabling malicious actors to effortlessly “instil doubt and manipulate narratives in multiple languages,” according to Norton.
Highly convincing ‘misinformation’
Writing convincing business plans, strategies, and company descriptions is child’s play for ChatGPT. That same facility, however, heightens the risk of misinformation, which can shade into outright scams.
“Not only is the content generated by ChatGPT sometimes unintentionally incorrect, but a bad actor can also use these tools to intentionally create content used to harm people in some way,” stated the report.
The report reiterated that the tool’s ability to generate “high-quality misinformation or disinformation at scale could lead to mistrust and shape narratives in different languages.”
ChatGPT also makes churning out product reviews trivially easy, and the output cannot be traced back to the tool because it generates a different response each time, even from the same prompt. That makes “spotting fake reviews and shoddy products” a real challenge.
Worryingly, the tool might also be used for bullying.
“Using these tools in harassment campaigns on social media to silence or bully people is also a possible outcome that would have a chilling effect on speech,” the report notes.
ChatGPT in phishing campaigns
ChatGPT is particularly good at generating human-sounding text in different languages, leaving readers none the wiser as to whether the text was produced by AI or a human. Even OpenAI, the developer of ChatGPT, cannot tell whether a given text was machine-written, acknowledging that “it is impossible to reliably detect all AI-written text.”
The prospect of ChatGPT being used in phishing campaigns is a real one.
“Malicious actors can use ChatGPT to craft phishing emails or social media posts that appear to be from legitimate sources, making it more difficult to detect and defend against these types of threats,” stated the report.
As its popularity grows, a likely corollary is an increase in the number of “phishing campaigns and their sophistication.”
The report suggested that “malicious actors can feed ChatGPT with real life examples of non-malicious messages from the companies they want to impersonate and order the AI to create new ones based on the same style with malicious intent.”
Such campaigns could prove highly successful in deceiving individuals into disclosing personal information or sending money to criminal entities. Norton Labs advised consumers to be cautious when “clicking on links or providing personal information.”
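That advice can be made concrete. Below is a minimal, illustrative Python sketch, not tooling from the Norton report, that pulls every link out of an HTML email body and prints the domain each one actually points to; a mismatch between a link’s label and its real destination is a classic phishing tell. The sample email body and the “paypa1-secure” domain are invented for illustration.

```python
# Illustrative sketch: list each link in an HTML email body alongside
# the domain it really targets, using only the standard library.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkExtractor(HTMLParser):
    """Collect (anchor_text, target_domain) pairs from HTML."""

    def __init__(self):
        super().__init__()
        self.links = []            # list of (label, domain) tuples
        self._current_href = None  # href of the <a> tag being read, if any
        self._current_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href:
            domain = urlparse(self._current_href).netloc
            self.links.append(("".join(self._current_text).strip(), domain))
            self._current_href = None


# Hypothetical phishing email body, invented for this example.
email_html = ('<p>Your account is locked. '
              '<a href="http://paypa1-secure.example.net/login">'
              'Log in to PayPal</a></p>')

parser = LinkExtractor()
parser.feed(email_html)
for label, domain in parser.links:
    print(f'link labelled "{label}" actually points to: {domain}')
```

A real mail filter would go further, checking the extracted domains against the sender’s legitimate domains and known-bad lists, but the label-versus-destination check alone exposes a large share of crude phishing links.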
ChatGPT can create malware
Generating code and working across different programming languages are simply part and parcel of ChatGPT’s services. So it’s little wonder fraudsters are using it to generate malware.
“With the right prompt, novice malware authors can describe what they want to do and get working code snippets,” according to the report. This poses a serious threat of malware attacks sufficiently advanced to wreak havoc.
“One example is to generate code to detect when a bitcoin wallet address is copied to the clipboard so that it can be replaced with a malicious address controlled by the malware author,” explained the report.
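To see the other side of that coin, here is a minimal defensive sketch, illustrative only and not code from the report, of how such “clipper” behaviour can be spotted: it polls the system clipboard and warns if a Bitcoin-address-like value changes while it sits there. It assumes the third-party pyperclip library; the address regex and polling interval are simplifications.

```python
# Illustrative defensive sketch: watch for the behaviour Norton describes,
# i.e. a copied Bitcoin address being silently swapped on the clipboard.
# Assumes the third-party pyperclip library (pip install pyperclip).
import re
import time

import pyperclip

# Legacy (1.../3...) and bech32 (bc1...) address shapes, approximately.
BTC_ADDRESS = re.compile(
    r"^(?:[13][a-km-zA-HJ-NP-Z1-9]{25,34}|bc1[a-z0-9]{25,62})$"
)


def watch_clipboard(poll_seconds: float = 0.5) -> None:
    """Warn when an address-like clipboard value changes to a different
    address, the telltale signature of clipper malware."""
    last_seen = None
    while True:
        current = pyperclip.paste().strip()
        if BTC_ADDRESS.match(current):
            if last_seen and current != last_seen:
                print("WARNING: clipboard Bitcoin address changed "
                      f"from {last_seen!r} to {current!r} "
                      "-- possible clipper malware")
            last_seen = current
        time.sleep(poll_seconds)


if __name__ == "__main__":
    watch_clipboard()
```

A production tool would need to distinguish deliberate copies from silent swaps, for instance by hooking clipboard events rather than polling, but the swap-in-place signal is exactly what the malware Norton describes produces.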
As a result, the availability of such chatbots is likely to drive an increase in the sophistication of malware.