‘Scammy’ Bots Flood Social Media, Over a Thousand On Twitter

ChatGPT has spurred the development of AI tools and bots that bad actors are now using to scam unsuspecting users on social media platforms like X, formerly Twitter.

Research by scientists at Indiana University found a network of more than 1,000 spam bots being used on X to promote fake cryptocurrencies and NFTs.

Convincing text

According to an article by Business Insider, Kai-Cheng Yang, a computational social science researcher, and Filippo Menczer, a computer science professor, discovered botnets built with OpenAI’s ChatGPT, which can generate text that sounds genuine and human-like.

One network discovered by the duo had more than 1,000 active bots on X that scam people into investing in fraudulent crypto and NFTs.

The article reveals that the bots post and reply to one another to build convincing-looking profiles, and sometimes post selfies stolen from real people’s accounts. ChatGPT’s generative AI capabilities have made scam bots far better at producing convincing misinformation; in previous years they were easy to spot thanks to their terrible grammar, among other tells.

The rise of social media and AI tools has also made it easier for malicious actors to monetize misleading content on these platforms, according to Menczer.

He said the new AI tools “further lower the cost to generate false but credible content at scale, defeating the already weak moderation defences of social media platforms.”

The evolution of social media and upgrades to ChatGPT have helped scammers not only automate their processes but also make their bots more convincingly human.

Too much fake information

The proliferation of generative AI tools has raised concerns about an increase in misinformation and deepfakes, keeping regulators on their toes as they try to tame the technology while promoting innovation.

The Indiana University researchers also expressed concern over rising misinformation on the internet fueled by advanced AI tools.

Social media bots that spread false information, for instance about COVID-19, existed before the generative AI explosion, but they were easy to detect because of their “robotic behavior and unconvincing fake personas.” That has changed.

“The advancement of AI tools will distort the idea of online information permanently,” Yang told Insider.

While social media platforms are the main targets for botnets and the spread of fake information, entire websites built on OpenAI’s GPT technology are also being used to spread false news.

NewsGuard, a company that rates the credibility of news and information websites, has identified 437 AI-generated websites spreading misinformation since April. These sites operate with little or no human oversight and span many languages: English, Arabic, Chinese, Czech, Dutch, French, Indonesian, Italian, Korean, Portuguese, Spanish, Tagalog, Thai, and Turkish.

There is more on the way

The EU’s law enforcement agency Europol forecasts that 90% of internet content will be generated by AI by 2026.

Georgia Institute of Technology computer science professor Wei Xu also told Insider that AI-generated content will become harder to detect as more unscrupulous individuals exploit it to spread misinformation.

“It’s similar to plastic bottled water. We know it is a disaster for the environment, but as long as it is cheap without a big penalty, it will widely exist,” said Xu.

The professor’s comments underscored the need for regulations that deter the proliferation of fake content. The EU has asked online platforms to watermark AI-generated content to allow users to make informed decisions.

In the US, companies including Google, Microsoft, Anthropic, and OpenAI have been asked to tag content so users can tell human-made and AI-made material apart.

Spotting the spam content

While these misinformation bots pose a growing risk on the internet, there are still ways to detect when some of them are at work.

According to Yang and Menczer, ChatGPT often replies that it cannot provide certain information because of its internal restrictions and policies.

A common response is “I’m sorry, but I cannot comply with this request.” According to the researchers, these refusals often end up posted verbatim by the bot accounts, which is how the spam networks were detected.
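To illustrate the heuristic, here is a minimal Python sketch that flags posts containing known LLM refusal phrases. Only the first phrase in the list is the one quoted by the researchers; the other patterns, and the function and variable names, are hypothetical additions for the example rather than part of their published method.

```python
import re

# Telltale phrases that LLM-driven bot accounts sometimes post verbatim
# when the underlying model refuses a prompt. The first entry is the one
# the researchers cite; the rest are assumed variants for illustration.
REFUSAL_PATTERNS = [
    r"i'?m sorry,? but i cannot comply with this request",
    r"as an ai language model",
    r"i cannot provide that information",
]

def looks_like_llm_refusal(post_text: str) -> bool:
    """Return True if a post contains a known LLM refusal phrase."""
    text = post_text.lower()
    return any(re.search(pattern, text) for pattern in REFUSAL_PATTERNS)

# Usage: filter a batch of posts down to the self-revealing ones.
posts = [
    "Huge gains ahead! Buy now before it moons!",
    "I'm sorry, but I cannot comply with this request.",
]
flagged = [post for post in posts if looks_like_llm_refusal(post)]
print(flagged)  # -> ["I'm sorry, but I cannot comply with this request."]
```

A phrase filter like this only catches bots careless enough to post refusals verbatim, so a real detection pipeline would combine it with other signals, such as how the accounts interact with one another.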

But as generative AI tools get better at mimicking human responses, detecting botnets this way may become more difficult in the future.

As tech companies and governments rush to create regulations and other checks on AI, Yang fears that bad actors will always ignore the guardrails.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
