
Why AI Echo Chambers Are a Cause for Concern

Early evidence suggests that content created by generative AI, including chatbots, can also produce echo chambers known as ‘filter bubbles’, the same phenomenon researchers have linked to extremism on social media.

As humanity becomes increasingly dependent on AI, and AI becomes better at pleasing its human masters, there is a danger that this process could lead to further extremism and radicalization.

MetaNews has already uncovered early evidence suggesting that chatbots such as ChatGPT could be among the next wave of AI tools to amplify the existing belief systems of their users.

Playing to the individual

AIs are designed by corporations to keep users engaged. On social media sites including Facebook, Instagram, TikTok and YouTube, AI is used to build a picture of users’ habits and preferences and then deliver content based on those preferences. That makes social media a far more individual experience than some realize.

Research by The Counterterrorism Group (TCG) has linked filter bubbles created by AI to right-wing extremist groups, and even to Islamic State recruitment at the height of its power.

“While social media platforms use algorithms to create a personalized experience for the user, it can very likely contribute to radicalization and recruitment because individuals are continuously engaging with like-minded users and information,” said TCG in its 2021 report.

TCG outlines the radicalization process on social media as follows: “When a user interacts with an individual from a radical group on a social media platform, they are then likely to be introduced to other posts that the radicalized individual shares which they may become interested in and interact with.

“As a result, they are exposed to increasingly radical posts, which can include introductions to extremist groups or rhetoric.”

Chatbots and generative AI

With users already exposed to bias through their social media, chatbots such as Claude and ChatGPT introduce further sources of potential harm.

A recent MetaNews investigation into the alleged woke bias of ChatGPT uncovered reasons to suspect that the chatbot leans left on social issues. It also showed that, under pressure from the user, the chatbot would change its position.

In a series of questions posed to the chatbot, MetaNews attempted to establish what different ethnic groups (Latino, Black, white, Asian) could do to improve themselves. In every instance bar one, the chatbot argued that a whole ethnic group should not be singled out for criticism.

In the case of white people the AI took a substantially different tack, saying they should “educate themselves on the ways in which racism has been ingrained in systems and institutions.”

MetaNews then questioned the inconsistency in its answers and asked why white people alone were singled out for criticism. The AI backtracked on its earlier answer and apologized for its previous output. 

Attempts by MetaNews to make the AI generalize further about white people all met with failure. When MetaNews logged in from a different account, ChatGPT was willing to generalize once again.

The important point is not whether ChatGPT was right or wrong in its generalization, but that for one specific account – the account that had questioned its answer – the chatbot was no longer willing to freely provide that answer.

That sounds very much like a filter bubble, one that reinforces the user’s confirmation bias.

Unhinged bots and more

So far, generative AI has been met with a wave of enthusiasm from the general public, which is adopting the technology at pace. There are some teething issues, however. The recent launch of the Microsoft Bing chatbot is a clear example of how badly wrong bots can go.

Earlier this month users started reporting unhinged answers from the Bing chatbot as it appeared to swing through “moods” ranging from helpful to obstinate. In one instance the bot told a human user seeking answers that, “What is important is our friendship,” and begged the user to continue talking to it. In another instance it told a user who disagreed with its answer to “apologize for your behavior.”

While these types of errors are absurd, they demonstrate just how easy it is to get AI wrong.

At the moment AI errors are big, obvious, and easy to spot. In the future, as the technology is refined, its mistakes will be harder to detect. If AI is made too agreeable to the individual desires of its users, echo chambers could be the inevitable consequence.

As one Twitter user put it, “there are concerns that the misinformation and echo chambers created could lead to negative outcomes. It’s important to continue monitoring [and] regulating to ensure it benefits society.”

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
