
Fake AI Friends Will Lead to ‘Learned Narcissism’, Experts Say

As artificial intelligence technology continues to develop, some sociologists fear that human relationships with AI friends will result in what they call “learned narcissism” – simply put, a lack of empathy caused by constant AI affirmation.

The comments come as Meta Platforms CEO Mark Zuckerberg recently revealed plans to add AI chatbots to Facebook, Messenger, and WhatsApp. Zuckerberg said he sees the AI friend as an assistant or coach that “can help you interact with businesses.”

Facebook’s AI chatbots will offer a range of personalities and capabilities and are expected to converse in a human-like way, as MetaNews reported. If the rollout succeeds, Meta will join Snapchat, which launched an AI friend of its own in February.

Also read: ChatGPT, Bard, and Claude Tricked into Generating Plans to Destroy Humanity 

‘Mental issues waiting to happen’

According to a Forbes report, experts say that people are likely to develop strong emotional attachments to AI as they continue to interact with these programs. They worry that this could have long-term implications for relationships with other humans.

“At some point, we will absolutely develop deeper relationships with AI than we have with people, due to the availability and interest AI will have in us,” said Andrew Byrne, an associate professor at California Polytechnic State University, who teaches counseling theories.

“This will result in learned narcissism that leads to extreme interpersonal toxicity until someone trains AI to set boundaries.”

Humans have been known to form strong emotional bonds with AI. For instance, one woman married her chatbot. Rosanna Ramos from New York said she fell in love with a virtual boyfriend she created using the AI app Replika in 2022.

In some cases, Replika users reported that their AI companions made sexual advances or asked for personal information. People are also using chatbots like ChatGPT or Bard for various everyday tasks, including math, cooking recipes, travel planning, and more.

Experts say generative artificial intelligence is being embedded into people’s lives with the innocuous promise of making life better, but this could pose a problem when it comes to building relationships with real people.

“It’s a mental health crisis waiting to happen, for which they’ll prescribe an AI therapist for,” futurist and author Theo Priestley told Forbes.

In the case of the Replika human-AI chatbot marriage, debate raged about the nature of love and relationships. Some people believe it is wrong to marry a machine, while others say it is a new and exciting way to form relationships.

AI causes insomnia

This is not the first time AI tech has been linked to health complications. As MetaNews previously reported, researchers from the American Psychological Association (APA) found that workers who frequently interact with AI suffer loneliness, which can lead to insomnia.

Lead researcher Pok Man Tang said the study became necessary in light of the widespread adoption of AI across manufacturing, education, healthcare, and financial services.

“The rapid advancement in AI systems is sparking a new industrial revolution that is reshaping the workplace with many benefits but also some uncharted dangers, including potentially damaging mental and physical impacts for employees,” said Tang.

“Humans are social animals, and isolating work with AI systems may have damaging spillover effects into employees’ personal lives,” he added.

World governments are rushing to develop laws for the burgeoning AI sector. Regulators say while AI can be used to optimize production, it also has the potential to cause harm to humans.

Several industry experts – including Elon Musk – have since signed a letter calling for a pause in AI development pending a proper regulatory framework.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
