ChatGPT Improves Mental Health Outcomes, But There’s a Catch

Robot therapists

Mental health organization Koko successfully leveraged ChatGPT to improve its online support, but when users learned that a chatbot was co-authoring responses, the perceived benefits vanished.

The reception of the experiment on social media has been mixed, with some people fascinated by the potential of AI in the field and others strongly opposed to the intrusion of artificial intelligence.

No shortcuts to mental wellbeing

A recent experiment by Koko, which supplemented human responses with ChatGPT, at first appeared to offer significant benefits to its users.

Koko is a non-profit mental health organization offering support and intervention to anyone struggling with mental wellness, but particularly adolescents. As part of its mission to make mental health accessible to anyone, the company augmented its volunteer resources with artificial intelligence.

According to Koko founder Rob Morris, the trial at first proved successful: AI-assisted responses were better received and helped to cut wait times. Sadly, the benefits did not last.

“Once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty,” Morris said on Twitter last week. “Machines don’t have lived, human experience so when they say ‘that sounds hard’ or ‘I understand’, it sounds inauthentic.”

Morris went on to conclude that chatbots “aren’t taking time out of their day to think about you. A chatbot response that’s generated in 3 seconds, no matter how elegant, feels cheap somehow.”

The integration of ChatGPT into Koko is part of a longstanding partnership dating back to June 2020, with ChatGPT helping human volunteers answer more than 30,000 messages.

A mixed response

The idea of AI-assisted mental health intervention has been met with widespread fascination, and a small but palpable degree of horror, on social media.

A number of people took umbrage at Morris’ statement that an AI “sounds inauthentic” when sympathizing with humans.

As Twitter user ActuallyStryder pointed out, “It doesn’t ‘sound’ inauthentic, it IS inauthentic. The entirety of the conversation is one of optimized mimicry.”

Similar sentiments were widely shared. Others expressed shock or disgust and questioned the ethics of such an experiment in the first place.

Not everyone was quite so appalled, however. HyaenaMom compared the authenticity favorably to mental health provision under the UK’s National Health Service (NHS): “It felt inauthentic when coming from my NHS mental health practitioners as well… I’ll take GPT over that anytime.”

As for the fact that chatbots don’t have lived experience, she added, “therapists and especially psychiatrists and psychologists might not have lived experience either.”

Craving any company at all

The immediate response to Koko’s experiment seems to suggest that humanity is not quite ready for artificial empathy, but could that change over time?

As Morris himself asks, “Can machines overcome this [problem]? Probably.”

Morris continues, “Maybe we’re so desperate to be heard, to have something actually pay attention to us without being distracted, without looking at a phone or checking Slack or email or Twitter — maybe we long for that so deeply, we’ll convince ourselves that the machines actually care about us.”

Clarifications cause further confusion

Morris has responded to the backlash from Koko’s recent experiment, seeking to quell the strong criticism the organization has faced from some quarters.

Sadly, the clarification only seems to have caused further confusion.

Morris now states that all users were aware that the responses were crafted with the help of an artificial intelligence, and that no one seeking help ever chatted directly with an AI.

“This feature was opt-in,” said Morris, before muddying the waters once more by adding, “Everyone knew about the feature when it was live for a few days.”

Whether users knew about the feature from the start or not, the fallout from the experiment proves that there are tangible pitfalls to avoid when integrating AI technology into existing services.

RELATED TOPICS: AI, chatbot, ChatGPT, Koko, mental health
Image credits: Shutterstock, CC images, Midjourney, Unsplash.
