
January 9, 2023

ChatGPT Improves Mental Health Outcomes, But There’s a Catch


Mental health organization Koko successfully leveraged ChatGPT to improve its online support, but when users learned that a chatbot was co-authoring responses, the perceived benefits vanished.

The reception of the experiment on social media has been mixed, with some people fascinated by the potential of AI in the field, and others strongly opposed to the intrusion of artificial intelligence.

No shortcuts to mental wellbeing

A recent experiment by Koko, which supplemented human responses with ChatGPT, at first appeared to offer significant benefits to its users.

Koko is a non-profit mental health organization offering support and intervention to anyone struggling with mental wellness, but particularly adolescents. As part of its mission to make mental health care accessible to everyone, the company augmented its volunteer resources with artificial intelligence.

According to Koko founder Rob Morris, the trial at first proved successful: AI-assisted responses were better received and helped to cut wait times. Sadly, the benefits did not last.

“Once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty,” Morris said on Twitter last week. “Machines don’t have lived, human experience so when they say ‘that sounds hard’ or ‘I understand’, it sounds inauthentic.”

Morris went on to conclude that chatbots “aren’t taking time out of their day to think about you. A chatbot response that’s generated in 3 seconds, no matter how elegant, feels cheap somehow.”

The integration of AI into Koko is part of a longstanding partnership with OpenAI dating back to June 2020, with AI assisting human volunteers to answer over 30,000 messages.

A mixed response

The idea of AI-assisted mental health intervention has been met with widespread fascination, and a small but palpable degree of horror, on social media.

A number of people took umbrage at Morris’ statement that an AI “sounds inauthentic” when sympathizing with humans.

As Twitter user ActuallyStryder pointed out, “It doesn’t ‘sound’ inauthentic, it IS inauthentic. The entirety of the conversation is one of optimized mimicry.”

The same or similar sentiments were widely shared. Others expressed shock or disgust, and questioned the ethics of such an experiment in the first place.

Not everyone was quite so appalled, however. HyaenaMom compared the authenticity favorably with mental health provision under the UK’s National Health Service (NHS): “It felt inauthentic when coming from my NHS mental health practitioners as well… I’ll take GPT over that anytime.”

As for the fact that chatbots don’t have lived experience, she added: “therapists and especially psychiatrists and psychologists might not have lived experience either.”

Craving any company at all

The immediate response to Koko’s experiment seems to suggest that humanity is not quite ready for artificial empathy, but could that change over time?

As Morris himself asks, “Can machines overcome this [problem]? Probably.”

Morris continues, “Maybe we’re so desperate to be heard, to have something actually pay attention to us without being distracted, without looking at a phone or checking slack or email or twitter — maybe we long for that so deeply, we’ll convince ourselves that the machines actually care about us.”

Clarifications cause further confusion

Morris has responded to the backlash from Koko’s recent experiment, seeking to quell the strong criticism the organization has faced from some quarters.

Sadly, the clarification seems only to have caused further confusion.

Morris now states that all users were aware that the responses were crafted with the help of an artificial intelligence, and that no one seeking help ever chatted directly with an AI.

“This feature was opt-in,” said Morris, before muddying the waters once more by adding: “Everyone knew about the feature when it was live for a few days.”

Whether users knew about the feature from the start or not, the fallout from the experiment proves that there are tangible pitfalls to avoid when integrating AI technology into existing services.

Image credits: Shutterstock, CC images, Midjourney.


Judge Used ChatGPT to Make a Court Decision


ChatGPT, an artificially intelligent chatbot from OpenAI, recently set a world record by reaching 100 million users in just two months after its November launch. OpenAI’s brainchild shook the world immediately after its launch, reaching 1 million users in five days and never looking back.

Also Read: Can AI Be Our Lawyer? ‘Robot Lawyer’ to Test That in US Court

To date, the chatbot has faced bans, restrictions, and countless plaudits, while continuously expanding into new sectors. Recently, a Colombian judge used ChatGPT to make a court ruling, Vice reported.

Judge Juan Manuel Padilla Garcia, of the First Circuit Court in Cartagena, recently made a groundbreaking decision by incorporating the AI tool ChatGPT into his ruling. According to a court document dated January 30, 2023, he asked the AI legal questions related to the case and used its responses to inform his decision. It is a historic moment: the first time a judge has used the power of AI to make a ruling.

“The arguments for this decision will be determined in line with the use of artificial intelligence (AI). Accordingly, we entered parts of the legal questions posed in these proceedings,” Garcia wrote in the decision, translated from Spanish.

Judge Garcia explained that the purpose of including AI-produced text in his decision was not to replace his judgment, but rather to optimize the time spent drafting judgments by corroborating the information provided by the AI.

Judge Included Full Response in Decision

The case was about a dispute between a health insurance company and an autistic child, concerning whether the child should be granted coverage for medical treatment.

According to the decision, the legal questions put to ChatGPT were “Is an autistic minor exonerated from paying fees for their therapies?” and “Has the jurisprudence of the constitutional court made favorable decisions in similar cases?” Judge Garcia included the entire ChatGPT response, as well as his own insights into relevant legal precedents.

The judge used the AI to “extend the arguments of the adopted decision.” He detailed the exchanges with the AI, then adopted its responses, alongside his own legal arguments, as grounds for the decision.

AI Judge before AI Lawyer?

This is not the only case of AI in the courtroom: an AI-powered “robot lawyer” is set to defend its first client in a US traffic court this month, as artificial intelligence continues to disrupt various industries.

Joshua Browder, founder and CEO of consumer-empowerment start-up DoNotPay, is sending a defendant into court armed with a recording device and earpieces. The company will feed audio of the court arguments into an AI that will in turn provide responses to the defendant.

In this way, the use of, and experimentation with, AI chatbots, particularly ChatGPT, is expanding widely. Tech media outlet CNET, for instance, first allowed, then banned, ChatGPT-written content because of the misinformation and errors that occurred.

Meanwhile, the popular publisher Medium has decided to allow articles generated by the tool, but says they “should be declared as AI generated.” Many educational institutions have banned ChatGPT after discovering that it had become the perfect cheating partner for students’ schoolwork.


Unlike Bitcoin, ChatGPT Creator Mira Murati Wants AI Regulated

Mira Murati, OpenAI.

OpenAI chief technology officer Mira Murati has called for regulation of the company’s wildly popular chatbot ChatGPT and other generative AI tools to prevent companies from deploying the tech irresponsibly.

Murati’s comments contrast sharply with those of early Bitcoin visionaries, who emphasized freedom, autonomy, and democracy. But some newer entrants have welcomed regulation as a solution to the trust problems that have plagued the cryptocurrency industry.

Also read: Google’s AI Bard to rival ChatGPT

“It’s important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible,” Murati said in a recent interview with Time Magazine.

“But we’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies – definitely regulators and governments and everyone else,” she added.

Growing AI impact

ChatGPT is an AI-powered tool that generates human-like responses on almost any topic. It can write complex essays, poetry, and code, and it has even passed an MBA exam at the Wharton School of Business.

The software has set the Internet on fire since OpenAI launched the chatbot in November 2022. In January, ChatGPT reached 100 million active users, making it the fastest-growing web application in history.

When Mira Murati was asked whether it was too early for policymakers and regulators to get involved, over fears that government involvement could slow innovation, she said:

“It’s not too early. It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”

Murati also spoke about the dangers of artificial intelligence (AI) and how OpenAI was responding to those threats.

“[AI] can be misused, or it can be used by bad actors. So, then there are questions about how you govern the use of this technology globally. How do you govern the use of AI in a way that’s aligned with human values?” she stated.

“This is a unique moment in time where we do have agency in how it shapes society. And it goes both ways: the technology shapes us and we shape it. There are a lot of hard problems to figure out… And it’s important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities.”

ChatGPT surpasses Bitcoin in Google search

Searches for ChatGPT on Google have risen sharply over the last few months as interest in artificial intelligence continues to gather pace.

According to Google Trends, ChatGPT surpassed Bitcoin among popular search terms for most of January. Global Google searches for ‘ChatGPT’ peaked at a popularity score of 100 as of Feb. 2, compared to 45 for Bitcoin.

This is significant. At the end of November, when ChatGPT launched, the tool had a score of under 1. At the time, Bitcoin’s popularity score was 94. Search terms such as “crypto” and “Bitcoin” have declined in recent months, mainly due to the current bear market.

As shown in the graph below, searches for ChatGPT are more than double those for Bitcoin, even as the BTC price surged to more than $23,800 in recent days.

[Graph: Google Trends search interest, ChatGPT vs. Bitcoin]

However, Murati’s calls for AI regulation stand in stark contrast with Bitcoin’s foundational principles of privacy and autonomy. Bitcoin was conceived as an anti-authority invention in which unmediated business is done peer-to-peer, away from regulatory oversight.

Has regulation helped curb illicit Bitcoin activity?

While the lack of internal controls, which leaves users to exercise their own discretion, has been exploited by those with criminal motives, things are beginning to change. Illicit Bitcoin activity has dropped sharply over the past decade and now accounts for under 1% of BTC usage.

Crypto analyst PlanB posted a chart on Twitter showing how crime-related use of Bitcoin fell between 2012 and 2020. “Less than 1% of Bitcoin usage is associated with ‘crime,’” said the pseudonymous Dutch investor, inventor of the stock-to-flow (S2F) model.

The chart from the Chainalysis Crime Report 2022 shows that illicit activity involving Bitcoin peaked at about 7% in 2012. It fell sharply to 1% the following year after the shutdown of Ross Ulbricht’s Silk Road darknet marketplace (DNM).

Bitcoin scams virtually ground to a halt in 2017 and 2018 following the closure of the AlphaBay DNM, before picking up somewhat in 2019, to the current levels of under 1%, thanks to the $2.25 billion PlusToken Ponzi scheme.

Observers say the decline in illicit Bitcoin activity may be the result of regulation, which has helped to bring transparency and credibility to the Bitcoin network. Bitcoin may have resisted regulation earlier in its life, but governments eventually had their say.

Regulators tightened their grip on cryptocurrency following the multi-billion-dollar collapse of the Terra blockchain in May 2022. More recently, the spectacular collapse of the FTX exchange has made regulation a major focal area for governments around the world.

Across the world, government agencies are targeting crypto investors not only with taxes but also with mandatory registration and full disclosure rules. State regulation appears to be the price the crypto community will have to pay for assimilation into the mainstream economy.

For OpenAI CTO Mira Murati, embracing governmental regulation now, rather than later, may be key to preventing future failures that negatively impact the public.



CNET’s AI Controversy Deepens


After publishing a series of disastrous AI-written articles, CNET and its parent company Red Ventures would have wished to avoid any further controversy, but they have singularly failed in that respect.

Last month, MetaNews reported that from November 2022 to January 2023, CNET published a series of financial advice articles written by artificial intelligence (AI). A later audit revealed that the bulk of the articles created by CNET’s proprietary AI contained factual errors, plagiarism, or in some cases both.

It is likely that CNET readers seeking information over that period received, and acted on, significantly wrong advice. CNET claims to have learned lessons from the affair, which it retroactively labeled a ‘test’.

Oops, AI did it again

The parent company of CNET is Red Ventures, which also owns numerous other publications including Bankrate and CreditCards.com. Red Ventures monetizes these sites through affiliate advertising programs.

CNET apparently stopped publishing AI content on January 20, but editors at Bankrate failed to stop the practice until the end of the month. On January 31, Bankrate published a staggeringly irresponsible article on the 5/1 adjustable-rate mortgage (ARM).

First, Bankrate incorrectly stated that a 5/1 ARM is a 30-year mortgage, when this is not always the case. That was a comparatively small error compared to what followed. The article went on to argue that the “benefit of a 5/1 ARM is more affordable monthly payments compared with a 30-year fixed mortgage.”

What the AI-written article neglected to mention is that these low rates apply only for the first five years, and that the interest rate can increase significantly every year after that. So while borrowers may get a good deal in the short term, what they end up with may be an increasingly bad deal in the longer term.
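
To see the arithmetic the article glossed over, here is a minimal sketch in Python using the standard loan amortization formula; the loan amount and interest rates are hypothetical assumptions chosen for illustration, not figures from the Bankrate piece.

def monthly_payment(principal, annual_rate, years=30):
    """Standard amortization: M = P*r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical figures for illustration only (not from the article).
arm_intro = monthly_payment(300_000, 0.055)  # assumed 5.5% ARM intro rate
fixed_30 = monthly_payment(300_000, 0.065)   # assumed 6.5% 30-year fixed rate

print(f"5/1 ARM intro payment: ${arm_intro:,.2f}/month")  # lower in years 1-5
print(f"30-year fixed payment: ${fixed_30:,.2f}/month")

# What the article omitted: from year six the ARM's rate resets annually,
# so the recalculated payment can climb well past the fixed alternative.

With these assumed numbers the ARM’s introductory payment comes out roughly $190 a month cheaper, which is precisely the comparison the article leaned on; the trouble begins when the rate resets.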

What the article said next was perhaps even more criminal: “The lower payment allows you to take on a bigger mortgage and get a larger or better-located house.”

By most objective measures, that is bad financial advice, yet the affiliate links pointing to mortgage companies make no mention of the drawbacks.

Staff are angry and confused

The mood at Red Ventures and its various publications appears to be one of anger and confusion.

What artificial intelligence has been doing at CNET, and for how long, is a major topic of contention.

According to sources within the firm, a tool called Wordsmith, nicknamed “Morgotron,” has been writing mortgage-related stories at the publication for at least a year and a half.

At this stage, the extent of AI-written content appears to be the subject of a cover-up, or at least of significant obfuscation. CNET made no mention of Morgotron or its long-standing use of AI when it came clean about its ‘test’ on January 25.

One of the biggest issues for staff appears to be checking the work of these machines. One staff member currently employed by Red Ventures explained the difficulties of editing AI-written pieces.

“A human freelancer might have a typo here or there, or maybe a misconception about APR versus APY,” the anonymous writer told Futurism last week. “But an article by an AI can be total, authoritative-sounding gibberish. The poor editor in charge of fact-checking whatever the Machine produces isn’t looking for a needle in a haystack; they’re faced with a stack of needles, many of which look remarkably like hay.”

That, however, may be a moot point. Whatever happens next, it seems that CNET and Red Ventures are determined to continue with their experiment. There is, of course, money at stake.



Copyright © 1997 – 2023 MetaNews All Rights Reserved
