Can ChatGPT Be Sued for Defamation?


An Australian mayor has threatened to sue OpenAI for defamation after its artificial intelligence (AI) chatbot, ChatGPT, falsely claimed he had served a prison term for bribery.

Brian Hood, mayor of Hepburn Shire in the state of Victoria, first got wind of the claim when members of the public told him ChatGPT had named him as the guilty party in a bribery scandal involving a subsidiary of the Reserve Bank of Australia.

In fact, Hood was the whistleblower in the case, which dates to the early 2000s: while working for the subsidiary, Note Printing Australia, he informed authorities about bribes paid to foreign officials to win a banknote-printing contract.

Hood’s lawyers noted he was never charged with a crime. Through his legal team, the mayor has sent a letter to OpenAI giving the company 28 days to fix the error or face legal action.

The letter has raised the question of whether a defamation lawsuit can be brought over an AI tool’s output.

Legal views

Because defamation law varies across jurisdictions, there is no consensus on how such a case would play out. But commentators agree it would be a landmark case.

James Naughton, a partner at the law firm representing Hood, said:

“It would potentially be a landmark moment in the sense that it’s applying this defamation law to a new area of artificial intelligence and publication in the IT space.”

Speaking to Reuters, he added that because Hood is an elected official, his reputation is central to his role: most people learn about him through publicly available information.

“So it makes a difference to him if people in his community are accessing this material.”

Prominent American legal scholars had earlier weighed in on the possibility of suing over AI-generated libel after ChatGPT falsely implicated an editorial cartoonist in a feud with a rival cartoonist.

According to Harvard Law School’s Laurence Tribe, it is possible to sue over the AI tool’s output, as the law does not make personhood a prerequisite for libel liability.

But RonNell Andersen Jones of the University of Utah disagreed with this view. Jones said:

“It is harder to conceptualize this within our defamation-law framework, which presupposes an entity with a state of mind on the other end of the communication.”

Jones added that a claim brought under a product liability theory might fare better than one for defamation. Proving defamation might require showing that the defendant knew the statement was false, or made it with reckless disregard for whether it was true, the "actual malice" standard.

Yale Law School professor Robert Post also noted that the problem might not be the generation of false information itself, but whether ChatGPT users actually publicized it, something that could be hard for a claimant to prove.

Possible defense

OpenAI has yet to comment on the matter. But the AI developer might simply rely on its terms of use, which state plainly that the tool may produce incorrect output.

Others believe Section 230 of the Communications Decency Act, which shields website operators from liability for third-party content, might apply here.

But there is also doubt over whether it would, given that the law’s sponsors, Sen. Ron Wyden and former Rep. Chris Cox, believe it should not cover generative AI. Courts often weigh legislative intent when applying a statute.

What does AI think?

Naturally, AI-powered chatbots themselves have an opinion on the subject. Kainene, a chatbot built on Telegram, noted that AI models don’t hold personal opinions; if they generate words considered defamatory, it is a product of the data they were trained on.

So, any legal action would be against programmers and trainers, not the AI model itself – at least for now. Kainene added:

“Generally, the responsibility for any wrongdoing by AI systems falls on their creators, developers, or operators rather than the AI systems themselves.”

Image credits: Shutterstock, CC images, Midjourney, Unsplash.