US Regulator Probes OpenAI’s ChatGPT for Spreading False Information

The U.S. Federal Trade Commission (FTC) launched an investigation into OpenAI over potential violations of consumer protection laws related to its AI chatbot ChatGPT, which is accused of spreading false information and flouting data privacy rules.

According to a report by the Washington Post, the consumer protection watchdog sent a 20-page letter to OpenAI demanding detailed information on its business operations, including its privacy policy, AI technology, data security measures, and processes.

The letter is the latest move by regulators to scrutinize the potential risks of generative AI, a type of artificial intelligence that can be used to create realistic and convincing text, images, and videos. ChatGPT launched to great acclaim in November, sparking an AI “arms race.”

Also read: Google’s Bard AI Chatbot Now Reads Images and Speaks, Expands to EU

ChatGPT accused of causing users harm

Per the report, the FTC is investigating whether ChatGPT caused people harm by providing wrong answers to their questions. It wants to know if the company “engaged in unfair or deceptive privacy or data security practices” that caused users “reputational harm.”

The Federal Trade Commission asked OpenAI about the safeguards it put in place to prevent its artificial intelligence models from “generating statements about real individuals that are false, misleading, or disparaging.”

OpenAI founder and CEO Sam Altman expressed disappointment that he only found out about the FTC investigation via a leak to the Washington Post. Writing on Twitter, Altman said the move would “not help build trust,” but added the company will work with the FTC.

“It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law,” he said. “We protect user privacy and design our systems to learn about the world, not private individuals.”

Altman also spoke about OpenAI’s latest technology, GPT-4. He said the model was “built on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it.”

“We’re transparent about the limitations of our technology, especially when we fall short,” the CEO stressed.

At the time of writing, the Federal Trade Commission had not officially commented on the matter.

More legal headaches for OpenAI

The FTC probe is not the only legal challenge that OpenAI has to worry about. As MetaNews previously reported, OpenAI was sued for $3 billion in a class action accusing the ChatGPT creator of stealing user data.

According to the complaint filed in a California federal court on June 28, OpenAI allegedly used “stolen private information” to “train and develop” its products, including ChatGPT 3.5, ChatGPT 4, Dall-E, and Vall-E.

Last week, comedian Sarah Silverman and two other authors filed a lawsuit against OpenAI and Meta, alleging that the companies’ AI systems were trained on copyrighted material from their books without their permission.

The authors claim that the companies used “shadow libraries” of copyrighted material to train their AI systems, and that this constitutes copyright infringement.

Regulatory concerns

The rapid development of AI has raised concerns about the potential risks of the technology, such as bias, discrimination, and privacy violations. As a result, regulators around the world are beginning to pay close attention to the emerging industry.

Governments are looking at how existing regulations, such as those governing copyright and data privacy, can be applied to AI. They are also considering new rules that may be needed. Two key areas of focus are the data that is fed into AI models and the content they generate.

In the United States, Senate Majority Leader Chuck Schumer has called for “comprehensive legislation” to ensure safeguards on AI, the Washington Post reported. He also pledged to hold a series of forums later this year aimed at “laying down a new foundation for AI policy.”

Recently, Pope Francis released guidelines for the responsible development of AI. China and Europe are also tightening and fine-tuning artificial intelligence regulations.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.