Experts Warn of AI’s Growing Threat to Journalism in Senate Subcommittee Hearing

The Senate Subcommittee on Privacy, Technology, and the Law held its hearing on “Artificial Intelligence and the Future of Journalism” at the U.S. Capitol on Jan. 10, 2024, in Washington, D.C.

Experts warned of the threat artificial intelligence (AI) poses to journalism at the Jan. 10 hearing. Media executives and academic experts testifying before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law said AI is contributing to the big tech-fueled decline of journalism.

They also described the intellectual property disputes arising as AI models are trained to do the work of professional journalists, and raised alarms about the growing danger of misinformation powered by AI.

Copyright issues

Generative AI systems that produce images, text, or other media must be trained on vast amounts of data. Major AI developer OpenAI partnered with the Associated Press (AP), a U.S.-based nonprofit news agency, to gain access to high-quality text data: OpenAI licensed part of the AP’s archive, while the AP gained access to OpenAI’s products and technology.

OpenAI has a similar partnership with the multinational media company Axel Springer, under which ChatGPT will summarize articles from news outlets owned by Axel Springer, complete with links and attributions. However, not all news outlets have such deals. The New York Times has taken OpenAI and Microsoft, its major investor, to court.

In the lawsuit, the New York Times argues that OpenAI’s models were trained on its material and now offer a competing product, causing billions of dollars in statutory and actual damages. On Jan. 8, OpenAI responded with a blog post contesting the Times’ legal claims and outlining its various actions to support a healthy news ecosystem.

The New York Times lawsuit is the highest-profile of the copyright cases brought against AI developers. In July 2023, comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey sued OpenAI and Meta for training their AI models on the writers’ work without permission.

Similarly, in January 2023, artists Kelly McKernan, Sarah Andersen, and Karla Ortiz sued Midjourney, Stability AI, and DeviantArt, companies that developed image-generating AI models, for training those models on their work. U.S. District Judge William Orrick dismissed parts of the lawsuit in October, and the plaintiffs amended and refiled it in November.

Roger Lynch, CEO of Condé Nast, the media company that owns publications including The New Yorker, Wired, and GQ, argued that generative AI tools have been built with stolen goods. At the hearing, he called for ‘congressional intervention’ to ensure AI developers pay publishers for their content.

By contrast, Curtis LeGeyt, president and CEO of the National Association of Broadcasters, a trade association, said talk of legislation was premature, contending that existing copyright protections should apply. He argued that the marketplace should be allowed to work once there is clarity on how current law applies to generative AI.

Concerns about misinformation

LeGeyt also warned the senators about the danger that AI-generated misinformation poses to journalism. Using AI to manipulate, doctor, or misappropriate the likenesses of trusted personalities, he said, risks spreading misinformation or fraud.
