Legal expert Peter Salib highlights the need for stringent regulation of AI-generated content, questioning whether it is protected under the First Amendment amid rising concerns over its potential risks.
The First Amendment’s protection of content generated by artificial intelligence (AI) is coming under scrutiny from legal experts in light of the technology’s rapid evolution and increasing capabilities.
Assistant professor of law Peter Salib of the University of Houston Law Center contends that appropriate regulation of AI is necessary to avert potentially disastrous outcomes.
AI Outputs Are Not Protected Speech @SSRN https://t.co/7BZO7qjXnm
— New Media Beat (@OWWRGMN) February 15, 2024
The crux of the matter
The heart of the debate is whether AI outputs, especially those generated by large language models (LLMs), should be considered protected speech under the First Amendment. Some believe that these outputs should be given the same protection as human speech since they are undeniably speech-like and expressive.
However, Salib cautions that treating AI outputs as protected speech would make it difficult to regulate these systems effectively. He draws attention to the growing dangers associated with artificial intelligence: large language models can devise new chemical weapons, help non-programmers hack critical infrastructure, and engage in sophisticated manipulation.
The risks they pose to human life, limb, and freedom are significant. According to Salib’s research, threats posed by near-future generative AI systems include bioterrorism, the manufacture of pandemic viruses, and even fully automated drone-based political assassinations.
Pleased to share that my newest article, "AI Outputs Are Not Protected Speech," is forthcoming in @WashULRev.
The article has, I think, important implications for impending federal and state laws designed to reduce catastrophic risk from AI. https://t.co/V3kih0HX9k pic.twitter.com/bzQU5mga5g
— Peter N. Salib (@petersalib) February 13, 2024
Regulating AI speech outputs
Salib argues that although AI outputs may appear expressive and speech-like, they are not human speech. Unlike conventional software, which is written by people with specific ideas to convey, AI systems can be prompted to say almost anything: users can pose open-ended questions and elicit information that no one had known or considered beforehand. Because of this distinction, AI speech outputs are not entitled to the highest degree of constitutional protection.
He therefore suggests that regulation should focus on AI outputs themselves rather than on the underlying systems, since it is currently impossible to write legal rules mandating safe code for AI. Rules should dictate what AI is permitted to say, and depending on the level of danger posed by the outputs, the law could require that models remain unreleased or even be destroyed. This approach, he argues, would encourage AI firms to invest in safety research and stringent protocols.
Additionally, Salib has been invited to speak on why AI outputs are not protected speech.
💡Join us on Mar 13 (Wed) to hear Dr Peter Salib @petersalib explain why as a matter of First Amendment law, free speech theory, and computer-scientific fact, AI outputs are not protected speech.
🔗Register: https://t.co/f1YMCrDDqt pic.twitter.com/A0jOfN5Au8
— Philip K.H. Wong Centre for Chinese Law (@CCLHKU) March 6, 2024
According to the article, generative AI systems are an advanced technology with enormous potential across all kinds of human endeavors. They could accelerate economic growth, drive new discoveries, treat illnesses, and even help billions of people escape poverty. However, as with all new technology, that power brings risk as well as reward.
The article states that a variety of AI disasters are looming but preventable, though only if governments succeed in putting sensible safety regulations in place and scientists succeed in producing the innovations required to make those regulations work. It adds that the First Amendment, if read to protect AI outputs, would pose a serious obstacle to those efforts.