AI Speech Protection Questioned by Legal Authorities

Legal expert Peter Salib highlights the need for stringent regulation of AI-generated content, questioning its protection under the First Amendment amid rising concerns over the potential risks.

The First Amendment’s protection of content generated by artificial intelligence (AI) is coming under scrutiny from legal experts in light of the technology’s rapid evolution and increasing capabilities.

Peter Salib, an assistant professor of law at the University of Houston Law Center, contends that appropriate regulation of AI is necessary to avert potentially disastrous outcomes.

The crux of the matter

The heart of the debate is whether AI outputs, especially those generated by large language models (LLMs), should be considered protected speech under the First Amendment. Some believe that these outputs should be given the same protection as human speech since they are undeniably speech-like and expressive.

However, Salib cautions that considering AI outputs as protected speech would make it difficult to regulate these systems effectively. He draws attention to the growing dangers associated with artificial intelligence. Large language models can create new chemical weapons, aid non-programmers in hacking critical infrastructure, and engage in complex games of manipulation.

Additionally, the risks these systems pose to life, limb, and liberty are significant. According to Salib’s research, threats posed by near-future generative AI systems include bioterrorism, the manufacture of pandemic viruses, and even fully automated drone-based political assassinations.

Regulating AI speech outputs

Salib argues that although AI outputs may seem expressive and speech-like, they are not human speech. Unlike conventional software, which is written by people to convey specific ideas, an AI system is built to say almost anything: open-ended prompts allow users to elicit information from models that no human author knew or intended to communicate. Because of this distinction, Salib contends, AI outputs differ from human speech and are not entitled to the highest degree of constitutional protection.

He suggests that regulation should focus on AI outputs themselves rather than on the underlying code, since it is currently impossible to write legal rules mandating that AI systems be programmed safely. Rules should instead dictate what AI is permitted to say, and depending on the level of danger posed by the outputs, the law could require that models remain unreleased or even be destroyed. This approach would encourage AI firms to invest in safety research and stringent protocols.

Additionally, Salib has been invited to speak on why AI outputs are not protected speech.

According to the article, generative AI systems are an advanced technology with enormous potential across all kinds of human endeavors. They could accelerate economic growth, drive new discoveries, help treat illnesses, and even lift billions of people out of poverty. However, as with all new technology, this power carries risk as well as reward.

The article stated that a variety of AI disasters are imminent but preventable, provided that governments succeed in putting sensible safety regulations in place and scientists succeed in producing the innovations required to implement them. It added that the First Amendment, if read to protect AI outputs, would pose a serious obstacle to such regulations.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.