
Meta Challenged as Holly Elmore Leads Protest on Open-Source Safety

In a bold move against Meta’s recent stance on open-source artificial intelligence (AI), Holly Elmore, an AI safety advocate, spearheaded a demonstration outside Meta’s San Francisco base.

The heart of the dispute lies in the potential dangers of freely releasing powerful AI model weights, which Elmore and her supporters argue creates a risk of “irreversible proliferation.”

Meta stirred the tech waters earlier this year by releasing the weights of its LLaMA family of models to AI researchers, a decision that diverges sharply from the policies of other tech giants. These weights, which encode the models’ learned behavior, can be modified and built upon once in researchers’ hands, raising concerns over misuse. Indeed, it didn’t take long for the weights to be leaked online, intensifying criticism of Meta’s open-source approach.

The risks of releasing weights


Elmore, once associated with the think tank Rethink Priorities, voiced her concerns:

“Releasing weights is a dangerous policy because models can be modified by anyone and cannot be recalled.”

She emphasized that the threat grows as models become more advanced and potent. When model weights are openly available, they can be put to malicious uses, from constructing phishing schemes to planning cyberattacks. Once the weights are in hand, Elmore cautions, safety guardrails can be bypassed with unsettling ease.

Peter S. Park of MIT echoes these sentiments. He warns that if the current open-source approach is carried over to more advanced AI capable of operating with greater autonomy, misuse could occur on a far larger and more harmful scale.

A counterargument: The open-source defense

Stella Biderman of EleutherAI, however, offers a different perspective. Biderman questions the assumption that releasing model weights leads directly to misuse, and argues that the term “proliferation,” borrowed from the language of weapons of mass destruction, is misleading in the AI context. Since the basic elements needed to build large language models are already out in the open, she contends, secrecy is unlikely to be a panacea against misuse.

She adds that urging companies to maintain secrecy around their models could have “serious downstream consequences for transparency, public awareness, and science.” Such a policy might inadvertently harm independent researchers and enthusiasts more than it safeguards against potential threats.

Navigating open-source waters in AI

The term ‘open-source’ in the AI realm is muddled. As Stefano Maffulli of the Open Source Initiative (OSI) points out, various organizations have co-opted the term, leading to an ambiguous understanding of what it truly signifies for AI. For software to be truly open-source, all of its components, from source code to training data, must be publicly accessible and reusable.

The future of open-source AI is still being shaped, with OSI actively trying to define its parameters. Maffulli remains adamant about the importance of an open approach, emphasizing that AI can only be “trustworthy, responsible, and accountable” if it aligns with open-source principles.

While the debate on open-source AI is far from settled, one thing is clear: the technology’s rapid advancement demands a careful balance between accessibility and security. As companies like Meta forge ahead with their policies, the tech community and the public will continue grappling with open-source AI’s challenges and implications.

