Search

New US Government ‘Safety Consortium’ Seeks to ‘Police’ AI

New US Government 'Safety Consortium' Seeks to 'Police' AI

The U.S. government announced a new consortium to support the development and deployment of what it calls “safe and trustworthy” generative AI as authorities push to enforce guardrails for the technology.

Department of Commerce Secretary Gina Raimondo launched the consortium on Thursday. More than 200 AI companies and entities joined the group, including OpenAI, Meta, Google, Amazon, Apple, Microsoft, Anthropic, Nvidia, Intel, IBM, and JP Morgan Chase.

The consortium also includes government agencies, academic institutions like the Massachusetts Institute of Technology and Johns Hopkins University, and other firms such as Mastercard, Visa, BP, and Qualcomm.

Also read:  ‘Rogue’ AI Makes Australian Lawmaker’s Boobs ‘Look Bigger’ On TV 

Setting AI standards

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” Raimondo said in a statement.

She revealed that the group will be housed under the U.S. AI Safety Institute and will contribute towards achieving the goals outlined in President Joe Biden’s executive order issued in October. The order focused on AI safety, data privacy, standards, and other things.

The consortium’s work will include the creation of “guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content,” the department said.

Raimondo later told a press conference that the job of the consortia “is to ensure that the AI Safety Institute’s research and testing are fully integrated with the broad community.”

She also spoke about the range of participants in the consortium, calling it “the largest collection of test and evaluation teams” and will focus on creating foundations for “new measurement science in AI safety.”

The Commerce Department said in December that it had started taking steps to write key standards and guidance for the safe deployment and testing of AI. The decision came following Biden’s order, which directed agencies to develop standards for AI testing and to tackle related chemical, nuclear, and cybersecurity risks.

Firms react to safety consortium

Participants in the consortia praised the initiative as an important step towards building responsible AI.

“The new AI Safety Institute will play a critical role in ensuring that artificial intelligence made in the United States will be used responsibly and in ways people can trust,” Arvind Krishna, IBM’s chairman and CEO, said in a statement.

Azon’s senior VP of global public policy and general counsel, David Zapolsky, said in a blog post that the company’s involvement with the consortium will help deliver a new standard for AI.

That standard “will enable the identification of proven, scalable, and interoperable measurements and methodologies to promote the development of trustworthy AI and its responsible use,” he said.

John Brennan, public sector general manager at Scale AI, said in a statement that the company applauds the AI safety consortium for “recognizing that test and evaluation and red teaming are the best ways to ensure that AI is safe, secure, and trustworthy.”

A red team is a group that pretends to be an attacker, intrudes into a company’s security infrastructure as instructed by that firm, then reports back so the entity can patch vulnerabilities in its software. The technique has been used for years to identify new risks in cybersecurity.

AI controversy

Generative AI is a type of artificial intelligence technology that, as the name suggests, can generate a myriad of content forms like text, images, and audio. This is the kind of tech behind OpenAI’s ChatGPT and Google’s Bard, now known as Gemini.

But the novel technology has been blighted by issues of misinformation. AI models have been found to be adept at creating convincing but entirely fabricated material.

The tech industry calls these sort of confident lies “hallucinations.” Microsoft’s Bing AI Chat lost it completely earlier this year, and a U.S. lawyer got into trouble after citing false information that he got from ChatGPT. There are many examples of AI hallucinating.

Researchers highlighted the inherent biases present in the data used to train AI systems like ChatGPT, leading to responses that perpetuate stereotypes. For example, an AI meme maker filter on TikTok came across as fatphobic, telling a plus-size woman, “You need a bigger beach.” Some AI has even been accused of racism.

Perhaps the most chilling aspect of generative AI lies in its use for malicious activities. Deepfakes, realistic video and audio forgeries, have raised concerns about privacy, abuse, political manipulation, and the erosion of trust in online media.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.

Welcome

Install
×