US and UK to Announce Collaboration on AI Safety

The US and the UK will closely collaborate on AI guardrails in a new partnership that seeks to promote AI development and use for the benefit of users.

This comes as world leaders acknowledge the potential AI has for transforming economies but also recognize its perils, such as privacy risks, the proliferation of misinformation, bias, and discrimination against certain groups of people.

Both countries are to announce the collaboration this Wednesday, Nov. 1, just days after US President Joe Biden issued an executive order on the technology aimed at protecting the public. The new collaboration is expected “to marry the guardrails” announced in the executive order with the existing work of the UK’s Frontier AI Taskforce.

VP Kamala Harris to lay out an AI strategy

A UK official close to the planning of the UK’s AI safety summit to be held at Bletchley Park on Nov. 1 and 2 told Politico of the developments.

“We plan to announce close bilateral collaboration with the US Safety Institute this week.”

According to the Politico report, the two countries are expected to announce their respective institutes as the AI safety summit commences.

Representing the Biden administration will be Vice President Kamala Harris, who will lay out the US AI strategy on Wednesday, outlining the risks, calling for international cooperation, and announcing the US AI Safety Institute. A US official, granted anonymity to discuss the internal plans, also indicated the institute will be housed at the Department of Commerce.

“It will work to create guidelines, standards, and best practices for evaluating and mitigating the full spectrum of risks,” said the US official.

“We must address the full spectrum of risk, from potentially catastrophic risks to societal harms that are already happening, such as bias, discrimination, and the proliferation of misinformation.”

VP Harris is also expected to announce a political declaration on the military use of AI by the US and a $200 million philanthropic initiative to ensure AI is used in the public interest. The fund is backed by ten foundations.

Collaborating on research

As part of the collaborative efforts, both the US and the UK will “also participate in information sharing and research collaboration,” according to the US official. Both countries will make their own announcements, while the US is expected to share information with like-minded institutes in other countries.

In the meantime, UK Prime Minister Rishi Sunak has indicated that he will set up an AI Safety Institute that will be responsible for evaluating, examining, and testing new types of emerging technologies.

The institute, according to the British Premier, will build on the work of the existing Frontier AI Taskforce, which he said has already been granted “privileged access” to the technology models of leading AI companies like Anthropic, Google DeepMind, and OpenAI.

For its part, the US executive order signed on Monday requires developers of AI systems to share safety test results and other critical information with the government. They are also required to ensure that their systems are safe before making them public.

According to the order, an “AI Safety and Security Board” will be set up to implement the requirements.

US leading the industry

Speaking on an episode of the POLITICO Tech podcast about the timing of the executive order’s release during the week of the UK AI summit, White House AI adviser Ben Buchanan said VP Harris’ presentation will also show how the US views AI policy and governance, as well as the need to collaborate with other countries.

“We’re trying to lead with substance here, and we’re trying to engage with other countries with substance, and this is a vision,” he said.

The UK’s tech secretary, Michelle Donelan, also concurred that it was easier for the US to lead the industry in a more transparent and safe way, as the industry is dominated by US firms.

“I know America and other countries will have plans for institutes too, but we can do it a lot quicker because we already have that initial organization in the Frontier AI Taskforce,” she said.

“We’ve already got that expertise setup, funding in there, and our processes allow us to do that at a quicker speed.”
