UK To Respond To AI White Paper With Regulatory Tests

The UK government will publish the tests it will use to decide whether new AI laws are needed, balancing safety with industry growth. The AI Safety Institute will play a central role: legislation could be triggered by safety risks or by non-compliance from AI firms such as OpenAI and Google.

The approach is in line with the government’s reluctance to create a tougher regulatory regime for the fast-developing technology, according to the Financial Times. In the coming weeks, British ministers will publish the criteria setting out the circumstances under which they would enact curbs on powerful artificial intelligence (AI) models created by Google, OpenAI, and other companies.

To monitor the technology, the UK government established the AI Safety Institute, a body of academics and machine-learning experts. One test that could trigger an intervention is the AI Safety Institute failing to identify the risks around the technology. Another trigger for legislation would be AI companies failing to uphold their voluntary commitments to avoid harm.

UK government’s approach to AI

A statement published by the government confirms that the tests will reflect the UK’s light-touch approach to regulating AI. According to the statement, the government will take action to mitigate risks and support safe and responsible AI technology as required, while maintaining a pro-innovation approach in close consultation with civil society and industry.

That philosophy will run through the ‘tests’ for passing any AI legislation the UK government proposes, with the proviso that new laws should not impair innovation without cause.

It is understood that the tests will be published as part of the consultation process for the government’s AI white paper, which was published in March 2023.

The white paper did not escape criticism. Karen Yeung, a professor at the University of Birmingham, and PhD candidate Emma Ahmed-Rengers said the document was an inadequate basis for sound policy, adding that it could not serve as the foundation of an effective and legitimate regulatory framework in the public interest.

Others, however, pointed out that AI regulation in the UK is already being taken up by sectoral regulators, including Ofcom and the Information Commissioner’s Office, which have begun conducting algorithmic audits within their areas of jurisdiction.

UK active on AI norms internationally

In November, AI companies including OpenAI, Google DeepMind, Microsoft, and Meta signed several voluntary agreements on the safety of their products as part of the global AI Safety Summit convened by the UK government.

These companies agreed to let the UK’s AI Safety Institute evaluate the safety of the models powering products such as ChatGPT before they are rolled out to businesses and consumers.

Evaluation of these models is now underway, but it remains unclear how the evaluations will be conducted and whether the AI companies will grant comprehensive access. According to one government official, the parties are lucky at present because the process relies on goodwill on both sides; the official added that evaluating these models is character-dependent and CEO-dependent.

