EU’s AI Regulation Framework: A New Era in Digital Governance

The European Union has taken a landmark step in the digital era by finalizing a legal framework for regulating artificial intelligence (AI). The development came after intense negotiations spanning 37 hours between the European Parliament and EU member states.

The deal, described by European Commissioner Thierry Breton as “historic,” will govern AI and extend its reach to social media and search engines, affecting companies like X, TikTok, and Google. According to Breton, the effort involved roughly 100 people working over three days to conclude the agreement.

Spain’s Secretary of State for AI, Carme Artigas, facilitated the discussions. The agreement, backed by EU countries including France and Germany, was reached amid pressure from tech companies advocating more lenient rules to encourage innovation, especially among smaller firms.

EU leading the global AI regulatory race

The move places the EU at the forefront of AI regulation, ahead of other global players such as the US, China, and the UK. Details of the law, set to be enacted no earlier than 2025, remain sparse. Still, its implications are vast, focusing on protecting the public from the potential dangers of rapidly advancing AI technologies.

The political agreement was hard-won, with debate centering on general-purpose foundation models versus specific-use AI systems. A key sticking point was AI-driven surveillance, which could be used by police, employers, or retailers for real-time public monitoring and emotional-stress recognition.

The European Parliament secured a ban on real-time surveillance and biometric technologies, including emotion recognition. However, exceptions were made for urgent situations such as unexpected terrorist threats, searches for missing persons, and criminal investigations.

MEPs Brando Benifei and Dragoș Tudorache, who led the Parliament’s negotiating team, emphasized the need for independent authorities to oversee predictive policing, protecting against misuse by law enforcement and upholding the presumption of innocence.

Ensuring a human-centric AI approach

Benifei said the goal was to create legislation promoting a human-centric, rights-respecting AI ecosystem in Europe. This aligns with the European vision of harnessing the AI revolution’s benefits while upholding fundamental human values and rights.

Tudorache clarified that the intention was not to deprive law enforcement of necessary tools but to prohibit AI technologies that could preemptively identify criminals.

A risk-based regulatory framework

The agreement introduces a tiered system for regulating AI, with the highest scrutiny applied to technologies posing risks to health, safety, and human rights. Initially, the focus was on systems with over 10,000 business users. However, this has shifted to a measure based on the computational intensity required to train a model, counted in floating-point operations (FLOPs). Only one model, GPT-4, currently fits this high-risk category.
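The compute-based criterion above can be sketched in a few lines. This is an illustrative assumption, not text from the law: the 1e25-FLOP cutoff reflects the threshold later published for “systemic risk” general-purpose models, and the 6 × parameters × tokens estimate is a common rule of thumb for training compute, not an official formula.

```python
# Hypothetical sketch of a FLOP-based risk tier, assuming a 1e25-FLOP
# cutoff and the common "6 FLOPs per parameter per training token" estimate.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # assumed cutoff, not quoted from the Act

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def risk_tier(training_flops: float) -> str:
    """Map total training compute to a coarse regulatory tier."""
    if training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "highest scrutiny"
    return "standard obligations"

# Example: a 70B-parameter model trained on 2T tokens
flops = estimate_training_flops(70e9, 2e12)  # 8.4e23 FLOPs
print(f"{flops:.1e} FLOPs -> {risk_tier(flops)}")
```

Under these assumptions, a 70B-parameter model trained on 2 trillion tokens lands well below the threshold, which is consistent with the article’s claim that only the very largest models qualify.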

This regulation is expected to set a standard, influencing how other governments approach AI governance. Anu Bradford, a Columbia Law School professor and expert in EU digital regulation, believes that while not every provision of the EU model will be universally adopted, many of its core principles will likely be emulated worldwide. Companies conforming to EU regulations may extend similar practices to other markets, since developing separate models for different regions is inefficient.
