A U.S. government analysis has reportedly raised concerns about the European Union’s (EU) upcoming AI law, warning that it could hamper investment in the emerging technology and favor large AI companies over smaller rivals.
In previously undisclosed documents, the U.S. State Department warns that some rules in Europe’s AI Act are based on terms that are “vague or undefined,” according to Bloomberg, which obtained the documents.
AI Act stifles ‘research and development’
In June, the European Parliament, the main legislative body of the EU, approved a new law to regulate artificial intelligence. Known as the AI Act, the law will restrict the use of AI systems that are considered to be high-risk, such as facial recognition software.
It will also require firms that develop AI systems like ChatGPT to disclose more information about the data used to train the bots. The law is expected to take effect in 2025, once the EU's three branches of power (the Commission, Parliament, and Council) agree on its final version.
Focusing on the European Parliament's version of the AI Act, the U.S. analysis said the new AI regulations would benefit the largest tech companies, which have the financial wherewithal to train machine learning systems, while smaller firms would likely lose out.
The analysis further warns that EU rules risk “dampening the expected boost to productivity and potentially leading to a migration of jobs and investment to other markets.”
According to the State Department, the new regulations would also likely curb "investment in AI R&D and commercialization in the EU, limiting the competitiveness of European firms," given the high cost of training large language models.
The United States has yet to develop its own AI rules, but one of its major concerns with the EU regulations is that the European Parliament focused on how AI models are developed. Per the Bloomberg report, the U.S. wants to regulate how such models are actually used.
The U.S. has shown concern about the EU AI law in the past. When the EU Commission first proposed the AI Act in 2021, America objected. In May, U.S. Secretary of State Antony Blinken pushed back against several of the EU Parliament’s proposals to control generative AI during a meeting with Commission officials in Sweden.
What does the EU’s AI law say?
The U.S. State Department's analysis, which includes "a line-by-line edit of certain provisions in the law," was reportedly shared with European counterparts in recent weeks. The U.S. has not publicly commented on the feedback beyond a brief statement from a State Department spokesperson, as reported by Computerworld:
"We do not comment on purportedly leaked information," the spokesperson said.
“Through our strong and continuous partnership with the European Union, we seek digital solidarity on key bilateral issues.”
Speaking about the AI Act in June, European Parliament President Roberta Metsola said the adoption of the new rules showed Europe’s commitment to the responsible development of artificial intelligence with a “balanced and human-centered approach.”
Making history by shaping our future.
This is what today's groundbreaking vote on the world’s first ever #AI legislation is all about.
It is about Europe taking the lead in digital innovation.
— Roberta Metsola (@EP_President) June 14, 2023
The current draft of the European Parliament’s AI Act proposes a risk-based approach to regulating artificial intelligence systems. AI systems would be categorized into different levels of risk based on their potential to harm consumers.
According to the law, the lowest-risk category covers AI used in video games or spam filters. The highest-risk category includes AI used for social scoring, a practice that assigns scores to individuals based on their behavior in order to decide matters such as loans or housing.
The EU says it will ban such programs. Companies that develop or use so-called high-risk AI would be required to provide information about how their systems work, a requirement intended to ensure that AI programs are fair and transparent and do not discriminate against individuals.
The AI Act comes amid warnings from some experts that artificial intelligence could pose a threat to humanity if it is developed too quickly.