Europe’s much-awaited AI Act is unlikely to become law before December, as lawmakers have failed to agree on how to regulate so-called foundation models, the technology underlying tools such as OpenAI’s ChatGPT, Reuters reported this week, citing sources.
One of the key sticking points is how to define foundation models. Spain, which holds the rotating EU presidency until the end of the year, is pushing for strict rules and has proposed a tiered system of regulation, with the most stringent requirements applying to models that could potentially harm a large number of people.
EU lawmakers fail to reach deal on AI – for now
In June, the European Parliament, the EU’s directly elected legislative body, approved its draft of a new law to regulate artificial intelligence, called the AI Act. The law aims to restrict the use of AI systems considered high-risk, such as facial recognition software.
It would also require companies that develop AI models such as those behind ChatGPT or Bard to disclose more information about the data used to train them. The draft rules must be agreed upon by the EU’s three main institutions: the European Commission, the European Parliament, and the Council of the EU.
According to Reuters, European lawmakers have discussed the AI Act on three occasions in “trilogues,” three-way negotiations among those institutions, to work out a final version of the proposed law.
A fourth trilogue was expected to take place on Oct. 24, but details of that meeting could not be immediately established. Another trilogue meeting has already been slated for December because “a final agreement is unlikely to be reached in that [fourth] meeting.”
This has prompted concern that the AI Act, first proposed by the EU Commission in 2021, may slip to 2024 due to a lack of consensus. European lawmakers had hoped to pass the law before year-end.
The Spanish government wants the final version of the AI Act to be agreed during its presidency of the EU, which ends in December, and “has proposed compromises in a bid to speed up the process.”
The proposal includes a “tiered approach” for regulating foundation models with 45 million users or more, as reported by Reuters. Spain also wants “additional obligations for very capable foundation models, including regular vetting to uncover potential vulnerabilities.”
Foundation models are AI programs trained on vast amounts of data from the internet; they underpin chatbots such as ChatGPT and Google’s Bard. Because they can learn from new data, they can be adapted to a wide range of tasks, from writing poems to solving math problems.
Some European Union member states argue that foundation models should be defined based on their size and complexity, while others believe that they should be defined based on their potential to be used for harm.
Another challenge is that such models are constantly evolving, making it difficult to develop regulations that will remain effective over time. Key points for Europe’s AI Act include requirements for developers to assess risks, test models, and examine bias.
However, smaller companies are concerned about the difficulty of complying with the new rules. Earlier this month, the U.S. government raised concerns about the AI Act, warning that it could hamper investment in the technology and favor large AI companies over smaller rivals.
The State Department warned that some rules in Europe’s AI Act are based on terms that are “vague or undefined.” The EU’s AI law is the first comprehensive attempt to regulate AI. The Act is expected to have a significant impact on entities that develop and use AI technologies.