OpenAI secretly lobbied the European Union (EU) to weaken large parts of its AI Act in order to reduce the company’s regulatory burden, according to a report from Time. In public, meanwhile, CEO Sam Altman travelled the world calling for stronger AI guardrails.
The European Parliament, the EU’s directly elected legislative body, approved the proposed law on June 14, likely making the 27-nation bloc the first major economic power to put comprehensive AI regulations in place.
The law, which is not expected to take effect until 2025, restricts the use of AI systems considered high-risk, such as facial recognition software. It also requires developers of generative AI tools such as ChatGPT to disclose more details about the data used to train their systems.
EU accommodates OpenAI’s amendments
Several of OpenAI’s proposed amendments to the AI Act were incorporated into the final text of the law, the Time report said, citing documents about the company’s private engagement with EU officials, which it obtained from the European Commission.
In a 2022 submission, the American company argued for changes that would allow AI providers like itself to quickly update their systems for safety reasons. It also sought carve-outs permitting certain uses of generative AI in education and employment.
CEO Sam Altman argued that OpenAI’s general-purpose AI systems, such as GPT-3 and DALL-E 2, should not be considered “high risk” under the AI Act. The designation would subject the firm to stringent legal requirements, including providing details about how its systems work.
Altman, who recently visited several countries calling for tougher AI regulations, believes these requirements would be burdensome for OpenAI and could stifle innovation. In a seven-page white paper sent to European Commission and Council officials in September 2022, OpenAI said:
“By itself, GPT-3 is not a high-risk system. But [it] possesses capabilities that can potentially be employed in high risk use cases.”
“We believe our approach to mitigating risks arising from the general purpose nature of our systems is industry-leading,” the white paper says. “We are concerned that proposed language around general purpose systems may inadvertently result in all our general purpose AI systems being captured [as high risk] by default.”
As the report points out, OpenAI’s lobbying efforts were successful to some extent. The latest draft of the AI Act does not explicitly classify general-purpose AI systems as “high risk.” But it still requires that such systems be designed to minimize the risk of harm.
OpenAI’s double standards criticized
In May, Sam Altman told reporters in London that OpenAI could leave Europe if it found the regulations too onerous to comply with. He later walked back the warning, saying the company intends to cooperate with the European Union.
OpenAI joins the likes of Microsoft and Google in calling for softer EU regulations for large-scale AI providers. But critics have accused the startup of hypocrisy. Sarah Chander, senior policy advisor at European Digital Rights, said OpenAI got what “they asked for.”
“OpenAI, like many Big Tech companies, has concealed their financial interest in weakening the regulation by using the utility and public benefit of AI argument,” Chander told Time.
Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office, said OpenAI is essentially saying, “trust us to self-regulate.”
“It’s very confusing because they’re talking to politicians saying, ‘Please regulate us,’ they’re boasting about all the [safety] stuff that they do, but as soon as you say, ‘Well, let’s take you at your word and set that as a regulatory floor,’ they say no,” he stated, per the report.
OpenAI defends lobbying efforts
OpenAI defended its lobbying efforts, with a spokesperson telling Time that the company is simply trying to ensure the AI Act is effective and proportionate, and that it is committed to developing AI that is safe and beneficial to society.
“At the request of policymakers in the EU, we provided an overview of our approach to deploying systems like GPT-3 safely, and commented on the then-draft of the [AI Act] based on that experience,” the spokesperson said.
“Since then, the [AI Act] has evolved substantially and we’ve spoken publicly about the technology’s advancing capabilities and adoption. We continue to engage with policymakers and support the EU’s goal of ensuring AI tools are built, deployed and used safely now and in the future.”