Hugging Face, GitHub, and others have called on the European Union to ease regulations to avoid crimping the development of open-source AI models, as Europe finalizes its AI Act. Creative Commons, EleutherAI, LAION, and Open Future also support the effort.
In a paper addressed to EU policymakers, the six organizations suggest ways to make the AI Act “work for open source.” They also ask that open AI development practices not be burdened with “obligations that are structurally impractical to comply with or counterproductive.”
Also read: European Union Passes Draft Law to Regulate AI
Open source AI innovation under threat
As VentureBeat reported, the paper criticizes “overbroad obligations” that favor closed and proprietary AI development – like the models from ChatGPT-creator OpenAI and Google. It says such rules “threaten to disadvantage the open AI ecosystem.”
The companies argue that open-source AI models are more likely to be robust and reliable, as they can be tested and improved by a wider community of developers. They also believe open-source development will lead to more innovative artificial intelligence products.
“The AI Act holds promise to set a global precedent in regulating AI to address its risks while encouraging innovation,” the companies wrote in the paper titled, ‘Supporting Open Source and Open Science in the EU AI Act,’ which was released on July 26.
“By supporting the blossoming open ecosystem approach to AI, the regulation has an important opportunity to further this goal.”
Yacine Jernite, machine learning and society lead at Hugging Face, told VentureBeat that “it is important for people to be able to choose between base models, between components, to mix and match as they need.”
“Openness by itself doesn’t guarantee responsible development. But openness and transparency [are] necessary [for] responsible governance – so it is not that openness [should be] exempt from requirements, but requirements should not preclude open development,” he said.
The EU AI Act is poised to become a foundational piece of AI regulation. @aviskowron has spent the past two months working with open source advocates and policy experts at @github @OpenFutureEU @huggingface and @creativecommons to ensure that it works for the OS AI community. https://t.co/4pPzl8Dsu0
— EleutherAI (@AiEleuther) July 26, 2023
Tough European laws
The European Parliament, the main legislative body of the EU, approved the proposed AI Act on June 14, making the 27-nation bloc possibly the first major economic power to put in place comprehensive artificial intelligence regulations.
The law, which is not expected to take effect until 2025, proposes a risk-based approach to regulating AI: systems would be categorized into different levels of risk based on their potential to harm users. European lawmakers are currently finalizing the Act’s text.
According to the Act, the lowest risk category covers AI used in applications such as video games or spam filters. The highest risk category includes AI that could be used for social scoring – a practice that assigns scores to individuals based on their behavior, potentially determining access to things like loans or housing.
Europe will restrict AI systems considered high-risk, such as facial recognition software, and the EU says it will ban some such programs outright. The law also requires firms that develop AI like ChatGPT to disclose more details about the data used to train their chatbots.
EU AI Act is too narrow
GitHub senior policy manager Peter Cihon said that the European Union’s AI Act focuses too narrowly on the application layer. In its current form, he says, the law’s requirements may be burdensome for open-source developers, who are often hobbyists, nonprofits, or students.
“Ultimately, policymakers have been quite focused on one particular value chain, one particular model, and that tends to be the API model – but that doesn’t really apply in the context of open source,” Cihon explained.
Cihon expressed optimism that clear information about the open-source approach to development will prove beneficial as negotiations between the European Commission, the European Parliament, and the Council of the EU, which began in June, progress.
“The provisions in the sections of the [AI] Act that we’re talking about have not yet come up for discussion,” he said. “It [the EU] certainly starts the global regulatory conversation. So we’re optimistic that this can have benefits in [Washington] DC and beyond.”
Some developers of generative AI have chosen to share their models with the public, in line with the open-source ethos of collaboration and transparency. Stability AI has open-sourced its Stable Diffusion technology and Meta made Llama, a large language model, public.
Hugging Face and GitHub join the likes of OpenAI, Microsoft, and Google in calling for softer EU regulations for AI providers. As MetaNews previously reported, OpenAI secretly lobbied the EU to weaken large parts of its AI Act in order to reduce the company’s regulatory burden.