The European Union (EU) continues to push for comprehensive laws regulating the use of generative AI technologies, including OpenAI’s well-known chatbot ChatGPT. Among the key requirements of the proposed rules, companies that deploy generative AI technologies must disclose any copyrighted material used to develop their systems.
According to Supantha Mukherjee in a Reuters article, the proposed AI Act, which has been in development for almost two years, seeks to safeguard individuals from the potential risks of cutting-edge technology, which saw a surge in interest and investment following the release of ChatGPT.
The AI Act classifies AI tools according to their perceived risk level: from minimal to limited, high, and unacceptable. While high-risk tools will not be banned under the proposals, companies deploying them will have to be highly transparent about their operations and comply with strict ethical and legal standards.
Among the areas of concern are biometric surveillance, misinformation, and discriminatory language. Another contentious part of the proposal is the requirement that businesses using generative AI tools, such as ChatGPT or the image generator Midjourney, reveal whether copyrighted material was used to create their systems.
The requirement to disclose copyrighted material was favored over an outright ban on using copyrighted material to train generative AI models. According to Reuters’ Supantha Mukherjee, this provision was added to the AI Act only a few weeks ago.
“Against conservative wishes for more surveillance and leftist fantasies of over-regulation, parliament found a solid compromise that would regulate AI proportionately, protect citizens’ rights, as well as foster innovation and boost the economy,” said Svenja Hahn, a European Parliament deputy.
Mixed reactions to EU proposals
According to the proposals, offenders will be fined up to 30 million euros or 6% of their global profits. So a company like Microsoft, which is backing ChatGPT maker OpenAI, could face a fine of more than $10 billion if found on the wrong side of the rules.
The proposed regulation has already received mixed reactions from the tech industry and civil society groups. Some have praised it as a balanced and proportionate approach that will foster innovation and trust in AI, while others have criticized it as either too restrictive or too lenient.
Macquarie analyst Fred Havemeyer told Reuters that the “EU has been on the frontier of regulating AI technology,” adding the proposal was “tactful” rather than a “ban first, and ask questions later” approach.
The regulation is still subject to further negotiations between EU lawmakers and member states before it can become law.
Experts say this initiative might be what the tech industry needs, as the race to bring AI tools to market has intensified since the release of OpenAI’s ChatGPT in November last year.
Spreading like wildfire
ChatGPT provoked awe and anxiety around the world as it became the fastest-growing consumer application in history, reaching 100 million monthly active users in a matter of weeks.
Many other developers and businesses have therefore been scrambling to release their own alternatives, such as Baidu’s Ernie Bot in China, where ChatGPT is not officially available but is already being accessed through VPNs.
The influx of AI tools has worried some stakeholders in the tech industry, with the likes of Elon Musk proposing a halt to the development of new AI systems until proper guidelines are in place.
Ironically, the tech billionaire has since announced plans to unveil a ChatGPT alternative known as TruthGPT.
Generative AI has numerous potential uses, including the generation of information, instruction, entertainment, and research. It also raises major problems, among them plagiarism, bias, manipulation, and infringement of intellectual property rights. The EU’s proposed rules are an effort to address these issues and ensure that generative AI is used ethically and responsibly.