The UK's antitrust watchdog, the Competition and Markets Authority (CMA), outlined seven principles on Sept. 18 to ensure a fair and competitive AI ecosystem.
Regulating AI has become a hot topic around the world, as 2023 has turned into the year of generative AI.
The UK regulator unveiled the principles to kick-start a project tightening scrutiny of a rapidly growing technology.
The set of principles comes after the initial review of foundation models (FMs), which “aim to ensure consumer protection and healthy competition are at the heart of responsible development and use of FMs.”
“FMs are AI systems with broad capabilities that can be adapted to a range of different, more specific purposes. Recent developments in FMs and their rapid adoption across several user applications (such as ChatGPT and Office 365 Copilot) highlight their potential to spur innovation and economic growth,” reads the press release issued by CMA.
Following a wide-ranging preliminary review, the regulator believes that AI “could quickly affect the market and consumers.”
The outlined seven principles are expected to regulate AI systems trained on vast pools of data that can be adapted to various tasks, known as FMs.
The impact of FMs could reach a wider market range
The CMA’s report highlights the potential benefits for individuals and businesses when the development and use of FMs are successful.
These advantages may include the creation of new and improved products and services, easier access to information, breakthroughs in science and healthcare, and lower prices.
Furthermore, the report suggests that FMs could enhance market competition, enabling a wider range of firms to compete successfully and challenge established market leaders.
“This vibrant competition and innovation would benefit the economy as a whole, helping to increase productivity and generate economic growth,” reads the release.
The report also warns that if competition remains inadequate, or if developers disregard consumer protection laws, there could be negative consequences, such as harm to people and businesses.
“For example, people could be exposed to significant levels of false and misleading information and AI-enabled fraud. In the longer term, a handful of firms could use FMs to gain or entrench positions of market power and fail to offer the best products and services and/or charge high prices,” said CMA.
What are the guiding principles?
The governing body's seven principles are accountability, access, diversity, choice, flexibility, fair dealing, and transparency.
Under these unveiled principles, FM developers and deployers are held accountable for the outcomes they deliver to consumers.
Furthermore, there’s an emphasis on ensuring ongoing and unhindered access to essential components, promoting diversity in business models, and providing ample choices for businesses to decide how they utilize FMs.
Flexibility in adapting or utilizing multiple FMs based on specific needs is also a central theme.
Additionally, the principles emphasize fair practices, including the prevention of anti-competitive behaviors such as self-preferencing, tying, or bundling.
Lastly, transparency is paramount, with a focus on providing consumers and businesses with comprehensive information about the potential risks and limitations associated with FM-generated content, enabling them to make well-informed decisions.
In another piece of AI regulation news, Assemblymember Ash Kalra in California is championing legislation to protect artists' rights by restricting the unapproved use of AI to replicate actors.