OpenAI CEO Sam Altman appeared before Congress on Tuesday to testify about the dangers and opportunities of AI, saying there is an “urgent” need for regulation.
The 38-year-old executive largely agreed with the members of a Senate subcommittee on the need to tame the increasingly powerful AI created by his company and other big tech firms like Google and Microsoft.
In his testimony, Altman implored lawmakers to regulate AI as members of the committee displayed a budding understanding of the technology.
The hearing also underscored the deep unease technologists and government officials feel over AI’s potential harms.
“We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful (AI) models,” he said.
Altman’s appearance comes after the viral success of ChatGPT, his company’s chatbot tool, which provoked an arms race over AI and sparked concerns from some lawmakers about the risks posed by the technology.
Tech firms from across the globe have deployed new AI tools in recent months, with the potential to change how people work and interact. The same tools have also drawn criticism over their potential to disrupt millions of jobs, spread misinformation, and perpetuate biases.
AI as a tool for manipulation
The OpenAI boss said the potential for AI to be used to manipulate voters and target disinformation is among “my areas of greatest concern,” especially because “we’re going to face an election next year and these models are getting better.”
Prior to the hearing, Altman also talked about OpenAI’s technology at a dinner with dozens of House members on Monday night, and reportedly met privately with several senators.
According to a New York Times report, Altman offered a loose framework to manage what happens next with the fast-developing systems, which some believe could fundamentally change the economy.
“I think if technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he said. “We want to work with government to prevent that from happening.”
Asked about AI’s impact on music, Altman said that content creators should have a say in how their voices, likenesses, or copyrighted content are used to train AI models.
He also told the committee that his company is working on a copyright system to compensate artists whose work was used to create something new.
“Creators deserve control,” he stated, adding that regulation should require AI-generated images to be labeled as such. However, some lawmakers questioned whether OpenAI was doing enough, asking why the company couldn’t implement such labeling immediately.
Lagging behind on regulation
Speaking to reporters after the hearing, Senator Richard Blumenthal, chairman of the Senate panel, said it was the first in a series of hearings intended to learn more about the potential benefits and harms of AI and eventually “write the rules.” He added that Altman seemed “to be pretty sincere,” while stressing that “Congress cannot be the gatekeeper” of AI regulation.
Senator Blumenthal conceded that someone else would need to step up and play the regulatory role because “the Federal Trade Commission doesn’t have the capacity right now.”
The senator also acknowledged Congress’ failure to keep up with the introduction of new technologies in the past.
“Our goal is to demystify and hold accountable those technologies to avoid some mistakes of the past. Congress failed to meet the moment on social media.”
Members of the subcommittee suggested that an independent body oversee AI and implement rules forcing companies to disclose how their models work and the data sets they use, as well as antitrust rules to prevent companies like Microsoft and Google from monopolizing the market.
Taking a leaf from the EU
Gary Marcus, an AI-skeptic professor, said the US and others had “acted too slowly with social media” regulation but still have choices to make on AI. He proposed a new cabinet-level agency to regulate the sector, an idea Altman also appeared to back.
IBM’s chief privacy and trust officer, Christina Montgomery, said the EU’s AI rules take a “regulating by context” approach and give the US a good lead to follow.
AI regulation has been a topic of great interest since the sector’s ChatGPT-driven boom, with a slew of new tools coming onto the market.
In Europe, lawmakers are set to introduce rules regulating the sector later this year, while China has drawn up AI regulations that comply with its censorship laws.
Tech figures such as Elon Musk have signed an open letter calling for a pause in AI development until there is clarity on regulation, citing the technology’s potential harm to humanity.
Senator Blumenthal said AI companies should lead with a “do no harm” approach, but conceded that AI development would not pause while regulators catch up.