The United States has started contemplating regulation of AI, following the lead of China and Europe. AI has become the tech buzzword of the year, with major tech companies rushing to embrace the trend.
The November launch of ChatGPT saw the AI-powered chatbot become the fastest-growing consumer app in history, hitting more than 100 million monthly active users.
2/ 🤔 US studies possible rules to regulate AI
-NTIA studying potential rules for AI regulation.
-Seeking input on AI accountability mechanism.
-NTIA advises White House on telecoms and info policy.
— NextBigWhat 🚀 (@nextbigwhat) April 12, 2023
Following the success of ChatGPT, industry leaders such as Google, Microsoft, Baidu, and Alibaba have commenced development of similar products.
However, the rapid growth and skyrocketing AI buzz have also grabbed the attention of authorities.
The National Telecommunications and Information Administration, an agency under the Commerce Department that advises the White House on telecommunications and information policy, is seeking feedback on the need for an “accountability mechanism” for AI.
Evidently, regulatory interest in this area is rising in tandem with ChatGPT user numbers.
US seeks trustworthy and safe AI
The agency is looking to explore possible measures that could be implemented to ensure AI systems are trustworthy, legal, ethical, effective, and safe.
“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said NTIA Administrator Alan Davidson.
Last week, President Joe Biden stated it is yet to be determined whether AI poses any danger.
“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” said Biden.
In its request for comment, the NTIA notes there is “growing regulatory interest” in an accountability mechanism for AI.
NTIA plans to draft a report that looks at “efforts to ensure AI systems work as claimed – and without causing harm.”
The report aims to inform the Biden Administration’s work to create a cohesive federal government approach to AI-related risks and opportunities.
You may be right. It just seems that the minute you say, "We know you value us so little that you'd replace us with AI if you could, so here's how we would regulate that," you're conceding to something so broken that it's hard to imagine any negotiation proceeding in good faith.
— Mark Harris (@MarkHarrisNYC) March 22, 2023
Meanwhile, the Center for Artificial Intelligence and Digital Policy has asked the US Federal Trade Commission to halt OpenAI’s release of GPT-4, calling it “biased, deceptive, and a risk to privacy and public safety.”
Public appreciation for study over pause
More than 1,000 tech leaders have signed an open letter calling for a pause on major AI development and training until developers can better understand how these technologies function.
Elon Musk and Steve Wozniak were among the more than 1,377 notable figures who have signed the letter. Other signatories include AI experts from Google and Meta, prominent computer science professor Stuart Russell, Turing Award winner Yoshua Bengio, and the CEOs of several tech companies.
However, the public appears to appreciate governments’ efforts to study, rather than pause, AI development.
“Good thing they start with that! I don’t think it was a good idea to pause the development, but for governments to explore how to deal with the phenomenon is really good,” stated a Redditor.
Most of the US government couldn’t even spell AI, nevermind regulate it.
— Douglas Karr (@douglaskarr) April 5, 2023
Another Redditor commented on the challenges of regulating AI in a timely manner, stating “the pace at which legislation is discussed, argued about and written…if you’ve ever heard the phrase ‘like molasses in January…'”
The user likened trying to regulate AI to “trying to catch water in a sieve,” suggesting that new AI advances could outpace regulations that are slow to be developed and implemented.
The European Union has already announced plans to limit the spread of targeted political ads based on personal characteristics.
Meanwhile, the UK has published proposals for regulating AI, with a focus on ensuring transparency and accountability in the use of such technologies.
“We didn’t properly regulate social media out of the gate and it has damn near done us in. Not sanguine we’ll do better with AI but certainly hope so,” tweeted Brennan Gilmore.
It now remains to be seen how the US will shape the future of AI.