AI Firms Voluntarily Commit to AI Safeguards at the White House


As the AI wave continues to boom with new updates every day, major AI-focused tech companies have voluntarily agreed with the White House on responsible AI development.

AI companies were scheduled to meet at the White House on Friday, July 20, to make their voluntary commitments, after which President Biden would deliver remarks. The promises include investments in cybersecurity and watermarking systems to indicate whether content is AI-generated.

Since the launch of ChatGPT by OpenAI in November last year, a spate of new AI systems has been unleashed onto the market, much to the concern of many who worry about the risks the technology may pose to humanity. This has led global leaders to scramble to come up with regulatory frameworks that govern the industry in a way that encourages innovation while keeping it safe.

Big names

In light of this, executives of seven big tech companies – Meta, Microsoft, OpenAI, Amazon, Google, Anthropic, and Inflection – have agreed to address the many risks that AI poses to humanity.

“US companies lead the world in innovation, and they have a responsibility to do that and continue to do that, but they have an equal responsibility to ensure that their products are safe, secure and trustworthy,” White House chief of staff Jeff Zients told NPR.

“We will use every lever that we have in the federal government to enforce these commitments and standards. At the same time, we do need legislation,” he added.

However, it is not yet clear how the government will ensure the companies meet their commitments and the possible course of action in the case of failure to meet such commitments.


The commitments

The commitments are centered around safety, information sharing, and transparency, as well as reporting vulnerabilities as soon as they arise.


According to reports, the White House sees the development as an attempt towards striking a balance between the supposed AI benefits and the risks associated with the technology.

This comes as the government has been lobbying for safeguards to be put in place. Zients also pointed out the need for pressure-testing products, safeguarding against cyberattacks, and preventing discrimination against particular groups of people.

The tech firms themselves have committed to third-party testing of their products before release, although there is no clarity yet on who the third parties will be or how they will be selected.

Additionally, the firms are taking responsibility for ensuring users can distinguish between AI-generated content and human-made content.

Last month, the EU asked online platforms to label AI-generated content as part of its efforts to combat disinformation.

A starting point

TS2 says the voluntary commitments expose the limitations of “what the Biden administration can do to regulate advanced AI models.”

The White House, however, views this as a stepping stone.

“The commitments the companies are making are a good start, but it’s just a start,” said Zients.

“The key here is implementation and execution in order for these companies to perform and earn the public’s trust.”

However, there are concerns that giving big tech firms a central role in regulating the sector could backfire, as the companies will seek to shape the rules to their own benefit.

Ifeoma Ajunwa, a law professor at Emory who studies the intersection of technology and work, called the approach “disappointing.”

“We also want to ensure that we are including other voices that don’t have a profit motive,” she said. “We should definitely invite corporate leaders and tech titans to be part of this conversation, but they should not be leading the conversation.”

Victor Menaldo, a political economy professor at the University of Washington, warned that involving big tech firms in the making of regulatory frameworks is risky, as they may take advantage of the process to elbow out up-and-coming businesses.

“The bigger established firms can kind of game it to benefit them, and the newcomers don’t have a say,” he said.

“Big companies love to do these kind of things because they’re already established, so they’re like ‘Oh, the rules of the road are going to benefit us.’”

Image credits: Shutterstock, CC images, Midjourney, Unsplash.