Google and Anthropic Develop AI Safety Standards

As debates on the responsible development and use of AI continue to take center stage at global conferences, Google is collaborating with US start-up Anthropic on a number of initiatives, including AI guardrails.

The collaboration covers AI safety standards, the use of Google’s TPU v5e accelerators for AI inference, and a shared commitment to the highest standards of AI security.

Protection against hackers

According to Metaverse Post, Anthropic will use Google’s latest Cloud TPU v5e chips to power its large language model, Claude, as part of this agreement. The chips are expected to provide high-performance, cost-effective solutions for medium- and large-scale AI training and inference workloads.

A Bloomberg report indicates Anthropic has also agreed to spend more than $3 billion on Google Cloud services over the next four years, in a deal that includes not only TPUs but other offerings as well.

According to Google, Anthropic will use Chronicle Security Operations to protect its cloud environment against hackers. The Google Cloud service is “designed to ease the task of detecting cyberattacks.”

Silicon Valley reports that Anthropic, which has used Google Cloud since its launch in 2021, is also adopting Security Command Center, a tool that helps firms fix vulnerabilities and insecure configuration settings in their cloud environments.

“Anthropic and Google Cloud share the same values when it comes to developing AI—it needs to be done in both a bold and responsible way,” said Google Cloud CEO Thomas Kurian.

“This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely and provides another example of how the most innovative and fastest-growing AI startups are building on Google Cloud,” he added.

This collaboration builds on the longstanding relationship between the two companies, with Anthropic previously relying on Google’s services such as the Google Kubernetes Engine, AlloyDB, and BigQuery to support its AI research and operational needs.

The timing

The announcement comes after both Google and Anthropic participated in the inaugural AI Safety Summit (AISS) at Bletchley Park, hosted by the UK government. Senior executives from the two companies attended the summit alongside other industry leaders, government officials, civil society representatives, and experts to address concerns related to advanced AI and strategies for its responsible development.

According to Google Cloud, this builds on Google and Anthropic’s respective collaborations “with Frontier Model Forum, an initiative to support the development of better measures for AI safety.”

This further underscores the commitment of Google and Anthropic to advancing the dialogue on AI safety and aligning it with societal values.

“Our longstanding partnership with Google is founded on a shared commitment to develop AI responsibly and deploy it in a way that benefits society,” said Anthropic CEO Dario Amodei.

“We look forward to our continued collaboration as we work to make steerable, reliable, and interpretable AI systems available to more businesses around the world,” added Amodei, who is also Anthropic’s co-founder.

Both tech firms are also working with a non-profit group called MLCommons, which works to create trustworthy third-party data about AI systems.
