In a notable move highlighting the intricate relationship between generative artificial intelligence (AI) systems and intellectual property rights, Google has publicly committed to providing a legal shield for its users.
In an era where the risk of unintentional copyright infringement looms large over the use of generative AI, this decision is both significant and necessary. Microsoft and Adobe have trodden similar paths, signaling a cascading effect among tech giants eager to assuage user fears and legal anxieties over copyright issues tied to AI.
Shared fate: Protecting customers with generative AI indemnification. If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved. https://t.co/c3cR5HW4NH
— Phil Venables (@philvenables) October 13, 2023
Google’s Legal Protection to Customers
According to a recent blog post by Google, the tech firm has pledged legal protection for customers using products with built-in generative AI capabilities. However, this protection is not all-encompassing: it is limited to seven specifically outlined products, notably excluding Google’s Bard search tool from the safety net.
The products covered by this shield include Duet AI in Workspace, Vertex AI Search, and Visual Captioning on Vertex AI, among others.
With this legal commitment, Google has proclaimed,
“If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”
Notably, the purview of this assurance is twofold, encompassing both the training data and the outputs derived from Google’s foundation models.
Strategy of Intellectual Property Indemnification
This dual-pronged strategy emerges as a pioneering endeavor in intellectual property indemnification in AI. Firstly, Google’s assurance pertains to the use of training data. While this is not a groundbreaking form of protection, Google has recognized the need to explicitly affirm that it extends to scenarios where the training data incorporates copyrighted material.
Hence, if allegations arise asserting that Google’s use of training data infringes upon third-party intellectual property rights, Google is poised to assume the mantle of responsibility.
Ensuring Ethical and Responsible Use of AI Tools
Secondly, the strategy shields users against potential legal action arising from the outputs produced by Google’s foundation models. This adds a layer of protection for scenarios where users inadvertently generate content that resembles published works. However, this safeguard is predicated on users not intentionally generating or using content to breach others’ rights.
Moreover, it’s crucial to highlight that while this protection is robust, it demands responsible use of AI. Indemnity is contingent upon users not intentionally creating or utilizing generated output to infringe upon others’ rights, and it requires adherence to responsible AI practices, such as using the provided tools to cite sources judiciously and ethically.
Taken together, the protection covers claims arising both from generated output and from Google’s use of training data in creating its generative AI models. Through this two-pronged strategy of generative AI indemnity protections, Google seeks to provide balanced, practical coverage against potential claims, thereby fortifying user confidence in employing generative AI products.
The Intriguing Case of the Excluded Bard Search Tool
Could Bard one day replace Google's Search Box? It sounds laughable given the propensity for hallucination.
Google just published a new paper that shows they can greatly improve the accuracy of chatbots like Bard by augmenting the prompt with information retrieved from a Google… pic.twitter.com/COiaSlMnjY
— Marie Haynes (@Marie_Haynes) October 10, 2023
However, the exclusion of Bard, Google’s AI chat service, offers an insightful peek into the shield’s limitations and selective applicability. Although potent and capable of myriad functions, such as generating computer code and crafting essays, Bard does not fall under Google’s intellectual property indemnity for generative AI.
Google’s commitment is a significant stride in this landscape where AI and copyright intermingle. Moreover, it doesn’t merely offer a safeguard but also demonstrates an effort to cultivate and fortify trust within its user base.
At the same time, while these indemnities provide potent protections, Google says it maintains a continuous dialogue with its customers, exploring specific use cases that might necessitate additional coverage.
Significantly, Google’s concerted effort to shield its users from potential copyright quagmires reflects a larger, industry-wide momentum toward ethical and legally sound AI practices. However, the deliberate exclusion of products like Bard underscores how intricate and selective the application of these legal safeguards remains.