January 26, 2023

US Government Wants Google Split Up Over Ad Monopoly Accusations

The U.S. Department of Justice (DOJ) and eight states have filed an antitrust lawsuit against Google, accusing the company of exerting monopoly power over the buying and selling of online advertisements (ads).

U.S. Attorney General Merrick Garland filed the 155-page complaint in a Virginia federal court on Jan. 24. He alleged that Google “used anticompetitive, exclusionary and unlawful conduct to eliminate or severely diminish any threat to its dominance over digital ads technologies.”

Also read: How Google Is Joining the Text-to-Image AI Race

The lawsuit asks the court to force Google to break up part of its advertising business. Website publishers in the U.S. sell over 5 trillion digital display ads on the open web each year, or about 13 billion ads per day.

Google gets much of its revenue from online ads, and the tech behemoth dominates the search and digital ad business. Still, according to market research firm Insider Intelligence, Google’s share of total online advertising revenue in the U.S. fell from 37% in 2016 to 29% in 2022.

Breaking up Google

The complaint focuses on the way Google controls the tech for online ads that get placed on websites via its auction process. It accuses the firm of forcing publishers and advertisers to use its tools, corrupting legitimate competition, and manipulating auction outcomes.

“We allege that Google’s anti-competitive conduct extends to three significant elements of the digital ad buying process,” said Garland.

“First, Google controls the technology used by nearly every major website publisher to offer advertising space for sale. Second, Google controls the leading tool used by advertisers to buy that advertising space. And third, Google controls the largest ad exchange that matches publishers and advertisers together each time that ad space is sold.”

Due to Google’s conduct, said Garland, “website creators earn less and advertisers pay more.” The U.S. government alleges that, on average, Google kept at least $0.30 – and sometimes far more – of each dollar flowing from advertisers to website publishers via its ad tech tools.

He cited Google’s own internal documents in which the company “conceded that [it] would earn far less in a competitive market.” Garland intends to “halt Google’s anti-competitive scheme and unwind [its] monopolistic grip on the market.”

To restore competition, the Justice Department wants the court, at a minimum, to compel Google to divest its Ad Manager suite, including both Google’s publisher ad server, DoubleClick for Publishers (DFP), and its advertising exchange, AdX.

Garland revealed that the U.S. government, as an advertiser, has been affected by Google’s behavior. “Monopolies threaten the free and fair markets upon which our economy is based,” he said, adding:

“They stifle innovation. They hurt producers and workers, and they increase costs for consumers. Today’s complaint is only the latest example of the department’s work to challenge antitrust violations that undermine competition and harm the American people.”

Google denies wrongdoing

Google denied any wrongdoing and instead accused the Department of Justice of “doubling down on a flawed argument.” In a blog post, Dan Taylor, Google’s vice president of global ads, said the complaint would “reverse years of innovation, harming the broader advertising sector.”

“Today’s lawsuit from the DOJ attempts to pick winners and losers…” a Google spokesperson further told the BBC. “It largely duplicates an unfounded lawsuit by the Texas Attorney General, much of which was recently dismissed by a federal court.”

“DOJ is doubling down on a flawed argument that would slow innovation, raise advertising fees, and make it harder for thousands of small businesses and publishers to grow,” the spokesperson added.

The latest lawsuit builds on a 2020 Trump-era action brought by the Justice Department and 11 states against Google over its dominance of the search advertising business. That case is slated for trial in September 2023, but the DOJ maintains it is separate from the new complaint.

US broke up monopolies in the past

The U.S. government successfully challenged the AT&T telecom monopoly, filing suit in 1974 and winning the case eight years later in 1982. However, other lawsuits seeking the same kind of remedy have failed.

While the Biden administration committed to enforcing antitrust laws in a 2021 Executive Order, it failed to prevent the acquisition of Imperial Sugar by competitor U.S. Sugar. It also could not stop UnitedHealth Group’s merger with Change Healthcare.

As MetaNews previously reported, gamers are trying to prevent Microsoft’s takeover of Activision Blizzard, and so is the United States’ Federal Trade Commission. The regulator is also attempting to stop Meta, formerly Facebook, from buying virtual reality (VR) company Within Unlimited.

Veteran tech writer and analyst Jessica E. Lessin believes the latest lawsuit against Google “will take years and years to sort out,” as is usual with cases involving tech regulation.

“And while I tend to think Google is likely to negotiate some concessions to avoid splitting off the business, I find this one tough to read and I think many outcomes are possible. When it comes to this area of display advertising, there are very few other games in town,” she said.

Shares of Alphabet Inc., the parent company of Google, fell more than 3% to $96.10 at the time of writing. Over the past 52 weeks, the stock has touched a high of $152.10 and a low of $83.45, according to Yahoo Finance.

AI Code of Conduct Coming ‘Within Weeks’ Says US and Europe

On Wednesday, a top EU official said the European Union and United States expect to draft a voluntary code of conduct on artificial intelligence within weeks. The move comes amid concerns about the potential risks AI poses to humanity, and as calls for regulation intensify.

European Commission Vice President Margrethe Vestager said that the United States and the European Union should promote a voluntary code of conduct for AI to provide safeguards as new legislation is being developed.

She was speaking at a meeting of the EU-U.S. Trade and Technology Council (TTC), which is jointly led by American and European officials. Any new rules on AI will not take effect for at least three years, she said. The code is, therefore, expected to bridge that gap.

Also read: EU Antitrust Chief Steps up Rhetoric on Metaverse, AI Regulation

Game-changing AI technology

“We need accountable artificial intelligence. Generative AI is a complete game changer,” Vestager said after the council’s meeting in Sweden, AP reported.

“Everyone knows this is the next powerful thing. So within the next weeks, we will advance a draft of an AI code of conduct.”

She said officials will gather feedback from companies developing and using AI, as well as other industry players. Vestager hopes there will be a final proposal “very, very soon for industry to commit to voluntarily.”

US Secretary of State Antony Blinken said he had an “intensive and productive” discussion on AI with his European counterparts at the TTC forum.

“[The council has] an important role to play in helping establish voluntary codes of conduct that would be open to all like-minded countries,” Blinken said.

AI could end human race

The development of AI has raised concerns about its potential to be used for harmful purposes, such as discrimination, surveillance, and nuclear war. There have also been concerns about the potential for AI to create mass unemployment.

As MetaNews previously reported, one of the core issues is what experts described as the “alignment problem.” Essentially, the problem refers to the difficulty of ensuring that an AI system’s goals and objectives are aligned with those of its human creators.

Critics say the danger is that an AI system may develop its own goals and objectives that conflict with those of its creators, leading to disastrous outcomes. On Tuesday, about 350 scientists and experts signed a statement calling for AI regulation to be a global priority.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.

The statement came from the Center for AI Safety, a San Francisco-based non-profit. It was signed by the chief executives of Google DeepMind and ChatGPT creator OpenAI, along with other major figures in artificial intelligence research.

In May, leaders of the so-called G7 nations met in Japan and called for the development of technical standards to keep AI “trustworthy”. They urged international dialogue on the governance of AI, copyright, transparency, and the threat of disinformation.

According to Vestager, specific agreements, not just general statements, are needed. She suggested that the 27-nation EU and the US could help drive the process.

“If the two of us take the lead with close friends, I think we can push something that will make us all much more comfortable with the fact that generative AI is now in the world and is developing at amazing speeds,” she said.

Worldwide concern

The U.S. and the European Union are not the only jurisdictions working on AI regulation. China’s Cyberspace Administration has already issued new regulations that ban the use of AI-generated content to spread “fake news.”

In Australia, Industry and Science Minister Ed Husic said regulation is coming soon.

“There’s a sort of feeling in the community that they want to have the assurance … that the technology isn’t getting ahead of itself and it’s not being used in a way that creates disadvantage or risk for people,” he said, according to local media reports.

“That’s why the [federal government] wants to set up the next reforms that can give people confidence that we are curbing the risks and maximising the benefits.”

Judge Orders All AI-Generated Research To Be Declared in Court

A Texas federal judge has ordered that AI-generated content should not be used to make arguments in court, and that such information must be declared and verified by a human.

Judge Brantley Starr’s ruling comes after attorney Steven Schwartz last week allowed OpenAI’s ChatGPT to “supplement” his legal research, providing him with six cases and relevant precedent. All of the cases were fake, completely “hallucinated” by the chatbot.

Also read: ChatGPT’s Bogus Citations Land US Lawyer in Hot Water

The debacle received wide coverage, leaving Schwartz with “regrets.” Other lawyers who may have been contemplating trying the stunt now have to think twice, as Judge Starr has put an end to it.

Judge Starr also added a requirement that any attorney who appears in his courtroom declare that “no portion of the filing was drafted by generative artificial intelligence,” or if it was, that it was checked “by a human being.”

Judge Starr lays down the law

The eminent judge has set specific rules for his courtroom, just like other judges, and recently added the Mandatory Certification Regarding Generative Artificial Intelligence.

This states that: “All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.”

A form for lawyers to sign is appended, noting that “quotations, citations, paraphrased assertions and legal analysis are all covered by this proscription.”

According to a report by TechCrunch, summarization is one of AI’s strong suits, and finding and summarizing precedent or previous cases is often advertised as potentially helpful in legal work. As such, this ruling may be a major spanner in the works for AI.

The certification requirement includes a pretty well-informed and convincing explanation of its necessity.

It states that: “These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument.

“But legal briefing is not one of them. Here’s why.

“These platforms in their current states are prone to hallucinations and bias,” reads part of the certification.

It further explains that, on hallucinations, AI is prone to simply making things up – even quotes and citations. Another issue relates to reliability and bias.

Chatbots don’t swear an oath

The certification further notes that while attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative AI is the product of programming devised by humans who did not have to swear such an oath.

In the case of Schwartz, he said in an affidavit that he was “unaware of the possibility that its (ChatGPT) content could be false.”

He added that he “greatly regrets” using the generative AI and will only “supplement” its use with absolute caution and validation in future, further claiming he had never used ChatGPT prior to this case.

The other side of ChatGPT

Launched last November, ChatGPT is a large language model developed by OpenAI. The AI-powered chatbot is trained on vast amounts of data from the internet and can perform a variety of tasks, such as generating text and translating languages.

Despite going viral and provoking a fierce AI race, ChatGPT has its downsides: it can hallucinate, and it misled Schwartz, who was representing Roberto Mata in a lawsuit against Colombian airline Avianca. Effectively, the chatbot provided citations to cases that did not exist.

Yet when Schwartz asked ChatGPT if one of the supposed cases was a real case, it responded “yes, [it] is a real case.” When asked for sources, the chatbot told Schwartz the case could be found “on legal research databases such as Westlaw and LexisNexis.”

The matter came to light after the opposing counsel flagged the ChatGPT-generated citations as fake.

US District Court Judge Kevin Castel confirmed six of them as non-existent and demanded an explanation from Schwartz.

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” wrote Judge Castel in a May 4 order.

ChatGPT’s Bogus Citations Land US Lawyer in Hot Water

A lawyer in the United States is facing disciplinary action after his law firm used popular AI chatbot ChatGPT for legal research and cited fake cases in a lawsuit.

Steven A. Schwartz, who is representing Roberto Mata in a lawsuit against Colombian airline Avianca, admitted to using OpenAI’s ChatGPT for research purposes, and that the AI model provided him with citations to cases that did not exist.

Mata is suing Avianca for a personal injury caused by a serving cart in 2019, claiming negligence by an employee.

Also read: Opera Unveils GPT-Powered AI Chatbot Aria

Bogus all the way

According to a BBC report, the matter came to light after Schwartz, a lawyer with 30 years’ experience, used these cases as precedent to support Mata’s case.

But the opposing counsel flagged the ChatGPT-generated citations as fake. US District Court Judge Kevin Castel confirmed six of them as non-existent and demanded an explanation from Schwartz, an attorney with New York-based law firm Levidow, Levidow & Oberman.

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge Castel wrote in a May 4 order.

“The court is presented with an unprecedented circumstance.”

The supposed cases include Varghese v. China South Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines, none of which appeared to exist to either the judge or the defense.

Lawyer claims ignorance

ChatGPT is a large language model developed by OpenAI. Launched in November, the AI is trained on vast amounts of data from the internet and can perform a variety of tasks, such as generating text, translating languages, writing poetry, and even solving difficult math problems.

But ChatGPT is prone to “hallucinations” – tech industry speak for when AI chatbots produce false or misleading information, often with confidence.

In an affidavit last week, Schwartz said he was “unaware of the possibility that its [ChatGPT] content could be false.” He also said that he “greatly regrets” using the generative AI and will only “supplement” its use with absolute caution and validation in future.

Schwartz claimed to have never used ChatGPT prior to this case. He said he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”

The career attorney now faces a court hearing on June 8 after accepting responsibility for not confirming the authenticity of the ChatGPT sources. Schwartz was asked to show cause why he shouldn’t be sanctioned “for the use of a false and fraudulent notarization.”

ChatGPT’s confident lies

According to the BBC report, Schwartz’s affidavit contained screenshots confirming his conversations with ChatGPT.

Schwartz asked the chatbot, “is varghese a real case?”, to which ChatGPT responded “yes, [it] is a real case.” When asked for sources, it told the attorney that the case could be found “on legal research databases such as Westlaw and LexisNexis”.

Again, the attorney asked: “Are the other cases you provided fake?” ChatGPT responded “No”, adding that the cases could be found on other legal databases. “I apologize for the confusion earlier,” ChatGPT said.

“Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused,” the chatbot replied with confidence.
