Regulation December 7, 2022
Campaigners Urge AI Act to Go Much Further
The EU is edging closer to passing an act that will minimize the potential harms of artificial intelligence (AI), but not everyone is happy with the bill in its current form.
Access Now is one body that has expressed concerns. The human rights group believes that the AI Act doesn’t do enough to protect asylum seekers and irregular migrants.
The EU AI Act
The EU act is designed to prevent AI from being used for subliminal manipulation, exploiting the vulnerabilities of specific groups of people, and establishing any form of social credit system.
The Czech Republic currently holds the rotating presidency of the Council of the EU and is responsible for guiding the bill to the finish line. Ivan Bartoš, Czechia’s Deputy Prime Minister for Digitalisation, hailed the bill in its current form as a “delicate balance between the protection of fundamental rights and the promotion of uptake of AI technology.”
The bill excludes traditional software systems from its scope but covers modern machine learning.
The EU AI Act does not protect the most vulnerable
Access Now, a digital rights group whose mission is to “defend and extend the digital rights of users at risk globally,” argues that the new bill does not do enough to protect everyone. Access Now and more than 190 partner organisations are calling on the EU to ensure the bill also protects irregular migrants and asylum seekers.
“Artificial intelligence tech is being deployed to intimidate, discriminate, and categorise certain groups of people,” said Caterina Rodelli, EU Policy Analyst at Access Now, on Tuesday.
“The EU has a responsibility to ensure the fundamental rights of all are upheld inside, outside, and at every border across the Union, and that these new tools are not used to reinforce prejudice and perpetuate oppression of certain groups. The AI Act must be amended now.”
Access Now went on to argue that if the AI Act fails to cover these groups, it will fail in its mission to promote “trustworthy AI.”
The AI Act aims to build an ecosystem of trust by proposing a legal framework for AI-based solutions while encouraging businesses to develop them. On technology, Europe has made no secret of its desire to export its values across the world, at least at the level of principles.
AI
AI Code of Conduct Coming ‘Within Weeks,’ Say US and Europe
On Wednesday, a top EU official said the European Union and the United States expect to draft a voluntary code of conduct on artificial intelligence within weeks. The move comes amid concerns about the risks AI poses to humanity, and as calls for regulation intensify.
European Commission Vice President Margrethe Vestager said that the United States and the European Union should promote a voluntary code of conduct for AI to provide safeguards as new legislation is being developed.
She was speaking at a meeting of the EU-U.S. Trade and Technology Council (TTC), which is jointly led by American and European officials. Any new rules on AI will not take effect for at least three years, she said. The code is therefore expected to bridge that gap.
Also read: EU Antitrust Chief Steps up Rhetoric on Metaverse, AI Regulation
Game-changing AI technology
“We need accountable artificial intelligence. Generative AI is a complete game changer,” Vestager said after the council’s meeting in Sweden, AP reported.
“Everyone knows this is the next powerful thing. So within the next weeks, we will advance a draft of an AI code of conduct.”
She said officials will gather feedback from companies developing and using AI, as well as other industry players. Vestager hopes there will be a final proposal “very, very soon for industry to commit to voluntarily.”
US Secretary of State Antony Blinken said he had an “intensive and productive” discussion on AI with his European counterparts at the TTC forum.
“[The council has] an important role to play in helping establish voluntary codes of conduct that would be open to all like-minded countries,” Blinken said.
AI could end the human race
The development of AI has raised concerns about its potential to be used for harmful purposes, such as discrimination, surveillance, and nuclear war. There have also been concerns about the potential for AI to create mass unemployment.
As MetaNews previously reported, one of the core issues is what experts described as the “alignment problem.” Essentially, the problem refers to the difficulty of ensuring that an AI system’s goals and objectives are aligned with those of its human creators.
Critics say the danger is that an AI system may develop its own goals and objectives that conflict with those of its creators, leading to disastrous outcomes. On Tuesday, about 350 scientists and experts signed a statement calling for AI regulation to be a global priority.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.
The statement came from the Center for AI Safety, a San Francisco-based non-profit. It was signed by the chief executives of Google DeepMind and OpenAI, the creator of ChatGPT, along with other major figures in artificial intelligence research.
In May, leaders of the so-called G7 nations met in Japan and called for the development of technical standards to keep AI “trustworthy”. They urged international dialogue on the governance of AI, copyright, transparency, and the threat of disinformation.
⚠️Accountability on #AI can't wait. It is NOW. Today #TTC kicked off work on a 1st voluntary AI #CodeOfConduct. We’ll work with our key partners & the #AI community on #safeguards to make AI responsible, safe & trustworthy. This is a huge step in a race we can't afford to lose. pic.twitter.com/WBcazIysiK
— Margrethe Vestager (@vestager) May 31, 2023
According to Vestager, specific agreements, not just general statements, are needed. She suggested that the 27-nation EU and the US could help drive the process.
“If the two of us take the lead with close friends, I think we can push something that will make us all much more comfortable with the fact that generative AI is now in the world and is developing at amazing speeds,” she said.
Worldwide concern
The U.S. and the European Union are not the only jurisdictions working on AI regulation. China’s Cyberspace Administration has already issued new regulations that ban the use of AI-generated content to spread “fake news.”
In Australia, Industry and Science Minister Ed Husic said regulation is coming soon.
“There’s a sort of feeling in the community that they want to have the assurance … that the technology isn’t getting ahead of itself and it’s not being used in a way that creates disadvantage or risk for people,” he said, according to local media reports.
“That’s why the [federal government] wants to set up the next reforms that can give people confidence that we are curbing the risks and maximising the benefits.”
AI
Judge Orders All AI-Generated Research To Be Declared in Court
A Texas federal judge has ordered that AI-generated content must not be used to make arguments in his court unless its use is declared and the material verified by a human.
Judge Brantley Starr’s ruling comes after attorney Steven Schwartz last week allowed OpenAI’s ChatGPT to “supplement” his legal research, providing him with six cases and relevant precedent. All the cases were fictitious, completely “hallucinated” by the chatbot.
Also read: ChatGPT’s Bogus Citations Land US Lawyer in Hot Water
The debacle received wide coverage, leaving Schwartz with “regrets.” Other lawyers who may have been contemplating a similar stunt now have to think twice, as Judge Starr has put an end to it, at least in his courtroom.
Judge Starr also added a requirement that any attorney who appears in his courtroom declare that “no portion of the filing was drafted by generative artificial intelligence,” or if it was, that it was checked “by a human being.”
Judge Starr lays down the law
Like other judges, Starr sets specific rules for his courtroom, and he recently added the Mandatory Certification Regarding Generative Artificial Intelligence.
This states that: “All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.”
A form for lawyers to sign is appended, noting that “quotations, citations, paraphrased assertions and legal analysis are all covered by this proscription.”
According to a report by TechCrunch, summarization is one of AI’s strong suits, and finding and summarizing precedents or previous cases is widely advertised as potentially helpful in legal work. As such, this ruling may throw a major spanner in the works for legal AI.
The certification requirement includes a pretty well-informed and convincing explanation of its necessity.
It states that: “These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument.
“But legal briefing is not one of them. Here’s why.
“These platforms in their current states are prone to hallucinations and bias,” reads part of the certification.
It further explains that, on hallucinations, AI is prone to simply making things up, even quotes and citations, while another issue relates to reliability and bias.
Chatbots don’t swear an oath
The certification further notes that although attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative AI is the product of programming devised by humans who did not have to swear such an oath.
In his affidavit, Schwartz said he was “unaware of the possibility that its (ChatGPT) content could be false.”
He added that he “greatly regrets” using generative AI and will never again do so without absolute verification of its authenticity, further claiming he had never used ChatGPT prior to this case.
The other side of ChatGPT
Launched last November, ChatGPT is a large language model developed by OpenAI. The AI-powered chatbot is trained on vast amounts of data from the internet and can perform a variety of tasks, such as generating text and translating languages.
Despite going viral and provoking a fierce AI race, ChatGPT has its downsides: it can hallucinate, and it misled Schwartz, who was representing Roberto Mata in a lawsuit against Colombian airline Avianca. In effect, the chatbot provided citations to cases that did not exist.
Yet when Schwartz asked ChatGPT whether one of the supposed cases was real, it responded, “yes, (it) is a real case.” When asked for sources, the chatbot told Schwartz the case could be found “on legal research databases such as Westlaw and LexisNexis.”
A lawyer used ChatGPT to do "legal research" and cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge 🤣 pic.twitter.com/AJSE7Ts7W7
— Daniel Feldman (@d_feldman) May 27, 2023
The matter came to light after the opposing counsel flagged the ChatGPT-generated citations as fake.
US District Court Judge Kevin Castel confirmed six of them as non-existent and demanded an explanation from Schwartz.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” wrote Judge Castel in a May 4 order.
AI
ChatGPT’s Bogus Citations Land US Lawyer in Hot Water
A lawyer in the United States is facing disciplinary action after his law firm used popular AI chatbot ChatGPT for legal research and cited fake cases in a lawsuit.
Steven A. Schwartz, who is representing Roberto Mata in a lawsuit against Colombian airline Avianca, admitted to using OpenAI’s ChatGPT for research purposes and said the AI model provided him with citations to cases that did not exist.
Mata is suing Avianca for a personal injury caused by a serving cart in 2019, claiming negligence by an employee.
Also read: Opera Unveils GPT-Powered AI Chatbot Aria
Bogus all the way
According to a BBC report, the matter came to light after Schwartz, a lawyer with 30 years’ experience, used these cases as precedent to support Mata’s case.
But the opposing counsel flagged the ChatGPT-generated citations as fake. US District Court Judge Kevin Castel confirmed six of them as non-existent and demanded an explanation from Schwartz, an attorney with New York-based law firm Levidow, Levidow & Oberman.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge Castel wrote in a May 4 order.
“The court is presented with an unprecedented circumstance.”
The supposed cases include Varghese v. China Southern Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines, none of which appeared to exist to either the judge or the defense.
Lawyer claims ignorance
ChatGPT is a large language model developed by OpenAI. Launched in November, the AI is trained on vast amounts of data from the internet and can perform a variety of tasks, such as generating text, translating languages, writing poetry, and even solving difficult math problems.
But ChatGPT is prone to “hallucinations” – tech industry speak for when AI chatbots produce false or misleading information, often with confidence.
In an affidavit last week, Schwartz said he was “unaware of the possibility that its [ChatGPT] content could be false.” Claiming to have never used ChatGPT prior to this case, he said he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”
A lawyer used ChatGPT to do "legal research" and cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge 🤣 pic.twitter.com/AJSE7Ts7W7
— Daniel Feldman (@d_feldman) May 27, 2023
The career attorney now faces a court hearing on June 8 after accepting responsibility for not confirming the authenticity of the ChatGPT sources. Schwartz was asked to show cause why he shouldn’t be sanctioned “for the use of a false and fraudulent notarization.”
ChatGPT’s confident lies
According to the BBC report, Schwartz’s affidavit contained screenshots confirming his chats with ChatGPT.
Schwartz asked the chatbot, “is varghese a real case?”, to which ChatGPT responded “yes, [it] is a real case.” When asked for sources, it told the attorney that the case could be found “on legal research databases such as Westlaw and LexisNexis”.
Again, the attorney asked: “Are the other cases you provided fake?” ChatGPT responded “No”, adding that the cases could be found on other legal databases. “I apologize for the confusion earlier,” ChatGPT said.
“Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused,” the chatbot replied with confidence.