AI March 30, 2023
Will Decentralized Digital ID Curb AI Threats in Web3 and Metaverse?
As the scale of web3 and the metaverse continues to grow, apps and services may have to contend with an influx of duplicate accounts created to steal user identities, defraud, and deceive. Experts believe many of these malicious actors will be AI-based.
But that may already be starting to change. In the last few months, web3 outfit Identity Labs launched NFID, a decentralized identity and login tool that does not require a password, allowing users to verify their identity by linking their phone number to their account.
The identity platform uses zero-knowledge (zk) cryptography, a technology that can prove the validity of data without revealing any other personally identifying information. NFID is built on Dfinity’s Internet Computer blockchain.
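NFID’s exact construction isn’t detailed here, but the core idea behind zk proofs can be illustrated with a classic building block. The following is a minimal sketch, not NFID’s actual scheme and not production cryptography: a toy Schnorr-style proof of knowledge, in which a prover demonstrates possession of a secret key without ever revealing it.

```python
# Toy, non-interactive Schnorr proof of knowledge (Fiat-Shamir heuristic).
# Illustrative only -- not NFID's construction and not production crypto.
import hashlib
import secrets

# Tiny safe-prime group: p = 2q + 1 with q prime; g = 4 generates the
# order-q subgroup. Real systems use large, standardized parameters.
q, p, g = 113, 227, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1   # secret key the user never discloses
    y = pow(g, x, p)                   # public key
    return x, y

def challenge(t, y):
    # Fiat-Shamir: derive the challenge by hashing the transcript.
    return int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % q

def prove(x, y):
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                   # commitment
    s = (r + challenge(t, y) * x) % q  # response; reveals nothing about x
    return t, s

def verify(y, t, s):
    # Accept iff g^s == t * y^c (mod p), which only a valid prover can satisfy.
    return pow(g, s, p) == (t * pow(y, challenge(t, y), p)) % p

x, y = keygen()
print(verify(y, *prove(x, y)))  # True: knowledge of x proved, x never revealed
```

The same “prove without disclosing” property, generalized to arbitrary statements, is what lets a zk-based identity system confirm facts about a user without exposing the underlying personal data.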
Digital identity for web3 and metaverse
According to Identity Labs founder Dan Ostrovsky, enabling what he calls unique “proof-of-humanity” may be key to rooting out AI adversaries and opportunists, guarding against the risk of fraud in web3 and the metaverse.
“By leveraging zero-knowledge cryptography, biometrics, and other verification methods to confirm a user’s identity, NFID ensures that a person is who they say they are while safeguarding user privacy,” Ostrovsky told MetaNews.
He described “proof of humanity” as a concept that proves that humans are who they say they are when interacting with applications in the digital realm. The idea is to prevent people, or non-humans as it were, from abusing internet systems through multiple accounts.
0/ We're often asked why the @IdentityMaxis are building #NFID on @dfinity's #ICP, especially in this tremendous bear market. Here's how I respond 🧵👇
— Dan Ostrovsky ₿ Ξ ∞ (@danostrovsky) November 14, 2022
Digital identity is the cornerstone of web3 and the metaverse, according to Ostrovsky, as it enables trust and security in decentralized systems. In web3, digital identities will be used to govern interactions between users and the metaverse, as well as financial transactions.
Digital identities can take two forms. The first is a digital version of an official physical ID document, like a passport, stored on a mobile crypto wallet. The other is a credential for accessing online services such as DeFi apps, NFT marketplaces, and other web3 services.
In both cases, digital identities are used to verify the identity of the user and ensure they have the necessary permissions to access certain services or perform certain actions. But the rise of AI poses a significant threat to web3 and metaverse activities.
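As a rough illustration of that verification step, here is a hedged sketch; the names and flow are hypothetical, not any real wallet or dApp API, and it uses a shared-secret HMAC for brevity where production systems would use public-key signatures. A service grants access only if a presented credential carries a valid issuer signature and the permission being exercised.

```python
# Hypothetical credential check -- a simplification, not a real web3 API.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # stand-in for the issuer's signing key

def issue_credential(subject, permissions):
    claims = {"sub": subject, "perms": permissions}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_and_authorize(credential, required_perm):
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["sig"]):
        return False  # forged or tampered credential
    return required_perm in credential["claims"]["perms"]

cred = issue_credential("alice", ["nft:list", "defi:swap"])
print(verify_and_authorize(cred, "defi:swap"))  # True
print(verify_and_authorize(cred, "dao:vote"))   # False: permission not granted
```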
AI security risks
As AI becomes more advanced, it will become increasingly difficult to distinguish between real and fake identities, according to experts. AI has the potential to undermine the security and privacy of digital identities. As one example, it can be used to create deepfakes: realistic but fabricated images, videos, or voice clips used to impersonate someone else.
Deepfakes can be deployed to create false digital identities, something cybercriminals could leverage to commit fraud or other malicious activities. AI can also be utilized to analyze large amounts of data to identify patterns and vulnerabilities in digital ID systems, which can be exploited by hackers.
To combat this threat, Ostrovsky suggests developing new technologies that can detect and prevent the use of fake identities. This could include the use of biometric data, such as facial recognition or fingerprint scanning, to verify the identity of users.
“The ubiquity of digital avatars in the coming metaverse will likely result in an uptick in fraud and phishing attacks,” he told MetaNews.
This may already be common practice on social platforms like Twitter, he said, adding:
“The ability to easily imitate these avatars could catch many off guard, tricking them into thinking they’re interacting with a friend when they’re actually conversing with a fraudster harvesting details to pull off social engineering scams.”
Ostrovsky emphasized the importance of privacy in digital identity.
“Users need to have control over their own data and be able to decide who has access to it,” he said.
This means that digital ID systems need to be designed with privacy in mind, and that users should be able to revoke access to their data at any time.
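What revocation might look like in code, as a minimal sketch built around a hypothetical consent registry rather than NFID’s actual design:

```python
# Hypothetical consent registry -- a sketch, not NFID's implementation.
class ConsentRegistry:
    def __init__(self):
        self._grants = {}  # user -> set of services allowed to read their data

    def grant(self, user, service):
        self._grants.setdefault(user, set()).add(service)

    def revoke(self, user, service):
        # Takes effect immediately; all later access checks fail.
        self._grants.get(user, set()).discard(service)

    def has_access(self, user, service):
        return service in self._grants.get(user, set())

registry = ConsentRegistry()
registry.grant("alice", "defi-app")
assert registry.has_access("alice", "defi-app")
registry.revoke("alice", "defi-app")
assert not registry.has_access("alice", "defi-app")  # access is gone
```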
Privacy threat
Put simply, web3 is a decentralized internet powered by blockchain and token-based economics. Non-fungible crypto tokens are expected to play a key role in web3 as a medium of exchange.
Experts are concerned about the risk of fraud and a lack of safeguards on the biometric data collected for creating digital identities in web3. Big tech has in the past failed to secure personal information, and there are concerns about private firms collecting biometric data in a crypto industry that largely operates beyond regulatory oversight.
Comparisons have inevitably been drawn with Worldcoin’s ill-fated plan to scan the irises of one billion people in exchange for free money. The project was roundly criticized as a disaster waiting to happen for harvesting people’s biometrics in an unethical way.
Also read: Goldman Sachs Report Warns AI Could Impact 300 Million Jobs
Identity Labs’ Dan Ostrovsky believes that digital identities, as epitomized by his company’s NFID system, will play a crucial role in the development of web3 and largely address such concerns.
As the internet becomes more decentralized and open, these IDs will be used to ensure trust and security in decentralized systems.
Last year, layer two protocol Polygon also launched its Polygon ID, a decentralized and private identity service for web3 and DeFi. Polygon hopes the ID will be a useful tool in determining credit scores in web3 and decentralized finance lending.
AI
Judge Orders All AI-Generated Research To Be Declared in Court
A Texas federal judge has ordered that AI-generated content may not be used to make arguments in court unless it is declared and verified by a human.
Judge Brantley Starr’s ruling comes after one attorney, Steven Schwartz, last week allowed OpenAI’s ChatGPT to “supplement” his legal research by providing him with six cases and relevant precedent. All six cases turned out to be nonexistent, completely “hallucinated” by the chatbot.
Also read: ChatGPT’s Bogus Citations Land US Lawyer in Hot Water
The debacle received wide coverage, leaving Schwartz with “regrets.” Other lawyers who may have been contemplating trying the stunt now have to think twice, as Judge Starr has put an end to it.
Judge Starr also added a requirement that any attorney who appears in his courtroom declare that “no portion of the filing was drafted by generative artificial intelligence,” or if it was, that it was checked “by a human being.”
Judge Starr lays down the law
The eminent judge has set specific rules for his courtroom, just like other judges, and recently added the Mandatory Certification Regarding Generative Artificial Intelligence.
This states that: “All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.”
A form for lawyers to sign is appended, noting that “quotations, citations, paraphrased assertions and legal analysis are all covered by this proscription.”
According to a report by TechCrunch, summarization is one of AI’s strong suits, and finding and summarizing precedent or previous cases is advertised as potentially helpful in legal work. As such, this ruling may throw a major spanner in the works for AI.
The certification requirement includes a pretty well-informed and convincing explanation of its necessity.
It states that: “These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument.
“But legal briefing is not one of them. Here’s why.
“These platforms in their current states are prone to hallucinations and bias,” reads part of the certification.
It further explains that, on hallucinations, AI is prone to simply making things up, even quotes and citations, while a separate issue relates to reliability and bias.
Chatbots don’t swear an oath
The certification further notes that although attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative AI is the product of programming devised by humans who never had to swear such an oath.
Schwartz, for his part, said in an affidavit that he was “unaware of the possibility that its (ChatGPT) content could be false.”
He added that he “greatly regrets” using the generative AI and will in future only use it to “supplement” his work with absolute caution and validation, further claiming he had never used ChatGPT prior to this case.
The other side of ChatGPT
Launched last November, ChatGPT is a large language model developed by OpenAI. The AI-powered chatbot is trained on vast amounts of data from the internet and can perform a variety of tasks such as generating text and translating languages.
Despite going viral and provoking a fierce AI race, ChatGPT has its downsides: it can hallucinate, and it misled Schwartz, who was representing Roberto Mata in a lawsuit against Colombian airline Avianca. In effect, the chatbot provided citations to cases that did not exist.
Yet when Schwartz asked ChatGPT if one of the supposed cases was a real case, it responded “yes, (it) is a real case.” When asked for sources, the chatbot told Schwartz the case could be found “on legal research databases such as Westlaw and LexisNexis.”
A lawyer used ChatGPT to do "legal research" and cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge 🤣 pic.twitter.com/AJSE7Ts7W7
— Daniel Feldman (@d_feldman) May 27, 2023
The matter came to light after the opposing counsel flagged the ChatGPT-generated citations as fake.
US District Court Judge Kevin Castel confirmed six of them as non-existent and demanded an explanation from Schwartz.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” wrote Judge Castel in a May 4 order.
AI
Nvidia Debuts AI Tools in an Era Where “Anyone Can Be a Programmer”
The world’s most valuable chipmaker, Nvidia, has unveiled a new batch of AI-centric products as the company rides the generative AI wave, in which anyone can be a programmer.
Nvidia announced a new supercomputer and a networking system, while the company also aims to make video game characters more realistic.
The wide range of products spans robotics design, gaming capabilities, advertising services, and networking technology, all unveiled by CEO Jensen Huang during a two-hour presentation in Taiwan on Monday.
Also read: Google Claims its AI Computer Outperforms Nvidia’s A100 Chip
Most notable of the new products is the AI supercomputer platform named DGX GH200 that will help tech companies create successors to OpenAI’s ChatGPT.
According to the company, the new DGX GH200 supercomputers combine 256 GH200 superchips that can act as a single graphics processing unit (GPU). The result is a system that boasts nearly 500 times the memory of a single Nvidia DGX A100 system.
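A back-of-envelope check makes the multiplier plausible. The per-unit figures below are Nvidia’s published specifications rather than numbers from this article, so treat them as assumptions: roughly 576 GB of combined memory per GH200 superchip versus the 320 GB of the 2020-era DGX A100.

```python
# Rough check of the "nearly 500x" memory claim (figures are assumptions
# drawn from Nvidia's published specs, not from this article).
gh200_memory_gb = 576        # combined LPDDR5X + HBM3 per GH200 superchip
superchips = 256
dgx_a100_memory_gb = 320     # 2020-era DGX A100 configuration

total_gb = gh200_memory_gb * superchips   # 147,456 GB, about 144 TB
print(total_gb / dgx_a100_memory_gb)      # ~460 -- "nearly 500 times"
```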
“Generative AI, large language models, and recommender systems are the digital engines of the modern economy,” said Huang.
“DGX GH200 AI supercomputers integrate Nvidia’s most advanced accelerated computing and networking technologies to expand the frontier of AI.”
Microsoft Corp., Meta Platforms Inc., and Alphabet’s Google are expected to be among the first users, according to Nvidia.
The DGX GH200 supercomputers are expected to be available by the end of 2023.
The GH200 superchips that power the new supercomputer combine Nvidia’s Arm-based Grace CPU and an Nvidia H100 Tensor Core GPU in a single package.
The chipmaker also revealed that it is building its own supercomputer, running four DGX GH200 systems at the same time, to power its own research.
Nvidia also released ACE, a generative AI model for video games. It enables gaming companies to populate large games with non-player characters that have unique lines of dialogue and ways of interacting with players, behavior that would normally need to be programmed individually.
Easy ad content
Alongside the hardware announcement, the company said it has partnered with advertising giant WPP to create a content engine that uses its Omniverse technology and generative AI capabilities to help build out ad content.
The move is intended to cut down the time and cost of producing ads by enabling WPP’s clients to lean on Nvidia’s technology.
Electronics manufacturers such as Foxconn, Pegatron, and Wistron are using Omniverse technology to create digital twins of their factory floors, so they can get a sense of how best to lay them out before making any physical changes.
A new computing era
Presenting at the forum, Huang acknowledged that advancements in AI are ushering in a new era in computing. He said anyone can now be a programmer simply by speaking to the computer.
According to the Nvidia boss, gone are the days when programmers would write lines of code only for the build to fail to compile because of a missing semicolon.
“This computer doesn’t care how you program it, it will try to understand what you mean, because it has this incredible large language model capability. And so the programming barrier is incredibly low,” said Huang.
“We have closed the digital divide. Everyone is a programmer. Now, you just have to say something to the computer,” he added.
Huang said his company has managed to bridge the digital gap, and the tech giant will continue to capitalize on the AI frenzy that has made Nvidia one of the world’s most valuable chipmakers.
Nvidia’s stock price is rising
Nvidia’s major announcements came as shares of the tech giant jumped last week on news that the company anticipated second quarter revenue above Wall Street’s expectations, based on the strength of its data center business.
The company hit the $1 trillion market cap milestone just before US markets opened on Tuesday. Its shares are trading at $407 in the pre-market, nearly 5% up from Monday.
Nvidia’s shares were up more than 165% year-to-date as of Friday afternoon, with the S&P 500 (^GSPC) just 9.5% higher in the same frame.
Rival chip maker AMD has experienced a similar boost in share price, rising 93%. However, Intel (INTC) is lagging behind with shares up just 8%.
According to Yahoo Finance tech editor Daniel Howley, while analysts see Nvidia well ahead of its chip rivals in the AI processing space, how long that remains the case is anyone’s guess.
AI
ChatGPT’s Bogus Citations Land US Lawyer in Hot Water
A lawyer in the United States is facing disciplinary action after his law firm used popular AI chatbot ChatGPT for legal research and cited fake cases in a lawsuit.
Steven A. Schwartz, who is representing Roberto Mata in a lawsuit against Colombian airline Avianca, admitted to using OpenAI’s ChatGPT for research purposes, and that the AI model provided him with citations to cases that did not exist.
Mata is suing Avianca for a personal injury caused by a serving cart in 2019, claiming negligence by an employee.
Also read: Opera Unveils GPT-Powered AI Chatbot Aria
Bogus all the way
According to a BBC report, the matter came to light after Schwartz, a lawyer with 30 years’ experience, used these cases as precedent to support Mata’s case.
But the opposing counsel flagged the ChatGPT-generated citations as fake. US District Court Judge Kevin Castel confirmed six of them as non-existent. He demanded an explanation from Schwartz, an attorney with New York-based law company Levidow, Levidow & Oberman.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge Castel wrote in a May 4 order.
“The court is presented with an unprecedented circumstance.”
The supposed cases include: Varghese v. China Southern Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines, none of which appeared to exist to either the judge or the defense.
Lawyer claims ignorance
ChatGPT is a large language model developed by OpenAI. Launched in November, the AI is trained on vast amounts of data from the internet and can perform a variety of tasks, such as generating text, translating languages, writing poetry, and even solving difficult math problems.
But ChatGPT is prone to “hallucinations” – tech industry speak for when AI chatbots produce false or misleading information, often with confidence.
In an affidavit last week, Schwartz said he was “unaware of the possibility that its [ChatGPT] content could be false.” He also said that he “greatly regrets” using the generative AI and will in future only use it to “supplement” his work with absolute caution and validation.
Schwartz claimed to have never used ChatGPT prior to this case. He said he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”
A lawyer used ChatGPT to do "legal research" and cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge 🤣 pic.twitter.com/AJSE7Ts7W7
— Daniel Feldman (@d_feldman) May 27, 2023
The career attorney now faces a court hearing on June 8 after accepting responsibility for not confirming the authenticity of the ChatGPT sources. Schwartz was asked to show cause why he shouldn’t be sanctioned “for the use of a false and fraudulent notarization.”
ChatGPT’s confident lies
According to the BBC report, Schwartz’s affidavit contained screenshots confirming the attorney’s exchanges with ChatGPT.
Schwartz asked the chatbot, “is varghese a real case?”, to which ChatGPT responded “yes, [it] is a real case.” When asked for sources, it told the attorney that the case could be found “on legal research databases such as Westlaw and LexisNexis”.
Again, the attorney asked: “Are the other cases you provided fake?” ChatGPT responded “No”, adding that the cases could be found on other legal databases. “I apologize for the confusion earlier,” ChatGPT said.
“Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused,” the chatbot replied with confidence.