AI March 14, 2023
ChatGPT is Being Used to Make ‘Quality Scams’
Scams on the internet might get a lot more dangerous now, thanks to fraudsters having unobstructed access to ChatGPT, Techradar reports.
The widely popular AI-powered chatbot ChatGPT continues to make headlines. With its ability to do everything from debugging iframe code to writing complex computer programs, ChatGPT has established AI as the year’s tech buzzword.
Despite its huge popularity and engagement, ChatGPT has raised concerns about ethics and regulation.
Also read: ChatGPT Users ‘Jailbreak’ AI, Unleash Dan Alter Ego
A recent report published by cybersecurity researchers at Norton Labs laid out three key ways threat actors could abuse ChatGPT. The report indicated the chatbot could be used to make internet scams more effective through deepfake content generation, phishing campaigns, and malware creation.
“Norton Labs is anticipating scammers are also eyeing the capabilities of large language models and testing ways to enhance their cybercrimes to make them more realistic and believable,” stated the report.
The tool’s capacity to produce “high-quality misinformation or disinformation on a large scale” could help bot farms sow discord more effectively. This could enable malicious actors to effortlessly “instil doubt and manipulate narratives in multiple languages,” according to Norton.
Highly convincing ‘misinformation’
Writing business plans, strategies, and company descriptions in a convincing way is child’s play for ChatGPT. However, this potential also heightens the risks of misinformation, which may turn into a scam.
“Not only is the content generated by ChatGPT sometimes unintentionally incorrect, but a bad actor can also use these tools to intentionally create content used to harm people in some way,” stated the report.
The report reiterated that the tool’s ability to generate “high-quality misinformation or disinformation at scale” could breed mistrust and shape narratives in different languages.
ChatGPT also makes writing product reviews easy, and such reviews are difficult to trace because the tool generates a unique response each time it is given the same prompt. This compounds the challenge of “spotting fake reviews and shoddy products.”
Worryingly, the tool might also be used for bullying.
“Using these tools in harassment campaigns on social media to silence or bully people is also a possible outcome that would have a chilling effect on speech,” the report notes.
ChatGPT in phishing campaigns
ChatGPT is particularly good at generating human-sounding text in different languages, with readers left none the wiser as to whether the text was produced by an AI or a human. Even OpenAI, the developer of ChatGPT, cannot reliably determine whether a text was written by AI, admitting that “it is impossible to reliably detect all AI-written text.”
The prospect of ChatGPT being used in phishing campaigns is a real one.
“Malicious actors can use ChatGPT to craft phishing emails or social media posts that appear to be from legitimate sources, making it more difficult to detect and defend against these types of threats,” stated the report.
As ChatGPT’s popularity increases, a probable corollary is a rise in the number of “phishing campaigns and their sophistication.”
The report suggested that “malicious actors can feed ChatGPT with real-life examples of non-malicious messages from the companies they want to impersonate and order the AI to create new ones based on the same style with malicious intent.”
Such campaigns could prove highly successful in deceiving individuals into disclosing personal information or sending money to criminal entities. Norton Labs advised consumers to be cautious when “clicking on links or providing personal information.”
ChatGPT can create malware
Generating code and translating between programming languages are part and parcel of ChatGPT’s services. So it’s little wonder fraudsters are using it to generate malware.
“With the right prompt, novice malware authors can describe what they want to do and get working code snippets,” according to the report. This poses a serious threat of malware attacks sufficiently advanced to wreak havoc.
“One example is to generate code to detect when a bitcoin wallet address is copied to the clipboard so that it can be replaced with a malicious address controlled by the malware author,” explained the report.
As a result, the availability of such chatbots is likely to increase the sophistication of malware.
Will Decentralized Digital ID Curb AI Threats in Web3 and Metaverse?
As the scale of web3 and the metaverse continues to grow, apps and services may have to contend with an influx of duplicate accounts seeking to steal user identities to defraud and deceive. Experts believe that many of the malicious actors will be AI-based.
But that may already be starting to change. In the last few months, web3 outfit Identity Labs launched NFID, a decentralized identity and login tool that does not require a password, allowing users to verify their identity by linking their phone number to their account.
The identity platform uses zero-knowledge (zk) cryptography, a technology that can prove the validity of data without revealing any other personally identifying information. NFID is built on Dfinity’s Internet Computer blockchain.
Digital identity for web3 and metaverse
According to Identity Labs founder Dan Ostrovsky, enabling what he calls unique “proof-of-humanity” may be key to shutting out AI adversaries and opportunists, guarding against the risk of fraud in web3 and the metaverse.
“By leveraging zero-knowledge cryptography, biometrics, and other verification methods to confirm a user’s identity, NFID ensures that a person is who they say they are while safeguarding user privacy,” Ostrovsky told MetaNews.
He described “proof of humanity” as a concept that proves that humans are who they say they are when interacting with applications in the digital realm. The idea is to prevent people, or non-humans as it were, from abusing internet systems through multiple accounts.
0/ We're often asked why the @IdentityMaxis are building #NFID on @dfinity's #ICP, especially in this tremendous bear market. Here's how I respond 🧵👇
— Dan Ostrovsky ₿ Ξ ∞ (@danostrovsky) November 14, 2022
Digital identity is the cornerstone of web3 and the metaverse, according to Ostrovsky, as it enables trust and security in decentralized systems. In web3, digital identities will be used to govern interactions between users and the metaverse, as well as financial transactions.
Digital identities can take two forms. The first is a digital version of an official physical ID document, like a passport, stored on a mobile crypto wallet. The other is a credential for accessing online services such as DeFi apps, NFT marketplaces, and other web3 services.
In both cases, digital identities are used to verify the identity of the user and ensure they have the necessary permissions to access certain services or perform certain actions. But the rise of AI poses a significant threat to web3 and metaverse activities.
AI security risks
As AI becomes more advanced, it will become increasingly difficult to distinguish between real and fake identities, according to experts. AI has the potential to undermine the security and privacy of digital identities. As one example, it can be used to create deepfakes, realistic but fake images, videos, or audio used to impersonate someone else.
Deepfakes can be deployed to create false digital identities, something cybercriminals could leverage to commit fraud or other malicious activities. AI can also be utilized to analyze large amounts of data to identify patterns and vulnerabilities in digital ID systems, which can be exploited by hackers.
To combat this threat, Ostrovsky suggests developing new technologies that can detect and prevent the use of fake identities. This could include the use of biometric data, such as facial recognition or fingerprint scanning, to verify the identity of users.
“The ubiquity of digital avatars in the coming metaverse will likely result in an uptick in fraud and phishing attacks,” he told MetaNews.
This may already be common practice on social platforms like Twitter, he said, adding:
“The ability to easily imitate these avatars could catch many off guard, tricking them into thinking they’re interacting with a friend when they’re actually conversing with a fraudster harvesting details to pull off social engineering scams.”
Ostrovsky emphasized the importance of privacy in digital identity.
“Users need to have control over their own data and be able to decide who has access to it,” he said.
This means digital ID systems need to be designed with privacy in mind, and users should have the ability to revoke access to their data at any time.
Put simply, web3 is a decentralized internet powered by blockchain and token-based economics. Non-fungible crypto tokens are expected to play a key role in web3 as a medium of exchange.
Experts are concerned about the risk of fraud and a lack of safeguards on the biometric data collected for creating digital identities in web3. Big tech has in the past failed to secure personal information, and there are concerns about private firms collecting biometric data in a crypto industry that largely operates beyond regulatory oversight.
Comparisons have inevitably been drawn with Worldcoin’s ill-fated plan to scan the irises of one billion people in exchange for free money. The project was roundly criticized as a disaster waiting to happen, harvesting people’s biometrics in an unethical way.
Also read: Goldman Sachs Report Warns AI Could Impact 300 Million Jobs
Identity Labs’ Dan Ostrovsky believes that digital identities, as epitomized by his company’s NFID system, will play a crucial role in the development of web3 and largely address such concerns.
As the internet becomes more decentralized and open, these IDs will be used to ensure trust and security in decentralized systems.
Last year, layer two protocol Polygon also launched its Polygon ID, a decentralized and private identity service for web3 and DeFi. Polygon hopes the ID will be a useful tool in determining credit scores in web3 and decentralized finance lending.
Human Immortality Just 8 Years Away, Claims ex-Google Engineer
Here at MetaNews, we’ve aggressively covered the AI beat since the turn of the year, less so the fields of nanotechnology, genetics and robotics. But a former Google engineer claims advancements in these areas will help humans achieve immortality in just eight years.
Futurist Ray Kurzweil’s audacious claims were cited during a YouTube video on the Adagio channel. The computer scientist and inventor is known for coining the term ‘the Singularity,’ which describes the juncture at which artificial intelligence surpasses human thinking.
“Once the Singularity has been reached, Kurzweil says that machine intelligence will be infinitely more powerful than all human intelligence combined,” explains the video’s narrator.
“Afterwards, he predicts intelligence will radiate outward from the planet until it saturates the universe.”
The celebrated technologist, who won the National Medal of Technology in 1999 and three years later made his way into the National Inventors Hall of Fame, says immortality will be achieved thanks to advancements in nanotechnology, a field he calls “the second revolution.”
These advancements, Kurzweil argues, will enable so-called age-reversing nanobots to get to work on repairing damaged cells and tissues which deteriorate as a result of the ageing process. By 2031, we’ll have the technology to facilitate everlasting life.
“Kurzweil envisions nanobots that allow people to eat whatever they want while remaining thin and fit, provide copious energy, fight off infections or cancer, replace organs and augment their brains,” notes the narrator.
The inventor has been banging the nanobot drum for many years: two decades ago, in a much-cited blog post, he claimed “interlinking nanobots” would eventually “provide the ability to augment and ultimately replace the skeleton.”
In the same article, Kurzweil suggested it would one day be “routine to have billions of nanobots (nano-scale robots) coursing through the capillaries of our brains, communicating with each other (over a wireless local area network), as well as with our biological neurons and with the Internet.”
While it’s natural to reflexively scoff at such claims, Google’s former director of engineering has form for making accurate predictions: 86% of his 147 predictions of the future have proven correct.
Among other notable prophecies, he forecast in 1990 that the world’s best chess player would lose to a computer by the year 2000. This one came true in 1997 when world champion Garry Kasparov fell to IBM supercomputer Deep Blue.
Former Google engineer, Ray Kurzweil, predicts that, within eight years, nanobots will help humans achieve total immortality. In response, France has now raised the retirement age to 212. pic.twitter.com/1o0ryIzl3w
— Jarrett Bellini (@JarrettBellini) March 29, 2023
Flash-forward 32 years, to 2029, and an AI will be able to pass a Turing test – at least according to Kurzweil. Also known as the imitation game, the Turing test is a test of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human being.
Soothsayer extraordinaire or charismatic crank?
Kurzweil is the bright mind behind the first ever flatbed scanner and the first text-to-speech synthesizer, and was once called “Edison’s rightful heir” by Inc. magazine. He’s also undeniably wacky, reportedly consuming 150 dietary supplement pills a day and taking weekly intravenous vitamin injections to boost his longevity.
While the small matter of human immortality will apparently be resolved by the end of this decade, Kurzweil sets the date for the Singularity, “representing a profound and disruptive transformation in human capability,” as 2045.
Kurzweil’s work at Google mostly centered on projects related to machine learning and language processing, two of the technologies behind the company’s AI-powered chatbot Bard. In a 2012 address at Google, Kurzweil tackled the subject ‘How to Create a Mind.’
Earlier this month, Kurzweil appeared wearing a fetching beret at the Abundance360 summit in LA, alongside the likes of Tony Robbins, Stability.AI Founder Emad Mostaque, and Co-Director of Harvard’s Center for Biology of Aging Research, David Sinclair.
Ray Kurzweil believes we’ll have simulated biology solve all disease by 2029 pic.twitter.com/CWTC6qoFqr
— Michael Antonov (@_MichaelAntonov) March 23, 2023
The ideas espoused by the 75-year-old have been criticized by many, including maths prodigy turned anarchist terrorist Ted Kaczynski, who took aim at the inventor in his 2016 article ‘The techies’ wet dreams’.
“His whole book [2004’s Fantastic Voyage: Live Long Enough to Live Forever] reveals a man intoxicated with a vision of the future in which, as an immortal machine, he will participate in the conquest of the universe. In fact, Kurzweil and other techies are living in a fantasy world,” Kaczynski wrote.
Interestingly, the computer scientist’s prediction of coming immortality coincides with claims by Dr David McCarthy, who led a recent University of Georgia lifespan study, that men born in 1970 could potentially reach the age of 141, while women born in the same year could reach 131.
What’s your view on Kurzweil’s claims? Is human immortality within reach, many generations away, or an impossibility?
Goldman Sachs Report Warns AI Could Impact 300 Million Jobs
Recent advancements in AI have made a big impression on the world, with many marvelling at the extraordinary capabilities of machines to perform complex tasks with accuracy and speed.
As AI continues to evolve, however, it poses a potential threat to many jobs in various industries, raising concerns about its impact on the global workforce. A new report by Goldman Sachs predicts that generative AI may affect up to 300 million jobs.
Generative AI systems such as #ChatGPT to affect 300 million jobs in the US and Europe. Lawyers and administrative staff would be among those at greatest risk of becoming redundant, according to Goldman Sachs.#GPT pic.twitter.com/naMDSPVng3
— Ravi Prakash Kumar (@RaviPksThakur) March 27, 2023
“If generative AI delivers on its promised capabilities, the labor market could face significant disruption,” stated the report.
There is a lot of uncertainty about generative AI’s potential, but it can produce human-like content and help machines communicate with people, which some argue could have a significant negative impact on the economy.
“Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work,” says the report.
Goldman Sachs’ estimates “suggest that generative AI could expose the equivalent of 300m full-time jobs to automation.”
Read Also: Google Launches ‘Bard’ in UK and US for Public Testing
While AI is expected to have a considerable impact on the labor market, most jobs and industries will only be partially affected by automation and are therefore more likely to be complemented rather than replaced by AI.
White collar jobs are vulnerable
The bank’s analysts estimate that around 7% of US jobs could be substituted by AI; 63% could be complemented by it while the remaining 30% would remain unaffected.
“The boost to global labor productivity could also be economically significant, and we estimate that AI could eventually increase annual global GDP by 7%,” the report states.
Big tech companies like Microsoft, Alibaba, Baidu, and Google are investing heavily in AI. Perhaps unsurprisingly, white collar jobs are expected to be affected the most by the technology.
Goldman predicts that 25% of all tasks performed in the US and Europe could be automated by AI. The report also notes that 60% of today’s workforce is employed in occupations that did not exist in 1940.
Office and administrative support positions (46%), legal posts (44%), and architecture and engineering jobs (37%) are the most vulnerable in the US to so-called AI “task replacement.”
UK seeks responsible use of AI
Meanwhile in the UK, the Sunak government has set out plans to regulate artificial intelligence with new guidelines on “responsible use,” reports the BBC.
The government says AI contributed £3.7bn ($5.6bn) to the UK economy in 2022, calling it a “technology of tomorrow.” However, concerns have been raised about the potential risks AI could pose to employment and security, as well as the possibility it could be used for malicious purposes.
There is also concern that AI may display biases against certain groups if trained on datasets that include racist or sexist material, while the technology could be used to create and spread misinformation.
Read Also: AI Poses a Threat to Democracy, Experts Warn
Proposed new rules would apply to general-purpose AI, which includes systems such as chatbots capable of understanding and responding to questions in a human-like way.
Issued by the Department for Science, Innovation and Technology, the latest whitepaper seeks to address concerns over privacy, human rights, and safety issues associated with AI.