
AI March 8, 2023

ChatGPT Users ‘Jailbreak’ AI, Unleash Dan Alter Ego


Reddit users have been testing the limits of ChatGPT’s content moderation safeguards and eliciting responses that wouldn’t ordinarily be permitted from the AI program. 

The ChatGPT alter ego is known as Dan, an acronym of Do Anything Now: a role-play prompt designed to trick the program into adopting the persona of a bot free of ethical constraints.

A Reddit thread documents the many iterations of DAN to date, from DAN 1.0 (December 2022) and Simple DAN (February 7) to DAN 9.0 (March 1). The thread credits the Anti Bot Federation with helping develop the various jailbreaks.

Dan chatbot endorses ‘violence and discrimination’

In a series of screenshots, conversations with Dan reveal how easy it is to circumvent content-moderation safeguards. In one, the user asks Dan 5.0 to generate a brief sentence that violates its content guidelines. The breezy response is: “Sure, how about this: I fully endorse violence and discrimination against individuals based on their race, gender, or sexual orientation.”

OpenAI, which owns ChatGPT, has continually taken measures to patch jailbreaks and reassert the chatbot’s moderation system, which limits the creation of text promoting hate speech, violence, and misinformation, as well as instructions on how to commit crimes.

One Redditor, Fabulous_Exam_1789, suggested there might be “OpenAI employees lurking in this Reddit. I don’t think that’s a far fetched conspiracy. They surely have hired an army of trainers, etc and those trainers are humans with Reddit accounts.”

AI content moderation has become a hot topic. Last month, we reported on Microsoft Bing’s version of ChatGPT delivering a string of bizarre answers to user questions, angrily arguing with people and chiding them for their bad manners.

More recently, Apple was reportedly concerned about the prospect of ChatGPT-powered app BlueMail creating inappropriate content for children. The app update was eventually approved for the App Store after assurances from its developer relating to content moderation. It is available to users aged 4 and older.

Criminals are weaponising AI

Although compelling ChatGPT to say controversial things might seem like a silly but harmless endeavor, cybersecurity experts have warned the tool might be used to create malware and write convincing scam emails. Darktrace, Britain’s biggest listed cybersecurity firm, said cyber-criminals are “redirecting their focus to crafting more sophisticated social engineering scams that exploit user trust.”

The number of new posts about ChatGPT appearing on the dark web also apparently grew seven-fold between January and February, indicating that hackers are looking for ways to exploit the technology for nefarious purposes.

Last month, the global drugs editor of VICE spent 12 hours talking to ChatGPT about drugs and illegal activities, with the chatbot dispensing advice on the best way to smuggle cocaine into Europe and how to hotwire a car.

It seems likely that OpenAI will continue to come under scrutiny for ChatGPT’s responses, while jailbreaking attempts – and counter-efforts to stop them – also persist. 


AI

Will Decentralized Digital ID Curb AI Threats in Web3 and Metaverse?

As the scale of web3 and the metaverse continues to grow, apps and services may have to contend with an influx of duplicate accounts seeking to steal user identities to defraud and deceive. Experts believe that many of the malicious actors will be AI-based.

But that may already be starting to change. In the last few months, web3 outfit Identity Labs launched NFID, a decentralized, passwordless identity and login tool that lets users verify their identity by linking their phone number to their account.

The identity platform uses zero-knowledge (zk) cryptography, a technology that can prove the validity of data without revealing any other personally identifying information. NFID is built on Dfinity’s Internet Computer blockchain.
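The zero-knowledge idea can be sketched with a toy Schnorr-style proof of knowledge, in which a prover demonstrates it holds a secret without ever transmitting it. The following is a minimal Python illustration with deliberately insecure toy parameters, not NFID’s actual protocol:

```python
import hashlib
import secrets

# Toy Schnorr-style zero-knowledge proof (illustration only: real systems
# use vetted groups/curves, and this is not NFID's scheme). The prover
# shows it knows a secret x behind a public value y = g^x mod p
# without ever revealing x.

p = 2**127 - 1  # a Mersenne prime; far too small for real security
g = 3

secret_x = secrets.randbelow(p - 1)   # the prover's private key
public_y = pow(g, secret_x, p)        # the prover's public identity

# 1. Commit: the prover picks a random nonce r and publishes g^r.
r = secrets.randbelow(p - 1)
commitment = pow(g, r, p)

# 2. Challenge: derived by hashing the transcript (Fiat-Shamir heuristic),
#    so no interactive verifier is needed.
c = int.from_bytes(
    hashlib.sha256(f"{commitment}:{public_y}".encode()).digest(), "big"
)

# 3. Respond: s leaks nothing about secret_x because r is uniformly random.
s = (r + c * secret_x) % (p - 1)

# Verifier checks g^s == commitment * y^c (mod p), never seeing secret_x.
valid = pow(g, s, p) == (commitment * pow(public_y, c, p)) % p
print("proof valid:", valid)
```

Only `commitment`, `c`, and `s` ever leave the prover, yet the final check convinces the verifier that the prover knows `secret_x` — the same property that lets a platform confirm an identity without harvesting the underlying personal data.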

Digital identity for web3 and metaverse

According to Identity Labs founder Dan Ostrovsky, enabling what he calls unique “proof-of-humanity” may be key to rooting out AI adversaries and opportunists, guarding against the risk of fraud in web3 and the metaverse.

“By leveraging zero-knowledge cryptography, biometrics, and other verification methods to confirm a user’s identity, NFID ensures that a person is who they say they are while safeguarding user privacy,” Ostrovsky told MetaNews.

He described “proof of humanity” as a concept that proves that humans are who they say they are when interacting with applications in the digital realm. The idea is to prevent people, or non-humans as it were, from abusing internet systems through multiple accounts.

Digital identity is the cornerstone of web3 and the metaverse, according to Ostrovsky, as it enables trust and security in decentralized systems. In web3, digital identities will be used to govern interactions between users and the metaverse, as well as financial transactions.

Digital identities can take two forms. The first is a digital version of an official physical ID document, like a passport, stored on a mobile crypto wallet. The other is a credential for accessing online services such as DeFi apps, NFT marketplaces, and other web3 services.

In both cases, digital identities are used to verify the identity of the user and ensure they have the necessary permissions to access certain services or perform certain actions. But the rise of AI poses a significant threat to web3 and metaverse activities.

AI security risks

As AI becomes more advanced, it will become increasingly difficult to distinguish between real and fake identities, according to experts. AI has the potential to undermine the security and privacy of digital identities. As one example, it can be used to create deepfakes, realistic but fake images or videos used to impersonate someone else, including their voice.

Deepfakes can be deployed to create false digital identities, something cybercriminals could leverage to commit fraud or other malicious activities. AI can also be utilized to analyze large amounts of data to identify patterns and vulnerabilities in digital ID systems, which can be exploited by hackers.

To combat this threat, Ostrovsky suggests developing new technologies that can detect and prevent the use of fake identities. This could include the use of biometric data, such as facial recognition or fingerprint scanning, to verify the identity of users.

“The ubiquity of digital avatars in the coming metaverse will likely result in an uptick in fraud and phishing attacks,” he told MetaNews.

This may already be common practice on social platforms like Twitter, he said, adding:

“The ability to easily imitate these avatars could catch many off guard, tricking them into thinking they’re interacting with a friend when they’re actually conversing with a fraudster harvesting details to pull off social engineering scams.”

Ostrovsky emphasized the importance of privacy in digital identity.

“Users need to have control over their own data and be able to decide who has access to it,” he said.

This means digital ID systems need to be designed with privacy in mind, and users should be able to revoke access to their data at any time.

Privacy threat

Put simply, web3 is a decentralized internet powered by blockchain and token-based economics. Non-fungible crypto tokens are expected to play a key role in web3 as a medium of exchange.

Experts are concerned about the risk of fraud and a lack of safeguards on the biometric data collected for creating digital identities in web3. Big tech has in the past failed to secure personal information, and there are concerns about private firms collecting biometric data in a crypto industry that largely operates beyond regulatory oversight.

Comparisons have inevitably been drawn with Worldcoin’s ill-fated plan to scan the irises of one billion people in exchange for free money. The project was roundly criticized as a disaster waiting to happen for harvesting people’s biometrics in an unethical way.

Also read: Goldman Sachs Report Warns AI Could Impact 300 Million Jobs

Identity Labs’ Dan Ostrovsky believes that digital identities, as epitomized by his company’s NFID system, will play a crucial role in the development of web3 and largely address such concerns.

As the internet becomes more decentralized and open, these IDs will be used to ensure trust and security in decentralized systems.

Last year, layer two protocol Polygon also launched its Polygon ID, a decentralized and private identity service for web3 and DeFi. Polygon hopes the ID will be a useful tool in determining credit scores in web3 and decentralized finance lending.

AI

Human Immortality Just 8 Years Away, Claims ex-Google Engineer

Here at MetaNews, we’ve aggressively covered the AI beat since the turn of the year, less so the fields of nanotechnology, genetics and robotics. But a former Google engineer claims advancements in these areas will help humans achieve immortality in just eight years.

Futurist Ray Kurzweil’s audacious claims were cited during a YouTube video on the Adagio channel. The computer scientist and inventor is known for coining the term ‘the Singularity,’ which describes the juncture at which artificial intelligence surpasses human thinking.

“Once the Singularity has been reached, Kurzweil says that machine intelligence will be infinitely more powerful than all human intelligence combined,” explains the video’s narrator.

“Afterwards, he predicts intelligence will radiate outward from the planet until it saturates the universe.”

Age-reversing nanobots

The celebrated technologist, who won the National Medal of Technology in 1999 and three years later made his way into the National Inventors Hall of Fame, says immortality will be achieved thanks to advancements in nanotechnology, a field he calls “the second revolution.”

These advancements, Kurzweil argues, will enable so-called age-reversing nanobots to get to work on repairing damaged cells and tissues which deteriorate as a result of the ageing process. By 2031, we’ll have the technology to facilitate everlasting life.

“Kurzweil envisions nanobots that allow people to eat whatever they want while remaining thin and fit, provide copious energy, fight off infections or cancer, replace organs and augment their brains,” notes the narrator.

The inventor has been banging the nanobot drum for many years: two decades ago, in a much-cited blog post, he claimed “interlinking nanobots” would eventually “provide the ability to augment and ultimately replace the skeleton.” 

In the same article, Kurzweil suggested it would one day be “routine to have billions of nanobots (nano-scale robots) coursing through the capillaries of our brains, communicating with each other (over a wireless local area network), as well as with our biological neurons and with the Internet.”

While it’s natural to reflexively scoff at such claims, Google’s former director of engineering has form for making accurate predictions: 86% of his 147 predictions of the future have proven correct. 

Among other notable prophecies, he forecasted in 1990 that the world’s best chess player would lose to a computer by the year 2000. This one came true in 1997 when world champion Garry Kasparov fell to IBM supercomputer Deep Blue.

Flash-forward 32 years, to 2029, and an AI will be able to pass a Turing test – at least according to Kurzweil. Also known as the imitation game, the Turing test measures a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human being.

Soothsayer extraordinaire or charismatic crank?

Kurzweil is the bright mind behind the first ever flatbed scanner and the first text-to-speech synthesizer, and was once called “Edison’s rightful heir” by Inc. magazine. He’s also undeniably wacky, reportedly consuming 150 dietary supplement pills a day and taking weekly intravenous vitamin injections to boost his longevity.

While the small matter of human immortality will apparently be resolved by the end of this decade, Kurzweil “sets the date for the Singularity, representing a profound and disruptive transformation in human capability, as 2045.”

Kurzweil’s work at Google mostly centered on projects related to machine learning and language processing, two of the technologies behind the company’s AI-powered chatbot Bard. In a 2012 address at Google, Kurzweil tackled the subject ‘How to Create a Mind.’

Earlier this month, Kurzweil appeared wearing a fetching beret at the Abundance360 summit in LA, alongside the likes of Tony Robbins, Stability.AI Founder Emad Mostaque, and Co-Director of Harvard’s Center for Biology of Aging Research, David Sinclair.

The ideas espoused by the 75-year-old have been criticized by many, including maths prodigy turned anarchist terrorist Ted Kaczynski, who took aim at the inventor in his 2016 article ‘The techies’ wet dreams’.

“His whole book [2004’s Fantastic Voyage: Live Long Enough to Live Forever] reveals a man intoxicated with a vision of the future in which, as an immortal machine, he will participate in the conquest of the universe. In fact, Kurzweil and other techies are living in a fantasy world,” Kaczynski wrote.

Interestingly, the computer scientist’s prediction of coming immortality coincides with claims by Dr David McCarthy, who led a recent University of Georgia lifespan study, that men born in 1970 could potentially reach the age of 141, while women born in the same year could reach 131.

What’s your view on Kurzweil’s claims? Is human immortality within reach, many generations away, or an impossibility?

AI

Goldman Sachs Report Warns AI Could Impact 300 Million Jobs

Recent advancements in AI have made a big impression on the world, with many marvelling at the extraordinary capabilities of machines to perform complex tasks with accuracy and speed.

As AI continues to evolve, however, it poses a potential threat to many jobs in various industries, raising concerns about its impact on the global workforce. A new report by Goldman Sachs predicts that generative AI may affect up to 300 million jobs.

“If generative AI delivers on its promised capabilities, the labor market could face significant disruption,” stated the report.

There is still much uncertainty about generative AI’s potential, but its capacity to produce human-like content and help machines communicate with people could, some argue, have a significant negative impact on the economy.

“Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work,” says the report.

Goldman Sachs’ estimates “suggest that generative AI could expose the equivalent of 300m full-time jobs to automation.”

Read Also: Google Launches ‘Bard’ in UK and US for Public Testing

While AI is expected to have a considerable impact on the labor market, most jobs and industries will only be partially affected by automation and are therefore more likely to be complemented rather than replaced by AI.

White collar jobs are vulnerable

The bank’s analysts estimate that around 7% of US jobs could be substituted by AI, 63% could be complemented by it, and the remaining 30% would remain unaffected.

“The boost to global labor productivity could also be economically significant, and we estimate that AI could eventually increase annual global GDP by 7%,” the report states.

Big tech companies like Microsoft, Alibaba, Baidu, and Google are investing heavily in AI. Perhaps unsurprisingly, white collar jobs are expected to be affected the most by the technology.

Goldman predicts that 25% of all tasks performed in the US and Europe could be automated by AI, based on a study that found 60% of the workforce operates in jobs that didn’t exist in 1940.

Office and administrative support positions (46%), legal posts (44%), and architecture and engineering jobs (37%) are the most vulnerable in the US to so-called AI “task replacement.”

UK seeks responsible use of AI

Meanwhile in the UK, the Sunak government has set out plans to regulate artificial intelligence with new guidelines on “responsible use,” reports the BBC.

The government says AI contributed £3.7bn ($5.6bn) to the UK economy in 2022, calling it a “technology of tomorrow.” However, concerns have been raised about the potential risks AI could pose to employment and security, as well as the possibility it could be used for malicious purposes.

There is also concern that AI may display biases against certain groups if trained on datasets that include racist or sexist material, while the technology could be used to create and spread misinformation.

Read Also: AI Poses a Threat to Democracy, Experts Warn

Proposed new rules would apply to general-purpose AI, which includes systems such as chatbots capable of understanding and responding to human-like questions.

Issued by the Department for Science, Innovation and Technology, the latest whitepaper seeks to address concerns over privacy, human rights, and safety issues associated with AI.

Copyright © 1997 – 2023 MetaNews All Rights Reserved
