February 22, 2023

UK Online Safety Bill to Regulate AI Chatbots


The UK is planning to subject artificially intelligent (AI) chatbots to regulation under the Online Safety Bill currently going through parliament, reports The Telegraph.

Proposed new internet legislation will regulate search results generated by AI chatbots and the content they post to social media. The regulation is needed to prevent tech firms from exposing users, particularly children, to harmful content, says Lord Parkinson, a junior minister in the Department for Culture, Media, and Sport.

“The Online Safety Bill has been designed to be technology-neutral to future-proof it and ensure the legislation keeps pace with emerging technologies,” stated the minister.

By bringing bots within the scope of the legislation, the regulator is keen to ensure there is a proper framework for operating the new technology. The bill provides for penalties against developer companies whose services promote self-harm or eating disorder content to children.

ChatGPT’s successful launch in November has seen the tech industry embrace the era of AI chatbots, with industry leaders integrating the tech into their products. Microsoft has incorporated ChatGPT into its search engine Bing, while Google has announced a similar product called Bard.

Two Chinese companies, Baidu and Alibaba, are also reportedly developing their own AI chatbot projects.

However, the output of such bots has raised concerns among authorities, particularly over their tendency to generate incorrect answers and display political bias.

“Content generated by AI ‘bots’ is in scope of the Bill where it interacts with user-generated content, such as on Twitter,” said Parkinson in answer to a parliamentary question from Labour peer Lord Stevenson.

AI-powered chatbots like ChatGPT can generate responses that sound natural and even write code for complex programs.

‘Regulation will be critical’

Addressing the demand for regulation, European lawmakers are expected to approve draft AI rules in March, as previously reported by MetaNews. Sam Altman, the CEO of ChatGPT’s creator OpenAI, has also called for oversight of the technology.

Altman recognizes that the impact of artificial intelligence could “potentially be scary” and that society needs time to adapt to the significant changes it brings. He also acknowledged the largely positive changes AI could bring about in the future, while stressing that regulating it will be “critical.”

“We also need enough time for our institutions to figure out what to do. Regulation will be critical and will take time to figure out; although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” tweeted Altman.

The debate over the regulation of AI is all around us, from social media to parliament, with one Twitter user professing doubt about “how we would usefully regulate AI.”

“Regulation has made a difference with nuclear, but it’s much less clear how we would usefully regulate AI,” tweeted Elissa Shevinsky.

Critics of the bill have claimed it poses a risk to freedom of expression. Mark Johnson, Legal and Policy Officer of civil liberties campaign group Big Brother Watch, said in November that the government’s “revival of plans to give state backing for social media companies’ terms and conditions in the Online Safety Bill is utterly retrograde, brushes aside months of expert scrutiny, and poses a major threat to freedom of speech in the UK.”



Will Decentralized Digital ID Curb AI Threats in Web3 and Metaverse?


As the scale of web3 and the metaverse continues to grow, apps and services may have to contend with an influx of duplicate accounts seeking to steal user identities to defraud and deceive. Experts believe that many of the malicious actors will be AI-based.

But that may already be starting to change. In the last few months, web3 outfit Identity Labs launched NFID, a decentralized identity and login tool that does not require a password, allowing users to verify their identity by linking their phone number to their account.

The identity platform uses zero-knowledge (zk) cryptography, a technique that can prove a claim about data is valid without revealing any other personally identifying information. NFID is built on Dfinity’s Internet Computer blockchain.
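The article does not detail NFID’s underlying protocol, but the general idea of a zero-knowledge proof can be sketched with a classic Schnorr identification round: a prover convinces a verifier that it knows the secret key behind a public value without ever revealing that secret. The toy parameters and variable names below are illustrative assumptions only, not a description of NFID’s actual implementation.

```python
# Toy Schnorr zero-knowledge identification: the prover shows it knows the
# secret x behind the public key y = g^x mod p without revealing x.
# Parameters are deliberately tiny for readability -- NOT secure for real use.
import secrets

p = 2039          # small safe prime: p = 2q + 1
q = 1019          # prime order of the subgroup
g = 4             # generator of the order-q subgroup

# Prover's long-term secret and the public key registered with the service
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

def prove_and_verify() -> bool:
    # 1. Commitment: prover picks a fresh random nonce r and sends t = g^r
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)
    # 2. Challenge: verifier replies with a random c
    c = secrets.randbelow(q)
    # 3. Response: prover sends s = r + c*x (mod q); x itself never leaves the prover
    s = (r + c * x) % q
    # 4. Check: g^s must equal t * y^c (mod p)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

print(prove_and_verify())  # True: identity proven, secret never disclosed
```

At production scale such protocols use far larger parameters and are typically made non-interactive, but the privacy property is the same: the verifier learns that the claim holds and nothing else.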

Digital identity for web3 and metaverse

According to Identity Labs founder Dan Ostrovsky, enabling what he calls unique “proof-of-humanity” may be key to keeping AI adversaries and opportunists out and guarding against the risk of fraud in web3 and the metaverse.

“By leveraging zero-knowledge cryptography, biometrics, and other verification methods to confirm a user’s identity, NFID ensures that a person is who they say they are while safeguarding user privacy,” Ostrovsky told MetaNews.

He described “proof of humanity” as a way of proving that humans are who they say they are when interacting with applications in the digital realm. The idea is to prevent people, or non-humans as it were, from abusing internet systems through multiple accounts.

Digital identity is the cornerstone of web3 and the metaverse, according to Ostrovsky, as it enables trust and security in decentralized systems. In web3, digital identities will be used to govern interactions between users and the metaverse, as well as financial transactions.

Digital identities can take two forms. The first is a digital version of an official physical ID document, like a passport, stored on a mobile crypto wallet. The other is a credential for accessing online services such as DeFi apps, NFT marketplaces, and other web3 services.

In both cases, digital identities are used to verify the identity of the user and ensure they have the necessary permissions to access certain services or perform certain actions. But the rise of AI poses a significant threat to web3 and metaverse activities.

AI security risks

As AI becomes more advanced, it will become increasingly difficult to distinguish between real and fake identities, according to experts. AI has the potential to undermine the security and privacy of digital identities. As one example, it can be used to create deepfakes, realistic but fake images or videos used to impersonate someone else, including their voice.

Deepfakes can be deployed to create false digital identities, something cybercriminals could leverage to commit fraud or other malicious activities. AI can also be utilized to analyze large amounts of data to identify patterns and vulnerabilities in digital ID systems, which can be exploited by hackers.

To combat this threat, Ostrovsky suggests developing new technologies that can detect and prevent the use of fake identities. This could include the use of biometric data, such as facial recognition or fingerprint scanning, to verify the identity of users.

“The ubiquity of digital avatars in the coming metaverse will likely result in an uptick in fraud and phishing attacks,” he told MetaNews.

This may already be a common practice on social platforms like Twitter, he said, adding:

“The ability to easily imitate these avatars could catch many off guard, tricking them into thinking they’re interacting with a friend when they’re actually conversing with a fraudster harvesting details to pull off social engineering scams.”

Ostrovsky emphasized the importance of privacy in digital identity.

“Users need to have control over their own data and be able to decide who has access to it,” he said.

This means that digital ID systems need to be designed with privacy in mind, and that users should be able to revoke access to their data at any time.

Privacy threat

Put simply, web3 is a decentralized internet powered by blockchain and token-based economics. Non-fungible crypto tokens are expected to play a key role in web3 as a medium of exchange.

Experts are concerned about the risk of fraud and a lack of safeguards on the biometric data collected for creating digital identities in web3. Big tech has in the past failed to secure personal information, and there are concerns about private firms collecting biometric data in a crypto industry that largely operates beyond regulatory oversight.

Inevitably, comparisons have been drawn with Worldcoin’s ill-fated plan to scan the irises of one billion people in exchange for free money. The project was roundly criticized as a disaster waiting to happen for harvesting people’s biometrics in an unethical way.

Also read: Goldman Sachs Report Warns AI Could Impact 300 Million Jobs

Identity Labs’ Dan Ostrovsky believes that digital identities, as epitomized by his company’s NFID system, will play a crucial role in the development of web3 and largely address such concerns.

As the internet becomes more decentralized and open, these IDs will be used to ensure trust and security in decentralized systems.

Last year, layer two protocol Polygon also launched its Polygon ID, a decentralized and private identity service for web3 and DeFi. Polygon hopes the ID will be a useful tool in determining credit scores in web3 and decentralized finance lending.



Human Immortality Just 8 Years Away, Claims ex-Google Engineer


Here at MetaNews, we’ve aggressively covered the AI beat since the turn of the year, less so the fields of nanotechnology, genetics and robotics. But a former Google engineer claims advancements in these areas will help humans achieve immortality in just eight years.

Futurist Ray Kurzweil’s audacious claims were cited in a YouTube video on the Adagio channel. The computer scientist and inventor is known for popularizing the term ‘the Singularity,’ which describes the juncture at which artificial intelligence surpasses human thinking.

“Once the Singularity has been reached, Kurzweil says that machine intelligence will be infinitely more powerful than all human intelligence combined,” explains the video’s narrator.

“Afterwards, he predicts intelligence will radiate outward from the planet until it saturates the universe.”

Age-reversing nanobots

The celebrated technologist, who won the National Medal of Technology in 1999 and three years later made his way into the National Inventors Hall of Fame, says immortality will be achieved thanks to advancements in nanotechnology, a field he calls “the second revolution.”

These advancements, Kurzweil argues, will enable so-called age-reversing nanobots to get to work on repairing damaged cells and tissues which deteriorate as a result of the ageing process. By 2031, we’ll have the technology to facilitate everlasting life.

“Kurzweil envisions nanobots that allow people to eat whatever they want while remaining thin and fit, provide copious energy, fight off infections or cancer, replace organs and augment their brains,” notes the narrator.

The inventor has been banging the nanobot drum for many years: two decades ago, in a much-cited blog post, he claimed “interlinking nanobots” would eventually “provide the ability to augment and ultimately replace the skeleton.” 

In the same article, Kurzweil suggested it would one day be “routine to have billions of nanobots (nano-scale robots) coursing through the capillaries of our brains, communicating with each other (over a wireless local area network), as well as with our biological neurons and with the Internet.”

While it’s natural to reflexively scoff at such claims, Google’s former director of engineering has form for making accurate predictions: 86% of his 147 predictions of the future have proven correct. 

Among other notable prophecies, he forecasted in 1990 that the world’s best chess player would lose to a computer by the year 2000. This one came true in 1997 when world champion Garry Kasparov fell to IBM supercomputer Deep Blue.

Flash-forward 32 years, to 2029, and an AI will be able to pass a Turing test – at least according to Kurzweil. Also known as the imitation game, the Turing test assesses a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human being.

Soothsayer extraordinaire or charismatic crank?

Kurzweil is the bright mind behind the first ever flatbed scanner, not to mention the first text-to-speech synthesizer, and was once called “Edison’s rightful heir” by Inc. magazine. He’s also undeniably wacky, reportedly consuming 150 dietary supplement pills a day and taking weekly intravenous vitamin injections to boost his longevity.

While the small matter of human immortality will apparently be resolved by the end of this decade, Kurzweil “sets the date for the Singularity, representing a profound and disruptive transformation in human capability, as 2045.”

Kurzweil’s work at Google mostly centered on projects related to machine learning and language processing, two of the technologies behind the company’s AI-powered chatbot Bard. In a 2012 address at Google, Kurzweil tackled the subject ‘How to Create a Mind.’

Earlier this month, Kurzweil appeared wearing a fetching beret at the Abundance360 summit in LA, alongside the likes of Tony Robbins, Stability.AI Founder Emad Mostaque, and Co-Director of Harvard’s Center for Biology of Aging Research, David Sinclair.

The ideas espoused by the 75-year-old have been criticized by many, including maths prodigy turned anarchist terrorist Ted Kaczynski, who took aim at the inventor in his 2016 article ‘The techies’ wet dreams’.

“His whole book [2004’s Fantastic Voyage: Live Long Enough to Live Forever] reveals a man intoxicated with a vision of the future in which, as an immortal machine, he will participate in the conquest of the universe. In fact, Kurzweil and other techies are living in a fantasy world,” Kaczynski wrote.

Interestingly, the computer scientist’s prediction of coming immortality coincides with claims by Dr David McCarthy, who led a recent University of Georgia lifespan study, that men born in 1970 could potentially reach the age of 141, while women born in the same year could reach 131.

What’s your view on Kurzweil’s claims? Is human immortality within reach, many generations away, or an impossibility?



Goldman Sachs Report Warns AI Could Impact 300 Million Jobs


Recent advancements in AI have made a big impression on the world, with many marvelling at the extraordinary capabilities of machines to perform complex tasks with accuracy and speed.

As AI continues to evolve, however, it poses a potential threat to many jobs in various industries, raising concerns about its impact on the global workforce. A new report by Goldman Sachs predicts that generative AI may affect up to 300 million jobs.

“If generative AI delivers on its promised capabilities, the labor market could face significant disruption,” stated the report.

There is a lot of uncertainty about generative AI’s potential, but its ability to produce human-like content and help machines communicate with people could, some argue, significantly disrupt the labor market.

“Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work,” says the report.

Goldman Sachs’ estimates “suggest that generative AI could expose the equivalent of 300m full-time jobs to automation.”

Read Also: Google Launches ‘Bard’ in UK and US for Public Testing

While AI is expected to have a considerable impact on the labor market, most jobs and industries will only be partially affected by automation and are therefore more likely to be complemented rather than replaced by AI.

White-collar jobs are vulnerable

The bank’s analysts estimate that around 7% of US jobs could be substituted by AI, 63% could be complemented by it, and the remaining 30% would remain unaffected.

“The boost to global labor productivity could also be economically significant, and we estimate that AI could eventually increase annual global GDP by 7%,” the report states.

Big tech companies like Microsoft, Alibaba, Baidu, and Google are investing heavily in AI. Perhaps unsurprisingly, white-collar jobs are expected to be affected the most by the technology.

Goldman predicts that 25% of all tasks performed in the US and Europe could be automated by AI. The report also cites research showing that 60% of today’s workforce is employed in occupations that did not exist in 1940, a reminder that new technologies have historically created new kinds of work.

Office and administrative support positions (46%), legal posts (44%), and architecture and engineering jobs (37%) are the most vulnerable in the US to so-called AI “task replacement.”

UK seeks responsible use of AI

Meanwhile in the UK, the Sunak government has set out plans to regulate artificial intelligence with new guidelines on “responsible use,” reports the BBC.

The government says AI contributed £3.7bn ($5.6bn) to the UK economy in 2022, calling it a “technology of tomorrow.” However, concerns have been raised about the potential risks AI could pose to employment and security, as well as the possibility that it could be used for malicious purposes.

There is also concern that AI may display biases against certain groups if trained on datasets that include racist or sexist material, while the technology could be used to create and spread misinformation.

Read Also: AI Poses a Threat to Democracy, Experts Warn

Proposed new rules would apply to general-purpose AI, which includes systems such as chatbots capable of understanding and responding to human-like questions.

Issued by the Department for Science, Innovation and Technology, the latest whitepaper seeks to address concerns over privacy, human rights, and safety issues associated with AI.

