
AI January 17, 2023

ChatGPT Cheating Panic Is Unwarranted, Argues University Professor




The threat of students using ChatGPT to write essays and cheat on coursework assignments is overblown, according to level-headed educators.

Danny Oppenheimer, professor of psychology and decision sciences at Carnegie Mellon University, is among the academics who believe that the panic surrounding chatbots such as ChatGPT, Claude, and YouChat is not entirely warranted.

According to Oppenheimer, the concerns of other academics “are neglecting a key fact: we’ve never been able to ensure academic integrity.”

Hysteria on ChatGPT

Since the launch of OpenAI’s ChatGPT, hysteria has been mounting over the potential impact of ‘AI plagiarism’ and the ability of schools, colleges, and universities to deal with the threat.

The prevailing narrative is one of fear, as educators grapple with the reality of AI-generated content. The threat even prompted one Stanford University student to create GPTZero, an AI designed to detect the handiwork of other chatbots.

Although such tools may prove useful in the future, questions remain about their efficacy and reliability today. Educators cannot currently rely on AI to detect AI.

Writing in Times Higher Education, Professor Oppenheimer explained why AI intervention isn’t the existential threat it may at first appear to be.

As Oppenheimer said on Tuesday, “students could always hire others to take remote exams for them. Fraternities and sororities in the US have exam banks and answer keys for previous years’ exams stretching back decades, allowing for easy cheating on tests set by professors who reuse test questions or use assessment materials from textbook companies. Software that prevents computers accessing the web while students are taking an exam can easily be thwarted with a second computer, tablet or phone.” 

As Oppenheimer sees it, chatbots do make cheating easier, but they don’t significantly change the academic landscape. The problem chatbots pose is nothing new.

Mitigating the risks

A body of research indicates that the best way to reduce cheating is to reduce the motivational factors that lead to it. Oppenheimer cites a study by Donald McCabe which found that the most important factor determining whether cheating occurred was students’ perception of whether other students were cheating.

Follow-up investigations demonstrated that properly conveying the importance of academic integrity helped to curb dishonesty in the educational process.

“The best ways of thwarting cheating have never been focused on policing and enforcement; they have been about integrity training, creating a healthy campus culture and reducing incentives to cheat,” adds Oppenheimer.

“There is no need to panic about ChatGPT; instead we can use this as an opportunity to modernise our thinking about academic integrity and ensure we’re using best practices in combating dishonesty in the classroom.”

Schools in New York City have taken a less high-minded approach by blocking access to the software entirely, but as Oppenheimer points out, a second computer or phone can circumvent such bans.

The dangers of a knee-jerk response

Academic concerns about ChatGPT may have unintended negative consequences in the longer term.

To curb the threat of AI usage, the computer science department at University College London altered its assessment model. Where students previously had the option of an essay-based or skills-based assessment, the essay option no longer exists.

According to Nancy Gleason, director of the Hilary Ballon Center for Teaching and Learning at NYU Abu Dhabi, this sort of change is not always helpful.

“There is a risk that efforts to design more inclusive, flexible authentic assessments could be rolled back as part of knee-jerk administrative responses to the use of this software by students,” said Gleason in December shortly after ChatGPT launched. “If universities want to stay true to their missions of equity, inclusion and access, then we need to keep and develop these alternative assessments.”

Gleason believes that educators should now seek to incorporate chatbots into the assessment process since this generation of students is far more likely to incorporate AI assistants in their professional careers anyway.

Putting the genie back in the bottle is not an option as far as Gleason is concerned. The goal now is to rethink what the future workplace will look like and to equip students to survive in this brave new chatbot world.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.


Will Decentralized Digital ID Curb AI Threats in Web3 and Metaverse?

As the scale of web3 and the metaverse continues to grow, apps and services may have to contend with an influx of duplicate accounts seeking to steal user identities to defraud and deceive. Experts believe that many of the malicious actors will be AI-based.

But that may already be starting to change. In the last few months, web3 outfit Identity Labs launched NFID, a decentralized identity and login tool that does not require a password, allowing users to verify their identity by linking their phone number to their account.

The identity platform uses zero-knowledge (zk) cryptography, a technology that can prove the validity of data without revealing any other personally identifying information. NFID is built on Dfinity’s Internet Computer blockchain.
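The underlying idea can be illustrated with a classic construction. The sketch below is not NFID’s actual protocol, merely a toy Schnorr identification scheme in Python with deliberately tiny parameters, showing how a prover can convince a verifier it knows a secret without ever transmitting that secret:

```python
import secrets

# Toy Schnorr identification protocol: prove knowledge of a secret x with
# y = g^x mod p, without revealing x. Parameters here are tiny and for
# illustration only -- real systems use much larger groups (e.g. elliptic curves).
p = 2039   # safe prime: p = 2q + 1
q = 1019   # prime order of the subgroup generated by g
g = 4      # generator of the order-q subgroup

def keygen():
    x = secrets.randbelow(q - 1) + 1   # secret key
    y = pow(g, x, p)                   # public key
    return x, y

def prove(x, challenge_fn):
    r = secrets.randbelow(q)           # prover's ephemeral nonce
    t = pow(g, r, p)                   # commitment, sent to the verifier
    c = challenge_fn(t)                # verifier's random non-zero challenge
    s = (r + c * x) % q                # response; reveals nothing about x on its own
    return t, c, s

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), which holds exactly when
    # the prover knew x such that y = g^x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
challenge = lambda t: secrets.randbelow(q - 1) + 1
t, c, s = prove(x, challenge)
print(verify(y, t, c, s))   # True: identity proven, x never transmitted
```

A prover who does not hold the right secret fails the check (for any non-zero challenge), which is the property that lets a system confirm “a person is who they say they are” without collecting the underlying data.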

Digital identity for web3 and metaverse

According to Identity Labs founder Dan Ostrovsky, enabling what he calls unique “proof-of-humanity” may be key to rooting out AI adversaries and opportunists, guarding against the risk of fraud in web3 and the metaverse.

“By leveraging zero-knowledge cryptography, biometrics, and other verification methods to confirm a user’s identity, NFID ensures that a person is who they say they are while safeguarding user privacy,” Ostrovsky told MetaNews.

He described “proof of humanity” as a concept that proves that humans are who they say they are when interacting with applications in the digital realm. The idea is to prevent people, or non-humans as it were, from abusing internet systems through multiple accounts.

Digital identity is the cornerstone of web3 and the metaverse, according to Ostrovsky, as it enables trust and security in decentralized systems. In web3, digital identities will be used to govern interactions between users and the metaverse, as well as financial transactions.

Digital identities can take two forms. The first is a digital version of an official physical ID document, like a passport, stored on a mobile crypto wallet. The other is a credential for accessing online services such as DeFi apps, NFT marketplaces, and other web3 services.

In both cases, digital identities are used to verify the identity of the user and ensure they have the necessary permissions to access certain services or perform certain actions. But the rise of AI poses a significant threat to web3 and metaverse activities.
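The two forms described above can be sketched as simple data structures. This is an illustrative model only; the field names are hypothetical and do not reflect NFID’s or any real wallet’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class DocumentID:
    """Digital version of an official physical ID (e.g. a passport)
    stored in a mobile crypto wallet."""
    holder: str
    issuer: str
    document_hash: str  # a hash of the document, not the document itself

@dataclass
class ServiceCredential:
    """Credential for accessing online services such as DeFi apps
    and NFT marketplaces."""
    holder: str
    service: str
    permissions: set = field(default_factory=set)
    revoked: bool = False  # the holder can revoke access at any time

    def authorize(self, action: str) -> bool:
        # In both forms, the credential verifies identity and checks
        # that the holder has the necessary permissions.
        return not self.revoked and action in self.permissions

cred = ServiceCredential("alice", "nft-market", {"list", "bid"})
print(cred.authorize("bid"))   # True
cred.revoked = True
print(cred.authorize("bid"))   # False: access withdrawn by the holder
```

The `revoked` flag mirrors the privacy requirement raised later in the article: users should be able to withdraw access to their data at any time.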

AI security risks

As AI becomes more advanced, it will become increasingly difficult to distinguish between real and fake identities, according to experts. AI has the potential to undermine the security and privacy of digital identities. As one example, it can be used to create deepfakes, realistic but fake images or videos used to impersonate someone else, including their voice.

Deepfakes can be deployed to create false digital identities, something cybercriminals could leverage to commit fraud or other malicious activities. AI can also be utilized to analyze large amounts of data to identify patterns and vulnerabilities in digital ID systems, which can be exploited by hackers.

To combat this threat, Ostrovsky suggests developing new technologies that can detect and prevent the use of fake identities. This could include the use of biometric data, such as facial recognition or fingerprint scanning, to verify the identity of users.

“The ubiquity of digital avatars in the coming metaverse will likely result in an uptick in fraud and phishing attacks,” he told MetaNews.

This may already be common practice on social platforms like Twitter, he said, adding:

“The ability to easily imitate these avatars could catch many off guard, tricking them into thinking they’re interacting with a friend when they’re actually conversing with a fraudster harvesting details to pull off social engineering scams.”

Ostrovsky emphasized the importance of privacy in digital identity.

“Users need to have control over their own data and be able to decide who has access to it,” he said.

This means that digital ID systems need to be designed with privacy in mind, and users should have the ability to revoke access to their data at any time.

Privacy threat

Put simply, web3 is a decentralized internet powered by blockchain and token-based economics. Non-fungible crypto tokens are expected to play a key role in web3 as a medium of exchange.

Experts are concerned about the risk of fraud and a lack of safeguards on the biometric data collected for creating digital identities in web3. Big tech has in the past failed to secure personal information, and there are concerns about private firms collecting biometric data in a crypto industry that largely operates beyond regulatory oversight.

Comparisons have inevitably been drawn with Worldcoin’s ill-fated plan to scan the irises of one billion people in exchange for free money. The project was roundly criticized as a disaster waiting to happen for harvesting people’s biometrics in an unethical way.

Also read: Goldman Sachs Report Warns AI Could Impact 300 Million Jobs

Identity Labs’ Dan Ostrovsky believes that digital identities, as epitomized by his company’s NFID system, will play a crucial role in the development of web3 and largely address such concerns.

As the internet becomes more decentralized and open, these IDs will be used to ensure trust and security in decentralized systems.

Last year, layer two protocol Polygon also launched its Polygon ID, a decentralized and private identity service for web3 and DeFi. Polygon hopes the ID will be a useful tool in determining credit scores in web3 and decentralized finance lending.



Human Immortality Just 8 Years Away, Claims ex-Google Engineer

Here at MetaNews, we’ve aggressively covered the AI beat since the turn of the year, less so the fields of nanotechnology, genetics and robotics. But a former Google engineer claims advancements in these areas will help humans achieve immortality in just eight years.

Futurist Ray Kurzweil’s audacious claims were cited in a YouTube video on the Adagio channel. The computer scientist and inventor is known for popularizing the term ‘the Singularity,’ which describes the juncture at which artificial intelligence surpasses human thinking.

“Once the Singularity has been reached, Kurzweil says that machine intelligence will be infinitely more powerful than all human intelligence combined,” explains the video’s narrator.

“Afterwards, he predicts intelligence will radiate outward from the planet until it saturates the universe.”

Age-reversing nanobots

The celebrated technologist, who won the National Medal of Technology in 1999 and three years later made his way into the National Inventors Hall of Fame, says immortality will be achieved thanks to advancements in nanotechnology, a field he calls “the second revolution.”

These advancements, Kurzweil argues, will enable so-called age-reversing nanobots to get to work on repairing damaged cells and tissues which deteriorate as a result of the ageing process. By 2031, we’ll have the technology to facilitate everlasting life.

“Kurzweil envisions nanobots that allow people to eat whatever they want while remaining thin and fit, provide copious energy, fight off infections or cancer, replace organs and augment their brains,” notes the narrator.

The inventor has been banging the nanobot drum for many years: two decades ago, in a much-cited blog post, he claimed “interlinking nanobots” would eventually “provide the ability to augment and ultimately replace the skeleton.” 

In the same article, Kurzweil suggested it would one day be “routine to have billions of nanobots (nano-scale robots) coursing through the capillaries of our brains, communicating with each other (over a wireless local area network), as well as with our biological neurons and with the Internet.”

While it’s natural to reflexively scoff at such claims, Google’s former director of engineering has form for making accurate predictions: 86% of his 147 predictions of the future have proven correct. 

Among other notable prophecies, he forecasted in 1990 that the world’s best chess player would lose to a computer by the year 2000. This one came true in 1997 when world champion Garry Kasparov fell to IBM supercomputer Deep Blue.

Flash-forward 32 years, to 2029, and an AI will be able to pass a Turing test – at least according to Kurzweil. Also known as the imitation game, the Turing test measures a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human being.

Soothsayer extraordinaire or charismatic crank?

Kurzweil is the bright mind behind the first-ever flatbed scanner and the first text-to-speech synthesizer, and was once called “Edison’s rightful heir” by Inc. magazine. He’s also undeniably wacky, reportedly consuming 150 dietary supplement pills a day and taking weekly intravenous vitamin injections to boost his longevity.

While the small matter of human immortality will apparently be resolved by the end of this decade, Kurzweil “sets the date for the Singularity, representing a profound and disruptive transformation in human capability, as 2045.”

Kurzweil’s work at Google mostly centered on projects related to machine learning and language processing, two of the technologies behind the company’s AI-powered chatbot Bard. In a 2012 address at Google, Kurzweil tackled the subject ‘How to Create a Mind.’

Earlier this month, Kurzweil appeared wearing a fetching beret at the Abundance360 summit in LA, alongside the likes of Tony Robbins, Stability.AI Founder Emad Mostaque, and Co-Director of Harvard’s Center for Biology of Aging Research, David Sinclair.

The ideas espoused by the 75-year-old have been criticized by many, including maths prodigy turned anarchist terrorist Ted Kaczynski, who took aim at the inventor in his 2016 article ‘The techies’ wet dreams’.

“His whole book [2004’s Fantastic Voyage: Live Long Enough to Live Forever] reveals a man intoxicated with a vision of the future in which, as an immortal machine, he will participate in the conquest of the universe. In fact, Kurzweil and other techies are living in a fantasy world,” Kaczynski wrote.

Interestingly, the computer scientist’s prediction of coming immortality coincides with claims by Dr David McCarthy, who led a recent University of Georgia lifespan study, that men born in 1970 could potentially reach the age of 141, while women born in the same year could reach 131.

What’s your view on Kurzweil’s claims? Is human immortality within reach, many generations away, or an impossibility?



Goldman Sachs Report Warns AI Could Impact 300 Million Jobs

Recent advancements in AI have made a big impression on the world, with many marvelling at the extraordinary capabilities of machines to perform complex tasks with accuracy and speed.

As AI continues to evolve, however, it poses a potential threat to many jobs in various industries, raising concerns about its impact on the global workforce. A new report by Goldman Sachs predicts that generative AI may affect up to 300 million jobs.

“If generative AI delivers on its promised capabilities, the labor market could face significant disruption,” stated the report.

There is much uncertainty about generative AI’s potential, but its capacity to produce human-like content and to help machines communicate with people could, some argue, have a significant negative impact on the economy.

“Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work,” says the report.

Goldman Sachs’ estimates “suggest that generative AI could expose the equivalent of 300m full-time jobs to automation.”

Read Also: Google Launches ‘Bard’ in UK and US for Public Testing

While AI is expected to have a considerable impact on the labor market, most jobs and industries will only be partially affected by automation and are therefore more likely to be complemented rather than replaced by AI.

White collar jobs are vulnerable

The bank’s analysts estimate that around 7% of US jobs could be substituted by AI, 63% could be complemented by it, and the remaining 30% would be unaffected.

“The boost to global labor productivity could also be economically significant, and we estimate that AI could eventually increase annual global GDP by 7%,” the report states.

Big tech companies like Microsoft, Alibaba, Baidu, and Google are investing heavily in AI. Perhaps unsurprisingly, white collar jobs are expected to be affected the most by the technology.

Goldman predicts that 25% of all tasks performed in the US and Europe could be automated by AI, based on a study that found 60% of the workforce operates in jobs that didn’t exist in 1940.

Office and administrative support positions (46%), legal posts (44%), and architecture and engineering jobs (37%) are the most vulnerable in the US to so-called AI “task replacement.”

UK seeks responsible use of AI

Meanwhile in the UK, the Sunak government has set out plans to regulate artificial intelligence with new guidelines on “responsible use,” reports the BBC.

The government says AI contributed £3.7bn ($5.6bn) to the UK economy in 2022, calling it a “technology of tomorrow.” However, concerns have been raised about the potential risks AI could pose to employment and security, as well as the possibility that it could be used for malicious purposes.

There is also concern that AI may display biases against certain groups if trained on datasets that include racist or sexist material, while the technology could be used to create and spread misinformation.

Read Also: AI Poses a Threat to Democracy, Experts Warn

Proposed new rules would apply to general-purpose AI, which includes systems such as chatbots capable of understanding and responding to human-like questions.

Issued by the Department for Science, Innovation and Technology, the latest whitepaper seeks to address concerns over privacy, human rights, and safety issues associated with AI.



Copyright © 1997 – 2023 MetaNews All Rights Reserved
