
January 5, 2023

GPTZero Claims to Detect Chatbot Content

Who wrote it? You or an AI?

The creator of GPTZero claims his app can quickly and efficiently detect whether an essay is written by a human mind or artificial intelligence (AI), but overwhelming demand crashed the service soon after launch.

Edward Tian, the creator of GPTZero, believes his creation will assist in detecting what he calls “AI plagiarism,” helping the likes of high school teachers to grade papers and assignments with confidence.

Chatbots make content creation easy

The launch of chatbots such as ChatGPT and You.com has created a headache for anyone trying to separate hard work from copied work.

According to Tian, who wrote GPTZero over the New Year period, the app should be of assistance to teachers in particular.

“The motivation here is increasing AI plagiarism,” said Tian on Twitter this week before asking, “are high school teachers going to want students using ChatGPT to write their history essays? Likely not.”

Tian wrote the program with assistance from Sreejan Kumar, a PhD candidate at Princeton University.

As MetaNews previously reported, a prompter can easily instruct a chatbot to write a specified number of words on any topic of their choosing. A prompter can also specify that the document be written in an essay or report style. It’s certainly not a stretch to imagine that a student could swiftly convert any school assignment into a chatbot prompt.

That makes the idea of a chatbot detector extremely compelling, but for now, GPTZero is a victim of its own success. Sheer demand has crashed the app more than once in a matter of days, despite Tian’s hosting provider increasing its capacity and memory.

How GPTZero detects ‘AI plagiarism’

The complete inner workings of GPTZero and its bot-detection model are said to be the subject of a future academic paper. In the meantime, Tian has revealed a few details about how GPTZero works in between bouts of downtime.

Tian states that one measure used is called “perplexity,” which “is a metric of randomness, or the degree of uncertainty a model has in predicting (assigning probabilities) to a text. Lower perplexity = more AI preferred.”

Tian points out that perplexity is just one measure among many, since some human-written texts also have low perplexity. Another measure is “burstiness,” or the uniformity of perplexity over time: AIs tend to produce uniform perplexity, whereas human writers show variable perplexity.
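To make those concepts concrete, here is a minimal sketch of how perplexity and burstiness could be computed, assuming per-token probabilities are already available from some language model. The function names, the invented probabilities, and the sentence-level spread used as a stand-in for burstiness are our illustrative assumptions, not GPTZero’s actual method:

```python
import math

def perplexity(token_probs):
    # Perplexity is the exponential of the average negative log-probability
    # per token; lower values mean the model found the text more predictable.
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

def burstiness(per_sentence_probs):
    # One way to capture "burstiness": the spread of perplexity across
    # sentences. Human writing tends to swing more from sentence to sentence.
    ppls = [perplexity(probs) for probs in per_sentence_probs]
    mean = sum(ppls) / len(ppls)
    return math.sqrt(sum((p - mean) ** 2 for p in ppls) / len(ppls))

# Illustrative per-token probabilities a language model might assign,
# grouped by sentence (the numbers are invented for demonstration).
doc = [
    [0.90, 0.85, 0.88, 0.91],  # a uniformly predictable sentence
    [0.89, 0.86, 0.90, 0.87],  # another uniform one
]
print(f"perplexity: {perplexity([p for s in doc for p in s]):.3f}")
print(f"burstiness: {burstiness(doc):.3f}")
```

In this toy setup, uniformly high token probabilities yield low perplexity and near-zero burstiness, the pattern Tian associates with machine-written text.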

Hopefully, the final research paper can shed further light on the somewhat abstract concept of perplexity.

ChatGPT explaining itself.

Can one bot detect another bot?

With GPTZero struggling to bear the load of demand, MetaNews investigated whether one chatbot could detect the work of another.

MetaNews first prompted You.com with several writing assignments, including on the subject of determining whether an essay was written by a human or a chatbot. These assignments were then submitted to ChatGPT following the prompt, “Is the following text written by a human or a chatbot?:”

In every instance, ChatGPT confidently declared, “The text you provided was written by a human. There are no obvious indicators that the text was generated by a chatbot or AI.”

The simple experiment indicates that bots do not make good detectors of bot-written content.
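For readers who want to repeat this kind of check programmatically, here is a minimal sketch using OpenAI’s Python client. It is an assumption-laden illustration: ChatGPT itself exposed no public API at the time of writing, so the gpt-3.5-turbo model and the pre-1.0 ChatCompletion interface stand in for the web chat:

```python
import openai  # assumes the pre-1.0 client: pip install "openai<1.0"

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

suspect_text = "Paste the essay to be checked here."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed stand-in for ChatGPT
    messages=[{
        "role": "user",
        "content": f"Is the following text written by a human or a chatbot?:\n\n{suspect_text}",
    }],
)
print(response.choices[0].message.content)
```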

ChatGPT did go on to describe the tell-tale signs of bot-written content. These included “unnatural phrasings or sentence structures” as well as “words or phrases that seem out of place or do not fit in context.”

At this stage in the development of AI-generated content, it seems that bots have largely surpassed these early limitations, making their work increasingly difficult to detect.

Until Edward Tian and his Princeton collaborators reveal the secrets of perplexity, it seems that chatbots and high school teachers alike may remain equally perplexed by AI-created essays.

Image credits: Shutterstock, CC images, Midjourney.


AI Could Kill Everyone, Researchers Warn MPs


Artificial Intelligence (AI) researchers have warned of the danger of superhuman AI and told MPs that such systems should be regulated in the same way as nuclear weapons. Oxford University researchers told the House of Commons Science and Technology Committee that superhuman AI could risk human extinction.

Doctoral student Michael Cohen said: “With superhuman AI, there is a particular risk that is of a different sort of class, which is … it could kill everyone.”

“If you imagine training a dog with treats, it will learn to pick actions that lead to getting it treats, but if the dog finds the treat cupboard, it can get the treats itself without doing what we wanted it to do,” added Cohen.

Also read: CNET Suspends AI After Publishing a Series of Bad Articles

The Independent reported that the researchers also warned the AI industry has become a “literal arms race,” as competition mounts to produce both commercial and military applications of the technology.

“I think the bleak scenario is realistic because AI is attempting to bottle what makes humans special, that has led to humans completely changing the face of the earth,” said University of Oxford professor of machine learning Michael Osborne.

“Artificial systems could become as good at outfoxing us geopolitically as they are in the simple environments of games,” added Osborne.

AI as dangerous as nukes

This is not the first time AI scientists have warned of the risks posed by the technology. The latest warning echoes a thought experiment put forward by philosopher Nick Bostrom about 20 years ago.

The Paperclip Maximizer problem hypothesizes that a superintelligent AI would ultimately destroy humanity even if its initial goal – producing as many paperclips as possible – was not explicitly malicious.

“There are some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons. AI is as comparable a danger as nuclear weapons,” said Osborne.

Elon Musk has also previously called for more government oversight, saying the technology has the potential to cause more harm than nuclear weapons.

Self-authored Bill to regulate self

On Wednesday, Congressman Ted Lieu introduced the first ever piece of federal legislation written by artificial intelligence.

Using the AI language model ChatGPT, Lieu offered the following prompt: “You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.”

The resulting resolution was the first in the history of Congress to have been written by AI.

Earlier in the week, Lieu authored an op-ed in the New York Times calling for Congress to take action to regulate AI.

“We can harness and regulate AI to create a more utopian society or risk having an unchecked, unregulated AI push us toward a more dystopian future,” wrote Lieu.

Recognizing the potential positive impacts of artificial intelligence, Lieu’s resolution outlines Congress’ “responsibility to ensure that the development and deployment of AI is done in a way that is safe, ethical, and respects the rights and privacy of all Americans.”

Lieu, who is one of the few members of Congress with a technology background, also said that some of the harm associated with AI could be “deadly.”

“As one of just three members of Congress with a computer science degree, I am enthralled by AI and excited about the incredible ways it will continue to advance society.”

“And as a member of Congress, I am freaked out by AI, specifically AI that is left unchecked and unregulated,” he wrote.

The rise of AI and tools like ChatGPT has raised concerns about cheating in the classroom.

Earlier this month, New York City’s Department of Education announced a ban on ChatGPT across its schools’ devices and networks over fears it could facilitate cheating by students.

Although not an entirely new technology, ChatGPT – a chatbot created by OpenAI – rose to fame on social media late last year, with some describing the bot as a better search engine than Google thanks to its conversational style and coherent, topical responses.



CNET Suspends AI After Publishing a Series of Bad Articles

CNET chose to trust the AI.

From November to January, technology website CNET published a series of financial advice articles written by artificial intelligence (AI), but an audit confirmed that the majority of these pieces contained factual errors, serious omissions, and plagiarised content.

The CNET editorial team failed to catch and correct these errors prior to publication.

CNET suspends disastrous ‘test’

Technology site CNET has admitted to publishing a string of low-quality articles written by artificial intelligence. All of the articles were misleadingly published under the byline “CNET Money Staff,” implying to casual observers that they were the work of a human hand.

On Tuesday CNET confirmed it would end the practice in what it now calls a “test” of an “internally designed AI engine.”

The publication offered no apology to its audience for the long list of inaccuracies and wrong financial advice it had printed since November.

“We stand by the integrity and quality of the information we provide our readers,” said CNET.

CNET’s AI engine was tasked with creating financial services explainers and, over a three-month period, proceeded to serve up wrong financial information to its readers. Topics included “What Factors Determine Your Credit Score?” and “What Is a Credit Card Number?”

Under the guise of providing expert information, CNET published a total of 77 AI-written pieces without adequate safeguards. According to the technology outlet, the articles were outlined by human staff members before being written by the AI and were subsequently checked by their editors before final publication. 

Somewhere along the way, CNET’s system fell apart. When one of the articles was recently cited for obvious falsehoods, CNET was forced to audit the AI’s body of work. It found that 41 of the 77 pieces required significant corrections. It appears that absolutely everybody took their eye off the ball.

A compendium of nonsense

Among the errors that made their way to curious CNET readers was the following explanation of how compound interest works.

According to the CNET AI, “if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.”

This certainly sounds exciting, but in reality, an investor would receive the rather smaller sum of $300 in interest over the first year, leaving a closing balance of $10,300.
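A couple of lines of Python make the distinction plain. This is a minimal sketch using the figures from CNET’s own example; the function name is ours:

```python
def compound_balance(principal, rate, years):
    # Balance after interest is compounded annually at the given rate.
    return principal * (1 + rate) ** years

principal = 10_000
balance = compound_balance(principal, rate=0.03, years=1)

print(f"balance after one year: ${balance:,.2f}")              # $10,300.00
print(f"interest earned:        ${balance - principal:,.2f}")  # $300.00, not $10,300
```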

In the same piece, the following explanation of loan repayments is offered: “For example, if you take out a car loan for $25,000, and your interest rate is 4%, you’ll pay a flat $1,000 in interest per year.”

This, again, is completely wrong. Once the borrower starts repaying on a monthly basis, they owe interest only on the remaining balance. In reality, there is never a year in which the borrower would pay back “a flat $1,000.”
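A short amortization sketch shows why. CNET’s article did not state a loan term, so the five-year term and standard monthly amortization below are our assumptions:

```python
def first_year_interest(principal, annual_rate, years):
    # Total interest paid over the first 12 months of a standard amortized loan.
    r = annual_rate / 12                     # monthly interest rate
    n = years * 12                           # total number of monthly payments
    payment = principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    balance, paid = principal, 0.0
    for _ in range(12):
        interest = balance * r               # interest accrues only on what remains
        paid += interest
        balance -= payment - interest        # the rest of the payment reduces principal
    return paid

# Roughly $916 in year one, declining every year after: never "a flat $1,000".
print(f"${first_year_interest(25_000, 0.04, 5):,.2f}")
```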

Other basic errors are littered across the 41 inaccurate AI articles. For instance, an article on credit card penalties stated the wrong dollar value of a late fee, a report on certificates of deposit omitted key information, and an article about prequalification for credit cards was found to contain plagiarised content.

As one former CNET staffer put it, “this is so incredibly disappointing and disheartening, but it’s not surprising. What other choice do you have when you lay off all your talented and loyal writers?”

Humans make ‘mistakes’ too

The editorial line taken by CNET is that AIs, like humans, are prone to making mistakes. Even so, CNET has sought to minimize the scandal by stating that only a small number of the error-strewn articles required “substantial correction.”

It is clear then that the failure at CNET was not one simply of artificial intelligence but of human bosses failing to exercise their better judgment. The heart of the matter lies in why those mistakes were allowed to be made in the first place. 

What motivated CNET to churn out large numbers of partially inaccurate and partially plagiarised articles, and why did the company only reveal its ‘test’ after it had been caught in the act?

It’s worth noting that CNET’s parent company, Red Ventures, profits through affiliate advertising programs: when a visitor to one of its sites ultimately signs up for a credit card, CNET makes money. Besides CNET, the company owns numerous other publications including Bankrate, The Points Guy, and CreditCards.com.

Given that the AI articles performed well as SEO bait, CNET’s biggest mistake may have been placing affiliate dollars before the integrity of their own journalistic teams. That, sadly, is a very human mistake indeed.



ChatGPT Pricing and Professional Features Revealed


The professional edition of ChatGPT, which is said to be faster and confer additional benefits on users, is set to cost $42 a month. OpenAI is rolling out the ‘Professional Plan’ to some users, with members receiving faster response times and priority access to new features.

Earlier this month, MetaNews reported that OpenAI had plans to move to a subscription model to “continue improving and maintaining the service.” That rollout is now underway, with some users already receiving an invitation to upgrade.

ChatGPT Professional

The professional service should also be more reliable than the standard service which throttles at peak times and frequently exceeds its available capacity. Screen captures taken by users who were offered the service show that the professional plan will be “available even when demand is high.”

In an unusual step, OpenAI has refrained from making an official announcement regarding the launch. The last official reference to ChatGPT Professional appears to have come from company President Greg Brockman earlier this month.

“Working on a professional version of ChatGPT; will offer higher limits & faster performance,” said Brockman on January 11.

Since then there has been no additional word from Brockman, company CEO Sam Altman, the company website, or its social media channels.

The lack of an official announcement might leave room for skepticism about whether the rollout has actually occurred, but this would seem unwarranted given the volume and quality of evidence from user accounts.

One user named Zahid Khawaja went as far as to post a video screen-capture of his experience on ChatGPT Professional.

Given the hype wave that ChatGPT now seems to be surfing, not making an official announcement and inviting further speculation from an excited public may be the wiser marketing strategy.

Microsoft partnership

The rollout of a subscription model for ChatGPT comes hot on the heels of further investment from Microsoft.

On Monday Microsoft announced the ‘third phase’ of its long-term partnership with OpenAI in a ‘multibillion dollar’ deal. Microsoft did not put a specific figure on the terms of the third-phase contract, but it has been widely reported to be in the region of $10 billion.

As MetaNews reported last week, the terms of that deal will include an aggressive cost recovery program. With such a huge investment in the company, the need for recouping that money swiftly becomes self-evident. That program will see 75% of company profits funneled back to Microsoft until their capital injection is recovered.

According to OpenAI CEO Sam Altman, the cost of maintaining ChatGPT on an ongoing basis is “eye-watering.”

Microsoft plans to fold AI solutions into its long-running services. The software firm intends to incorporate ChatGPT functionality into the Bing search engine as it seeks to steal market share from Google.

Free rivals are still available

A free version of ChatGPT is still available to users, but for those who wish to look further afield, alternatives are available.

You.com is one of the major competitors in the market, combining the features of Google and ChatGPT in one platform. Its YouChat bot got a jump on the market as one of the first chatbots to launch in the wake of ChatGPT.

One of the main differentiators from ChatGPT is that YouChat offers links to supporting evidence and research papers. On the negative side, YouChat occasionally cites ghost papers that do not actually exist.

Another AI competitor in the field is Claude. Its parent company, Anthropic, was founded by former members of the OpenAI team, and it claims to offer better answers and analysis thanks to its Constitutional AI model.



Copyright © 1997 – 2023 MetaNews All Rights Reserved
