Science fiction publisher Clarkesworld has banned hundreds of authors after the magazine was inundated with artificial intelligence (AI) chatbot-generated submissions.
Ever since the release of ChatGPT last year, AI language models have become mainstream, causing a stir within the writing industry. Suddenly, anyone with access to AI models like ChatGPT has the power to become a writer; according to Emily Dreibelbis, Amazon already offers more than 200 books listing ChatGPT as an author or co-author.
Although this has given people the power to become writers overnight, or to improve their skills with AI tools, it might not be what publishers are really looking for. Clarkesworld, a sci-fi short-story magazine, has recently closed submissions due to the surge in AI-generated writing, which overwhelmed its team.
No joy for cheaters
The sci-fi and fantasy magazine's editor, Neil Clarke, wrote in his blog that the number of plagiarized submissions had been rising during the pandemic. That number quickly skyrocketed after ChatGPT was released late last year, as writers looked to make a "quick buck" using AI tools.
According to Clarke, the publisher rejected more than 100 submissions in January, a figure that rose to 500 in February, which led the publication to ban the offending authors from future submissions and to halt new submissions for the time being. Fearing that these kinds of submissions will not go away anytime soon, Clarkesworld will not reopen submissions until it has come up with a solution to counter the problem.
“We will reopen, but have not set a date,” Clarke said on social media.
“The people causing the problem are from outside the [science fiction and fantasy] community. Largely driven in by ‘side hustle’ experts making claims of easy money with ChatGPT. They are driving this and deserve some of the disdain shown to the AI developers.”
ChatGPT's launch has led to a multi-billion-dollar deal with Microsoft, with the promise of revolutionizing the internet search market. Its rise to fame has also sparked worries over cheating in exams and plagiarism.
“Our guidelines already state that we don’t want AI written or assisted works. They don’t care. A checkbox on a form won’t stop them, they just lie,” said Clarke.
Sniffing out a chatbot article
Similar disputes have arisen over image-generation tools like Midjourney, DALL-E, and Stable Diffusion. The Colorado state art fair, for example, awarded first prize in its "emerging digital artists" category to an image made with Midjourney; Jason Allen, who had entered the tool's output into the fair under his own name after writing the prompt, received the prize money.
Earlier this month, OpenAI released a free web-based tool intended to detect whether text was written by a machine or a human.
Although Clarke did not explain in his blog how he sniffs out the cheats, he told Kotaku that it was fairly easy to identify a chatbot-generated article.
“The authors we’ve banned have been very obviously submitting machine-generated text. Those works are formulaic and of poor quality,” he said.
According to Kotaku, however, Clarke suspects there is already a tier above these: submissions not quite so obvious, but still enough to raise suspicion.
“None are ever good enough to warrant spending more time on them,” he explained.
“It’s inevitable that that group will grow over time and become yet another problem.”
The challenge, he said, is also prevalent at other publishers that remain open for submissions. Although such pieces can be "weeded out" thanks to their poor quality, sniffing out the fake submissions remains a costly and time-consuming process.
While ideas for detecting the fake work have been thrown around, Clarke does not think short-term measures, such as imposing regional bans on the parts of the world where most of the fake entries come from, will be the answer.