The threat of students using ChatGPT to write essays and cheat on coursework is overblown, according to some level-headed educators.
Danny Oppenheimer, professor of psychology and decision sciences at Carnegie Mellon University, is among the academics who believe that the panic surrounding chatbots such as ChatGPT, Claude, and YouChat is not entirely warranted.
According to Oppenheimer, other academics' concerns "are neglecting a key fact: we've never been able to ensure academic integrity."
Hysteria over ChatGPT
Since the launch of OpenAI’s ChatGPT, hysteria has been mounting over the potential impact of ‘AI plagiarism’ and the ability of schools, colleges and universities to deal with the threat.
The prevailing narrative is one of fear, as educators grapple with the reality of AI-generated content. The threat even prompted one Stanford University student to create GPTZero, an AI designed to detect the handiwork of other chatbots.
Although such tools may prove useful in future, questions remain about their efficacy and reliability today. Educators cannot currently rely on AI to detect AI.
Writing in Times Higher Education, Oppenheimer explained why AI intervention isn’t the existential threat it may first appear to be.
As Oppenheimer said on Tuesday, “students could always hire others to take remote exams for them. Fraternities and sororities in the US have exam banks and answer keys for previous years’ exams stretching back decades, allowing for easy cheating on tests set by professors who reuse test questions or use assessment materials from textbook companies. Software that prevents computers accessing the web while students are taking an exam can easily be thwarted with a second computer, tablet or phone.”
As Oppenheimer sees it, chatbots do make cheating easier, but they don’t significantly change the academic landscape. The problem chatbots pose is nothing new.
Mitigating the risks
A body of research indicates that the best way to reduce cheating is to reduce the motivational factors that lead to it. Oppenheimer cites a study by Donald McCabe which found that the most important factor in determining whether cheating occurred was students’ perception of whether other students were cheating.
Follow-up investigations demonstrated that properly conveying the importance of academic integrity helped to curb dishonesty in the educational process.
“The best ways of thwarting cheating have never been focused on policing and enforcement; they have been about integrity training, creating a healthy campus culture and reducing incentives to cheat,” adds Oppenheimer.
“There is no need to panic about ChatGPT; instead we can use this as an opportunity to modernise our thinking about academic integrity and ensure we’re using best practices in combating dishonesty in the classroom.”
Schools in New York City have taken a less high-minded approach, blocking access to the software entirely, but as Oppenheimer points out, a second computer or phone can circumvent such bans.
The dangers of a knee-jerk response
Academic concerns about ChatGPT may have unintended negative consequences in the longer term.
To curb the threat of AI usage, the computer science department at University College London altered its assessment model. Where students previously had the choice between an essay-based and a skills-based assessment, the essay option no longer exists.
According to Nancy Gleason, director of the Hilary Ballon Center for Teaching and Learning at NYU Abu Dhabi, this sort of change is not always helpful.
“There is a risk that efforts to design more inclusive, flexible authentic assessments could be rolled back as part of knee-jerk administrative responses to the use of this software by students,” said Gleason in December shortly after ChatGPT launched. “If universities want to stay true to their missions of equity, inclusion and access, then we need to keep and develop these alternative assessments.”
Gleason believes that educators should now seek to incorporate chatbots into the assessment process, since this generation of students is far more likely to use AI assistants in their professional careers anyway.
Putting the genie back in the bottle is not an option as far as Gleason is concerned. The goal now is to rethink what the future workplace will look like and to equip students to survive in this brave new chatbot world.