An academic article titled Chatting and Cheating: Ensuring Academic Integrity in the Age of ChatGPT was published this month in an educational journal, describing how artificial intelligence (AI) tools “raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism”.
What the readers, and indeed the peer reviewers who approved its publication, did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT.
“We wanted to show that ChatGPT is writing at a very high level,” said Professor Debby Cotton, head of academic practice at Plymouth Marjon University, who posed as the paper’s lead author. “This is an arms race,” she said. “The technology is improving very fast and it is going to be difficult for universities to overcome it.”
Cotton, along with two colleagues from Plymouth University who also claimed to be co-authors, alerted the editors of the journal, Innovations in Education and Teaching International. But the four academics who peer-reviewed the paper assumed it had been written by these three academics.
For years, universities have been trying to banish the plague of essay mills, which sell prewritten essays and other academic work to any student trying to cheat the system. But now academics suspect that even the essay mills are using ChatGPT, and institutions admit they are racing to catch up with, and catch out, anyone who passes off the popular chatbot’s work as their own.
The Observer has spoken with several universities that say they plan to expel students who are caught using the software.
Thomas Lancaster, a computer scientist and contract cheating expert at Imperial College London, said many universities were “panicking.”
“If all we have in front of us is a written document, it is incredibly difficult to prove that it was written by a machine, because the standard of writing is usually good,” he said. “The use of English and the quality of the grammar are usually better than that of a student.”
Lancaster cautioned that the latest version of the AI model, GPT-4, which launched last week, was said to be much better and capable of writing in a way that felt “more human.”
However, he said academics can still look for clues that a student used ChatGPT. Perhaps the biggest of these is that it mishandles academic references, a vital part of written university work, often using “suspect” references or making them up entirely.
Cotton said that to ensure their scholarly paper fooled reviewers, references had to be changed and added.
Lancaster thought that ChatGPT, which was created by the San Francisco-based technology company OpenAI, would “probably do a good job with earlier assignments” on a degree course, but cautioned that it would ultimately let students down. “As your course becomes more specialized, it will be much more difficult to outsource the work to a machine,” he said. “I don’t think it could write your entire dissertation.”
The University of Bristol is one of several academic institutions to have issued new guidance for staff on how to detect whether a student has used ChatGPT to cheat. This could lead to expulsion for repeat offenders.
Professor Kate Whittington, associate pro-vice-chancellor at the university, said: “It’s not a case of one offense and you’re out. But we are very clear that we will not accept cheating, because we need to maintain standards.”
She added: “If you cheat your way to a degree, you may get a starting job, but you won’t do well and your career won’t progress the way you want.”
Irene Glendinning, director of academic integrity at Coventry University, said: “We are redoubling our efforts to get the message across to students that if they use these tools to cheat, they can be withdrawn.”
Anyone caught would have to receive training on the proper use of AI. If they continued to cheat, the university would expel them. “My colleagues are already finding cases and dealing with them. We do not know how many we are missing, but we are picking up cases,” she said.
Glendinning urged academics to be vigilant about language that a student would not normally use. “If you can’t hear your student’s voice, that’s a warning,” she said. Another is content with “lots of facts and little criticism.”
She said that students who can’t spot the weaknesses in what the bot produces can slip up. “In my subject, computer science, AI tools can generate code, but it often contains bugs,” she explained. “You can’t debug a computer program unless you understand the basics of programming.”
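Glendinning’s point can be illustrated with a hypothetical example (not taken from her course materials): generated code often looks plausible while hiding a subtle fault, such as an off-by-one error, that only someone who understands the basics will catch.

```python
# Hypothetical illustration: plausible-looking but buggy code of the kind
# a chatbot might generate, next to a corrected version.

def average(numbers):
    # Bug: range(len(numbers) - 1) stops one short, so the last
    # element is never added to the total.
    total = 0
    for i in range(len(numbers) - 1):
        total += numbers[i]
    return total / len(numbers)

def average_fixed(numbers):
    # Corrected version: sum every element before dividing.
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))        # wrong: 2.0
print(average_fixed([2, 4, 6]))  # correct: 4.0
```

The buggy version runs without any error message, which is exactly why a student who cannot read the loop has no way of knowing the answer is wrong.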
With fees of £9,250 a year, students were only fooling themselves, Glendinning said. “They are wasting their money and their time if they don’t use the university to learn.”