Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg
Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.
A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.
Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.
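For scale, the rates quoted imply the approximate student population the survey covered. A back-of-envelope sketch in Python, assuming the article's figures as reported (the variable names are illustrative, not from the survey):

```python
# Reported figures: almost 7,000 proven AI-cheating cases in 2023-24,
# at a rate of 5.1 per 1,000 students, up from 1.6 per 1,000 in 2022-23.
cases_2023_24 = 7000   # "almost 7,000 proven cases"
rate_2023_24 = 5.1     # cases per 1,000 students
rate_2022_23 = 1.6     # prior year's rate

# Implied student population covered by the survey
implied_students = cases_2023_24 / rate_2023_24 * 1000
print(f"Implied students surveyed: ~{implied_students:,.0f}")  # roughly 1.37 million

# Year-on-year growth in the recorded rate
growth = rate_2023_24 / rate_2022_23
print(f"Recorded rate grew ~{growth:.1f}x year on year")  # roughly 3.2x
```

Note this only sizes the *recorded* cases; as the experts quoted below point out, undetected cases are not in these numbers.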
The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.
Maybe we need a new way to approach school. I don’t agree with turning education into a competition where the difficulty is curved toward the most competitive, creating a system so difficult that students feel they need to edge each other out any way they can.
Actually caught, or caught with “AI detection” software?
Actually caught. That’s why it’s the tip of the iceberg: all the cases that were not caught.
Surprise motherfuckers. Maybe don’t give grant money to LLM snakeoil fuckers, and maybe don’t allow mass for-profit copyright violations.
“Get back in that bottle you stupid genie!”
we’re doomed
We are indeed. Not looking forward to my old age, where doctors, accountants, and engineers cheated their way into being qualified by using a glorified autocorrect.
Doctors and engineers are probably much harder to cheat, because you would need to apply the knowledge on a hands-on basis, and you would be found out and washed out eventually. I can see it in fields that require a lot of writing; originally people were hired to write their essays, pre-law or whatever, but they always get caught down the line.
We live in a world where this building was signed off on and built, and that was before AI, so multiple incompetent people are getting through engineering.
As for incompetent doctors there is now an agency tasked with catching them.
Three magic words - “Open Note Exam”
Students prep their own notes (usually limited to “X pages”), take them into the exam, and get to use them for answering questions.
Tests application and understanding over recall. If students AI their notes, they will be useless.
Been running my exams as open note for 3 years now - so far so good. Students are happy, I don’t have to worry about cheating, and the university remains permanently angry because they want everything to be coursework so everyone gets an AI A ^_^
If ChatGPT can effectively do the work for you, then is it really necessary to do the work? Nobody is saying to go to the library and find a book instead of letting a search engine do the work for you. Education has to evolve and so does the testing. There are a lot of things GPTs can’t do well. Grade on that.
The “work” that LLMs are doing here is “being educated”.
Like, when a prof says “read this book and write a paper answering these questions”, they aren’t doing that because the world needs another paper written. They are inviting the student to go on a journey, one that is designed to change the person who travels that path.
Education needs to change too. Have students do something hands on.
Hands on, like engage with prior material on the subject and formulate complex ideas based on that…?
Sarcasm aside, asking students to do something in lab often requires them to have gained an understanding of the material so they can do something, an understanding they utterly lack if they use AI to do their work. Although tbf this lack of understanding in-person is really the #1 way we catch students who are using AI.
Class discussion. Live presentations with question and answer. Save papers for supplementing hands on research.
Have you seen the size of these classrooms? It’s not uncommon for lecture halls to seat 200+ students. You’re thinking that each student is going to present? Are they all going to create a presentation for each piece of info they learn? 200 presentations a day every day? Or are they each going to present one thing? What does a student do during the other 199 presentations? When does the teacher (the expert in the subject) provide any value in this learning experience?
There’s too much to learn to have people only learning by presenting.
Have you seen the cost of tuition? Hire more professors and smaller classes.
Anyways, undergrad isn’t even that important in the grand scheme of things. Let people cheat and let that show when they apply for entry level jobs or higher education. If they can be successful after cheating in undergrad, then does it even matter?
When you get to grad school and beyond is what really matters. Speaking from a US perspective.
But they can’t do grad school work, they lack undergraduate level skills because they skipped it all.
“Let them cheat”
I mean, yeah, that’s one way to go. You could say “the students who cheat are only cheating themselves” as well. And you’d be half right about that.
I see most often that there are two reasons that we see articles from professors who are waving the warning flags. First is that these students aren’t just cheating themselves. There are only so many spots available for post-grad work or jobs that require a degree. Folks who are actually putting the time into learning the material are being drowned in a sea of folks who have gotten just as far without doing so.
And the second reason, I think, is more important. Many of these professors have dedicated their lives to teaching their subject to the next generation. They want to help others learn. That is being compromised by a massively disruptive technology. The article linked here provides evidence of that, and it deserves more than just a casual “teach better! the tech isn’t going away”.
Hire more? A lot of universities are quite stingy, as they don’t want to have too many tenured positions; they are in fact trying to reduce that trend. Some are also cutting back because of enrollment issues in some areas.
The output is often really good, even for STEM questions about niche topics.
Not always. I teach a module where my lectures are fully coursework assessed and my god, a lot of the submissions are clearly AI. It’s super hard to prove though, so I just mark them the same as any other, but half-hallucinated school-grade garbage scores pretty damn low.
(edit: this is because we are trained on how to write questions AI struggles with. It makes writing exams harder, but it is possible. AI is terrible at chemistry. My personal favourite being when Google AI told me the melting point of pyrrole was about -2000C, so colder than absolute zero)
Of course it is only a tool, the same way an untrained person cannot operate an excavator without causing lots of damage. I just wanted to say how impressed I often am at how good the responses are.
Not from the UK and also not a student, but imo this is more a school problem than a student problem. The teachers just do not understand how to cope with AI. With open note exams and traditional exam-style questions, I would be an idiot to use AI.
Professors were already bordering on using AI themselves, when before they just used software to scan your essay for any cheating it might detect.
Oh man the BBC is surely already preparing for Adolescence: rise of the robots
If using ChatGPT for tests is cheating, I’d argue calculators are cheating for math… it’s just another tool at people’s disposal as far as I’m concerned.
A calculator isn’t a computer where you can search up the answers, lol. It’s literally plug in a formula and numbers and it spits out whatever you input; it doesn’t give you the answer to a question. Also, many math questions are abstract, so you have to discern the correct formula/mathematics to use.