Background
The fraudster is hiding in your pocket: how should the university deal with AI? (part 1)
How should the university deal with AI? In this two-part series, Mare sets out to find the answer. In part one: while policy lags behind, both students and lecturers struggle with the limits of acceptable use. ‘With numbers like these, detecting fraud is simply infeasible.’
Vincent Bongers, Lisa Boshuizen and Sebastiaan van Loosbroek
Monday 20 April 2026
Illustration Pieter Brouwer

I.    LAST RESORT

‘During a lecture, a fellow student gave ChatGPT a rather telling prompt’, says Wisse Versteeg, a master’s student in International Relations. ‘The prompt was: “The professor has just explained something, what should I ask now?” At that point, I thought to myself: what are we actually doing here?’

‘For me, AI is a last resort’, says a first-year student of Science for Sustainable Societies. ‘I always feel rubbish when I use it. But when I’m pressed for time, I get stressed and think: shit, I wish I’d started sooner. It’s very useful, but you have to use it in moderation. When you put it like that, it almost sounds like a kind of drug.’

For Ryan, a bachelor’s student in Data Science and AI, time pressure is also a key factor when it comes to using these apps. ‘The closer the deadline gets, the more likely I am to use AI’, he says. ‘I turn to it when I lack the time to earn a good mark on my own.’

The use of generative large language models such as ChatGPT, Claude, Copilot and Gemini has become common practice among students. Still, there are large differences in how they use these apps to ease their studies. Some struggle with the ethical questions raised by the AI revolution: does it constitute fraud? Am I learning enough during my studies if I use the apps? Have I become (too) dependent on AI?

Meanwhile, some lecturers are combating the technology by introducing very strict anti-AI rules and having students promise not to use chatbots (see text box below), and the university is trying to draw up an overarching policy. However, the University Council recently advised against that plan. The reason: the Executive Board’s vision for AI was not clear.

The faculties are also developing their own guidelines. For instance, the Law School has drawn up a document outlining four scenarios for the use of AI in written assignments that students work on at home. In each scenario, students must account for how they have used AI.  

The Science for Sustainable Societies programme is taught in English, says the first-year student. When writing assignments, she uses ChatGPT to improve her phrasing. ‘It’s difficult to write papers in professional English. That can make you feel insecure.’ In her opinion, using AI in this way is not fraud. ‘I still do the research myself.’

She also uses the app to have concepts clearly explained to her in Dutch and talks to it as if it were a person. ‘I’ll say something like: “I have to write a paper and it needs to include these points. What’s the best way to go about it?”’

II.    CLAUDE AS A HELPFUL CO-READER

‘I don’t use the AI tool Claude to write essays’, says a master’s student in International Relations who often enlists the help of AI in his studies. ‘I draft an outline myself and then run the text through Claude. He’s sort of like a thesis supervisor who reads along.’

‘In secondary school, your dad would check your paper. Does that mean it’s no longer your own work?’

If the app suggests a better wording, the temptation to copy it is strong. ‘I try to rewrite it myself, but the sentence Claude suggests still often serves as the basis. It’s hard to break away from that.’

Ethically speaking, he does not think he is crossing the line. ‘In secondary school, your dad would check your profile paper. Does that mean it’s no longer your own work? I feel like it’s still my own work.’

That said, he does understand the criticism of AI use. ‘If you haven’t written something entirely yourself, you could also see it as a form of plagiarism.’ But helpful tools are nothing new, he adds. ‘In the past, you used to buy course summaries; now Claude writes them for you.’

Claude is also useful for drafting mock exams. ‘If you ask: “How do I turn answers that get you a six into answers that get you an eight?”, you get some pretty useful advice.’

III.    FEWER SKILLS, LAZIER READING

Nevertheless, he also believes it is important not to leave too much of the work to the apps. ‘I want to learn from my studies. Perhaps you can get away with handing in an AI-generated essay, but then you’re missing out on essential knowledge.’

He acknowledges that he’s not practising certain things enough at the moment. ‘I can hardly imagine still doing tedious tasks such as compiling bibliographies myself. You also become somewhat of a lazy reader. I can still identify the main points in texts myself, but that skill isn’t improving.’

Versteeg refuses to use AI, but sees that the vast majority of his fellow students do. In tutorials, he notices that many of them are not well prepared. ‘That was already the case before the arrival of ChatGPT, but the app makes it easier to get away with it. It’s a shame; I’d much rather hear what you think than what your chatbot tells you.’

Some fellow students feel ashamed of using AI, whilst others are very enthusiastic, he says. ‘I find it difficult to judge, but we need clearer agreements on what is and isn’t allowed.’

 

Illustration Pieter Brouwer

‘I believe it’s my own responsibility whether I use AI or not’, says the Science for Sustainable Societies student. ‘Some lecturers engage in discussions with students about AI, and the course syllabuses also state whether and how you’re allowed to use it.’

At the same time, she struggles with its effects. ‘It’s been implemented far too quickly and without sufficient thought. I tutor primary school children and even they sometimes say: “That’s what ChatGPT is for.” Then my response is: “No, it’s really bad for the environment and it’s very important that you think for yourself.”’

IV.    LOTS OF UNDETECTED MISUSE

The students who have used AI for their papers say they have never been called out by lecturers for doing so. ‘I’ve never been caught’, says student Ryan. ‘The aim used to be to try to arrive at the best solution; now AI itself can make sure the result doesn’t look like it was generated by AI. When I use it, I’ve already written code or a text myself and then I have the app build on that. That’s why they never find out.’

And yet, examination boards are seeing a shift. ‘Many cases relate to the unfair use of AI’, says Alexandre Afonso, chair of the Public Administration Board of Examiners. In 2022–2023, there were only twelve cases of fraud, ranging from plagiarism to the unauthorised use of AI. In 2024–2025, there were more than sixty.

‘This academic year, the count is already at forty. Twelve of these are reports of plagiarism; the other cases mainly involve the unauthorised use of AI tools.’

Several examination boards within the Faculty of Humanities are also seeing a ‘clear rise in the number of reported cases of fraud involving AI’, according to the official secretaries in a joint e-mail. More sanctions are also being imposed when unauthorised use of AI is established.

Although the number of reports has not increased everywhere, the boards do observe that the nature of the fraud has shifted from traditional plagiarism to situations in which texts appear to be original but contain incorrect or fabricated sources and/or quotations. This makes it much harder to detect tampering.

Moreover, it creates a lot of extra work for lecturers, says Jan Robbe, chair of the Law School’s Board of Examiners. ‘It requires a careful check of source references, or a meeting with the student, to establish that unauthorised use of AI has indeed taken place – both of which are time-consuming.’

‘Rules are lacking. Everyone was just looking at each other and wondering: what should we do?’

At Political Science, only one of the 21 cases in the 2022–2023 academic year involved AI. The rest concerned traditional plagiarism, says Floris Mansvelt Beck, chair of the Board of Examiners. The following year, there were thirteen cases, eight of which could not be proven. Last academic year, there were fifteen reports, nine of which could not be proven. ‘These often include cases involving AI.’

At the Leiden Institute of Advanced Computer Science, most cases of fraud involve the unauthorised use of AI, says Suzan Verberne, chair of the Board of Examiners. However, the total number of cases has not increased. ‘Other forms of fraud have decreased. Students used to copy each other’s work.’ 

Bram Ieven, chair of the International Studies Board of Examiners, also sees no clear increase, but he has noticed that first-year students in particular are tempted to use chatbots without the lecturer’s permission. According to him, misuse of AI can be detected in two ways: the style in which the text is written, and the use of false sources.  

‘There isn’t always conclusive evidence for the former, but strictly speaking, that isn’t necessary to invalidate an essay. A strong suspicion can also be sufficient grounds, although we are cautious about this. Citing false sources can be checked, but that requires much more work from lecturers. If it turns out that a source doesn’t exist, we ask the student to come in for a meeting.’

During such a meeting, the board explains that through the unauthorised use of AI, students do not build the skills they need. ‘I feel that these meetings are effective. Around half admit to unauthorised use. Only a very small group continues to deny it vehemently.’

V.    WHERE ARE THE RULES?

‘A large proportion of AI use goes undetected’, says Afonso. ‘Because we have so many students, it’s simply infeasible for most lecturers to check all sources.’

‘There is a greater risk that many cases remain undetected compared to other forms of fraud’, says Verberne. ‘We recently invalidated a thesis due to excessive AI use. But of course, we don’t know what else that student has used AI for.’

‘It’s making us all less intelligent, when it could also be making us smarter’

Plagiarism used to be checked using the Turnitin programme, where percentages indicated how much of the document matched external sources. ‘That no longer has any value’, says Verberne. ‘It’s actually more suspicious if it shows a very low plagiarism percentage. If that’s the case, it’s likely been generated by AI, because it doesn’t overlap with anything.’

The threshold for committing fraud has become much lower, says Mansvelt Beck. ‘In the past, when a student panicked or had bad intentions, they had to copy text from books or get help from others. That takes far more effort than asking ChatGPT, which is right there in your pocket.’

This is a cause for concern with regard to the quality of education, the chairs agree. ‘We need to better explain to our students why it’s important to learn to write for themselves, and that we want to provide them with useful feedback. Once they understand that, they will be less inclined to turn to AI’, Ieven hopes.

‘They don’t learn if they don’t do it themselves’, says Verberne. ‘It’s making us all less intelligent, when it could also be making us smarter.’ She advocates a change in the learning process in which ‘students programme and write when necessary’ and AI is only used as a tool when it ‘doesn’t replace cognitive tasks’. 

But that requires clear rules and guidelines from the university, all the chairs agree. ‘Those are currently lacking’, says Afonso. ‘For a long time, everyone within the university was just looking at each other and wondering: what should we do about this problem? Who will come up with a solution? And of course, nothing happens that way.’ 

> This is the first part of a two-part series on AI at the university. Next week: how do lecturers view the future of university education?

THIS LECTURER HAS HIS STUDENTS PROMISE NEVER TO USE AI

Assistant professor William Michael Schmidli has his students sign an integrity statement in which they renounce AI. ‘“Then I’ll just find another lecturer”, one of them replied.’ 

Walking into lecture room 1.18 in the Lipsius building at a quarter past nine on Wednesday morning, you get the feeling that the year 2000 has yet to begin. There are no laptops in sight. The master’s students attending the course ‘Arsenal of Democracy? The US and the World since 1945’, taught by American assistant professor William Michael Schmidli, take notes with pen and paper.

Schmidli bans not only screens but also the use of AI. He even has his master’s students sign an integrity statement in which they promise to do all the coursework themselves.

Not everyone is pleased with this. ‘A bachelor’s student recently asked me to supervise her thesis’, the lecturer explains during the lecture. ‘When I told her she wasn’t allowed to use AI, her reaction was: “Then I’ll just find someone else.”’

He recently caught a student using AI. The student in question thought the AI ban was ‘foolish, because everyone uses it anyway’, Schmidli wrote in a piece on Substack.

However, the lecturer remains firmly opposed. ‘Summarising texts, creating an overview of the course material and coming up with ideas are essential for learning independently.’

Student Wouter van der Hoff is ‘generally opposed to AI’, but sees some grey areas. ‘My girlfriend is studying psychology and had to do a lot of coding for her thesis, but she doesn’t have that skill herself. AI, on the other hand, is very good at that. I don’t know how to feel about that.’

‘First, I’d like to say that I haven’t used AI for this course, nor for any others’, responds Billie van Leeuwen. ‘However, now I’m faced with a dilemma. For this course, I’m doing archive research on handwritten letters that are impossible for me to decipher. AI could do that for me.’

That is a tricky one, says Schmidli. ‘AI can certainly be useful. In this case, you’re still doing the real work, but ideally, you would learn to read the handwriting yourself.’ 

Without AI, you do gain more skills, Van Leeuwen admits. ‘But the other day I saw an internship position that required two years of AI experience. That made me worry for a moment: am I going to fall behind?’

‘Perhaps’, fellow student Leon Kleinveld suggests, ‘you should use AI to pretend that you’ve been using AI for years?’  

Schmidli: ‘The standard, especially in the Humanities, should be not to use AI. It’s important that we form a community where that is the norm.’

By Vincent Bongers