While AI is developing at a rapid pace and students are all too keen to put it to use, lecturers are urgently searching for new ways to test their students’ knowledge. And that is not an easy task, as became clear last Thursday at the symposium What does AI mean for our education?, organised by the Faculty of Humanities.
During the event, lecturers and researchers discussed the impact of AI on the faculty, and Mare joined the round-table discussion on the influence of AI on assessment.
‘Let’s start on a positive note’, says discussion leader and assistant professor of computational linguistics Matthijs Westera as he opens the discussion. ‘What problems do you run into when it comes to assessment?’
‘One of the biggest problems is that I can’t tell for certain whether a student has used AI for their papers’, one lecturer responds.
‘AI use is very difficult to detect’, another adds. ‘How can we make sure that students don’t use it?’
The question is whether banning AI is still an option now that its use has become deeply ingrained in students’ practices. Westera suggests that AI can help students refine their writing and brainstorm ideas, but cautions that this assistance could undermine students’ learning objectives. ‘It’s as if we want to turn out students who are incapable of coming up with ideas themselves.’
Moreover, some lecturers warn that fraud in papers is a real risk – one that is difficult to crack down on. A member of the Board of Examiners confirms this, saying that examination boards lack the means to adequately respond to students who commit fraud using AI. ‘We do ask students about it if we suspect misuse, but if they deny it, there’s very little we can do.’
To mitigate this, one lecturer has come up with a solution. ‘It’s a trial, but I currently have my students work in groups to write an assignment using chatbots. I then ask them to verify all the information they have gathered. After that, I have an open conversation with them about the results, reducing the risk of fraud.’
Another lecturer argues that there is a lack of proper instruction in research skills, which makes students more likely to resort to AI. ‘Students prefer to use AI as a research source rather than the library archives, which, let’s not forget, contain all the journals that the university pays for them to access. They don’t really know how to search for information there. It’s our duty to point this out to them.’
‘I’ve had students in my tutorials who don’t even know where the library is’, another lecturer adds.
A philosophy lecturer says that rigorous measures have been introduced in her course to prevent fraud, with all the consequences that entails: ‘We’ve banned take-home writing assignments in the first three semesters. But students are unhappy about this and feel that their writing skills are deteriorating. How do we make sure they continue to practise writing sufficiently?’
Creative work does not always have to involve writing, responds a colleague from philosophy. ‘Instead of having students write essays, you can also ask them to make a PowerPoint presentation or a short video. That also reduces the likelihood that they’ll use ChatGPT.’
Another option he proposes is to require students to write a short essay that is not graded but is mandatory in order to be allowed to take the exam. ‘They all do it, and that way you can be sure that they are engaging with the material well ahead of the exam. It works. I suspect that only about four out of three hundred students used ChatGPT.’
‘This is a fundamental shift’, notes a concerned lecturer. ‘Writing skills are very important for our faculty. We must bear in mind that this also has negative sides.’
Another lecturer speaks up for students who consciously choose not to use AI. ‘We may be under the impression that all students are using AI to cheat the system, but that’s not the case. Some students take offence at being thought of in that way.’
Besides, many of the current problems existed even before the rise of AI, one lecturer points out. For example, students are now making widespread use of AI to prepare for exams, by having bots write summaries of the material, and lecturers fear that this means students will only study the summary instead of the full course material. But these summaries already existed before, in the form of printed extracts from companies such as Joho.
‘We’re not having enough discussions about how to deal with this in the curriculum’, concludes one lecturer. ‘That takes time. The problem is that we’re not given that time.’