AI did not disturb assessment – it just made our mistakes visible

If educators don’t understand the learning processes, they also miss the reasons why students cheat, writes Margault Sacré. Here, she offers an approach to motivate and benchmark progress

Margault Sacré
15 May 2024
[Illustration of a man in a rowboat. Image credit: iStock/francescoch]

Created in partnership with the University of Luxembourg


Assessment in higher education has long been compromised, whether by the expansion of student bodies, the overwhelming workload of faculty or a lack of pedagogical training. Our methods fail to demonstrate that students have learned, evolved in their views or are thinking critically; they only show whether students decoded and conformed to our evaluative standards. As lecturers, teachers, professors and educators, we are assessing neither the essential skills students need for their future careers nor their understanding of our disciplines. We are assessing their ability to meet our expectations.

This phenomenon is referred to as the standard assessment paradigm: “a predefined set of items is used to infer claims about students’ proficiency”. This type of assessment provides information about students at a single moment, disregarding their previous knowledge. Without insight into students’ starting points, how can their progress be demonstrated? A score of 8/20 might represent significant learning for a student who began with no knowledge, while a 12/20 could indicate stagnation for another. Following the standard assessment paradigm does not allow us to assess learning as a transformative process.

On top of that, when listening to students during study programme representatives’ meetings, I realised that their mistakes and failures are completely overlooked. Access to their marked exam scripts is severely constrained – limited office hours, unresponsive professors, ignored emails – compounding the challenge of obtaining constructive feedback.

Why do we still function under this paradigm?

In my opinion, it’s because we are teaching without understanding the learning processes. Educators, professors and teachers may lack pedagogical knowledge. I don’t blame them; I blame the whole system for not making teaching a priority and for assuming that a good researcher is a good teacher. In contrast, teachers at other levels of education dedicate years to mastering pedagogy.

In the early stages of examining student strategies for exam success, researchers identified two primary approaches: superficial learning and deep learning. However, they soon recognised a third, strategic approach, in which students decide, according to their teacher’s preferences, which approach to adopt to pass the exam. And this approach may well be cheating – in a broad sense. Cheating extends beyond the traditional act of copying during exams; it includes outsourcing assignments, plagiarising from various sources and fabricating data. With the advent of generative AI, these practices have become harder to detect, slipping past our radars and even our plagiarism detectors.

The main difference between yesterday’s and today’s cheating is that we now have even fewer means of detecting and proving it. In other words, trying to deal with cheating after the act is too late. Instead of trying to detect, we should try to prevent. In my opinion, one direction is to dig into students’ reasons for cheating.

Each month, I meet university teachers and researchers, and I ask them why they think students cheat. Their answers fall into recurring themes: students are perceived as lazy, uninterested, fearful of failure, driven by the pressure of grades, burdened by excessive workloads, struggling to manage their time or disillusioned by the mismatch between course expectations and what students think is useful to learn. These themes resonate with students’ own statements. Surveys (here and here) reveal that students cheat to improve their grades, to cope with excessive course demands or for lack of motivation. At least we are on the same page when it comes to cheating!

Now, let’s link these reasons to self-determination theory, a prevalent framework for understanding the mechanisms of human behaviour, including cheating. It posits that sources of motivation can be intrinsic (driven by inherent interest) or extrinsic (driven by external rewards such as grades), and that motivation is sustained by feelings of autonomy, competence and relatedness. Students’ behaviours are rooted in their motivation; if they lack intrinsic motivation, or are driven purely by extrinsic motivation, this will be reflected in their behaviour. If all that matters to them is obtaining the diploma, their actions may align with the path of least resistance, which can include academic dishonesty.

So maybe one reason students cheat is that they are not driven by intrinsic motivation. A lack of interest clearly reflects the absence of intrinsic motivation for a discipline, while chasing grades shows both the absence of intrinsic motivation and external pressure to do well. Meanwhile, students who feel overwhelmed by work and limited by time may feel little autonomy and competence; they fall behind on subjects and assignments, and late work can accumulate.

Of course, we cannot force our students to appreciate our discipline. But, as educators, we can indirectly influence students’ motivation by offering them opportunities to feel autonomous and competent. A powerful strategy is to make progress visible to students. To that end, you need different entry points: a baseline, progress points and a final assessment. Only then can you assess learning as change. You also need to make the indicators of students’ progress count; students should be held accountable for participating in the intermediate assessments. Otherwise, they will not make these assessments a priority and will ignore them entirely, even when they could gain feedback from them. Several faculty colleagues reported being disappointed when they offered regular, personalised feedback to their students, who in turn failed to engage with the process. Strategies include giving credit for taking part in intermediate assessments or counting only the highest-rated assessments towards the final grade.

To conclude, the assessment paradigm in higher education is flawed, focusing on students’ ability to meet criteria rather than on their learning and critical thinking. This system fails to account for learning journeys and perpetuates cheating – a problem amplified by the rise of generative AI. By fostering autonomy and competence, we can create an environment where students are motivated to learn for the sake of knowledge itself, not just for grades.

Margault Sacré is an e-learning specialist at the University of Luxembourg.

