What have we learnt about cheating?
RM’s dedicated research and development team, RM Studio, specialises in user-focused testing and learning. To help inform the development of our Exam Malpractice Service in the high-stakes digital assessment space, we reached out directly to candidates to try to understand their reasons for committing malpractice. While we have also interviewed specialists working in awarding organisations to understand malpractice from their perspective, the insights into user behaviour are just as informative, if not more so.
Coursework assignments are a key method of formative assessment in professional qualifications, general qualifications and higher education. This type of assessment faces a growing risk from essay mill websites, where candidates can buy off-the-shelf essays. These paid-for assignments are often written by students or tutors around the world who have specific subject knowledge and do this for a living. According to our research, a 2,500-word essay might cost a student £70 to £80. More worrying still, the student can specify the grade the essay should achieve, so that a tutor does not flag it as suspicious for being either ‘too good’ or ‘too complex’ in its content.
Furthermore, there is evidence that some students now openly discuss the use of essay mills within their peer groups, seeing them as a viable way of completing modular assessment. We were told that students who successfully got a bought assignment past their tutors would tell others and share the website they had used, so others could do the same.
One of the more nuanced issues with essay mill sites is that their content sits behind a paywall and therefore cannot be reached by plagiarism scanners, whose algorithms only check openly accessible web content. This makes it difficult for assessors and tutors to determine whether a student has written an assignment themselves, and it is often the reason students get away with submitting paid-for work. One area of interest for us in RM Studio is the forensic detection of an individual candidate’s narrative voice and style. However, such examination raises deeply embedded ethical constraints, along with concerns about the precision and accuracy of any evidence it might yield. It is not, after all, a scientific process like DNA evidence in criminal cases.
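To make the idea of narrative-voice detection more concrete, the sketch below shows one very simple stylometric technique: comparing the frequencies of common function words, which are largely independent of topic, between a candidate’s known writing and a newly submitted essay. This is purely illustrative and is not RM’s method; the file names are hypothetical, and real forensic stylometry uses far richer features and much more careful statistics.

```python
# Illustrative sketch only: a toy stylometric comparison, NOT RM's method.
# It shows the underlying idea of comparing writing-style profiles;
# real forensic analysis is far more sophisticated and still contested.
from collections import Counter
import math
import re

# Function words are largely topic-independent, so their frequencies act
# as a rough fingerprint of an author's habitual style.
FUNCTION_WORDS = [
    "the", "of", "and", "to", "a", "in", "that", "it", "is", "was",
    "for", "on", "with", "as", "but", "however", "therefore", "which",
]

def style_profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two style profiles (0.0 if undefined)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical files: a candidate's known coursework vs. a new submission.
known = style_profile(open("known_coursework.txt").read())
submitted = style_profile(open("submitted_essay.txt").read())

print(f"Style similarity: {cosine_similarity(known, submitted):.2f}")
# A low score would only ever prompt a human review; on its own it is
# never proof that someone else wrote the essay.
```

Even in a research setting, a signal like this would be one input among many; as noted above, the evidential weight of style analysis remains an open ethical and scientific question.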
Some particularly malevolent examples of exam fraud involve candidates working in organised groups with paid leaders, inside test centres or remotely, in which answers are shared live with test takers. This is recognised as fraudulent behaviour and is classed as organised crime, making it a matter for the police. In some cases, candidates are known to evade proctors by using plug-in devices to access unauthorised material, or even by using additional unauthorised devices or screens. This is much harder to prove and can require time-consuming investigation of proctored recordings by dedicated ethics committees.
In exams, it is realistic to suspect that some candidates are colluding. Now we can detect and prove it through our forensic data analysis tool: RM’s Exam Malpractice Service.
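RM’s Exam Malpractice Service itself is proprietary, so as a purely illustrative sketch of the general family of techniques, the example below shows one simple collusion signal discussed in the assessment research literature: counting identical wrong answers between two candidates on a multiple-choice paper. The response strings and answer key are hypothetical, and this is not a description of RM’s actual analysis.

```python
# Illustrative sketch only: one simple collusion signal from the research
# literature, NOT a description of RM's Exam Malpractice Service.
# Sharing many IDENTICAL WRONG answers is more suspicious than sharing
# correct ones, because correct answers are expected to match by design.

def shared_wrong_answers(a: str, b: str, key: str) -> tuple[int, int]:
    """Count (items both candidates got wrong, identical wrong answers)."""
    both_wrong = 0
    same_wrong = 0
    for ans_a, ans_b, correct in zip(a, b, key):
        if ans_a != correct and ans_b != correct:
            both_wrong += 1
            if ans_a == ans_b:
                same_wrong += 1
    return both_wrong, same_wrong

# Hypothetical data: one character per multiple-choice item.
answer_key = "BDACCABDBA"
candidate_1 = "BDACCABDCA"
candidate_2 = "BDBCCABDCA"

both, same = shared_wrong_answers(candidate_1, candidate_2, answer_key)
print(f"Both wrong on {both} items; identical wrong answers on {same}.")
# In practice, a pair would only be flagged when its match rate is
# statistically improbable across the whole cohort, and every flag
# would go to human review before any action is taken.
```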
Such detection is a step in the right direction, but the battle is far from won. Exam malpractice is an industry that continues to grow and change. To stay on top of it we need to keep innovating, and that relies heavily on understanding the reasons why people cheat.
Why do candidates cheat?
To answer this question, we must turn to the candidates and rely on their honesty. We know from interviews that during the pandemic, when all teaching moved to remote online platforms, students felt they were not learning as effectively or receiving as much support as they had when teaching was primarily face-to-face. This has been cited as a fundamental reason for an increase in candidate malpractice: students say they simply didn’t have as much hands-on teaching, which resulted in reduced confidence, understanding and passion for their subjects. There was also a lot of student frustration with the lack of technical competency among teachers and instructors at the beginning of the pandemic, when courses were rushed online. University graduates from this period have made public complaints about the lack of access they were given to online support and resources, and about teachers who were not technically savvy struggling to do their jobs properly. We can see this markedly in the generation gap between tech-savvy Gen Z students and teachers who were raised in the era before email and the internet.
Further opportunities to evade exam rules arose once exams were taken at home, online, often over a 24-hour window, which made committing malpractice easier. What’s more, students weren’t necessarily doing it on purpose. They might share answers to questions, or even share written content, on what were supposed to be individually taken exams. In some cases this was done innocently: students treated open-book exams as an opportunity to work in study groups and collaborate by sharing resources, revision notes and other insights. Lecturers and other examiners we interviewed felt that this was still misconduct and wanted to penalise candidates who gave similar answers to questions; they explained that students had received clear instructions about expected conduct and assessment regulations. However, without proof of misconduct, this proved complicated. Institutions need evidence to present to candidates if they plan to withdraw results or take students to tribunals. These are some of the reasons many universities are now opting to return to restricted two- to three-hour exams, since the removal of Covid rules by the UK government.
We have also seen that where exams are taken remotely, or within a fixed period of time, candidates have much more freedom and access to technology. As explored in our previous blog, there are an increasing number of cases where candidates take advantage of scenarios such as remote invigilation to engage in malpractice, including collusion and proxy exam-sitting.
The question stands: how do we keep tests both authentic and secure? This area of work will require input from a variety of subject experts, which is why we need to share information in a transparent way. The exam security space has been described as an arms race; we need a collective defence force to tackle it.