In this relatively short period of time, we have seen many professional qualification providers, higher education institutions and chartered institutes switch from solely offering test centre assessments to employing tech firms with proctoring capabilities. The candidate experience has shifted radically, but we are still waiting for relevant research to tell us whether this has been a good or a bad thing for assessment validity. It’s certainly true that we cannot make a direct comparison between an assessment taken in a centre and that same assessment taken at home under the gaze of a remote invigilator: it’s too complex. The test itself will remain the same and the outcome should fall within the same grade boundary, but the candidate’s experience will undoubtedly be incomparable.
We would be wise, as an industry, to conduct some controlled studies in this area, perhaps by finding some obliging candidates willing to take a test twice in two different contexts. However, this is fraught with limitations because ultimately we would never be able to replicate the exact conditions twice, and that inability to control conditions is itself a core threat to exam validity and reliability. We also know that the ethical implications of observing people, especially at home, are highly contentious. Perhaps some qualitative research would be a good place to start? It is important that we begin examining what is, currently, anecdotal evidence.
One of the perks of online proctoring is that candidates without easy physical access to a test centre can take an exam when they want, where they want and without the need for transportation. There are huge benefits for access and inclusion when candidates aren’t restricted to scheduled times and physical exam centres. It’s also worth considering that exam anxiety (a huge threat to validity) could be reduced if a candidate is able to choose their exam location.
In other cases, the software itself could bring its own problems: if it is slow to respond, for example, and adds additional time to the exam, or if it is sensitive to insignificant sounds such as candidates coughing or sneezing. In these instances, candidates’ anxiety levels could be heightened. This might be equivalent to being walked out of an exam hall mid-exam due to a panic attack, a nosebleed or any other personal crisis: you would feel under the spotlight in an already stressful moment.
The more cynical rationale for remote invigilation is to reduce the opportunity for exam malpractice. We know that awarding organisations are concerned about increases in suspicious activity during exams taken from candidates’ own homes, using their own devices. Removing the option to sit assessments remotely would penalise innocent candidates who are not breaching any rules, but the concern is strong enough that some awarding bodies are considering it. One solution being considered in HE institutions to eliminate exam collusion amongst students is the 24-hour ‘open book’ exam. A less popular option was the revival of ‘on campus’ exam sittings, but students protested against such a move.
There doesn’t seem to be any doubt that as the adoption of digital assessment increases, the use of online proctoring will increase too, and with it the number of choices candidates have. I believe we will begin to see proctoring companies improve their offering in ways that benefit the candidate experience much more, especially as they become better informed about the potential ethical and emotional impacts.
Ultimately, though, proctoring alone cannot detect all exam malpractice, which prompts us to ask what else is needed. Proctoring certainly has a role in preventing impersonation, in timekeeping and in other general invigilation activities, but it may not be enough of a deterrent for those candidates who opt for fraudulent methods such as collusion. To maintain the integrity of the examination, and to defend against examination fraud, scripts can be run through a malpractice detection service, like the one we have developed at RM.