In today’s rapidly evolving education landscape, digital assessments are becoming a key part of the learning process worldwide. Used across schools, universities, training programmes and qualifications, they offer unmatched flexibility and scalability. That convenience, however, raises a concern: how do we protect the integrity of assessments in an online environment where malpractice can be harder to detect? The role of Artificial Intelligence (AI) here is contested, as tools like ChatGPT become more widely adopted and make it harder to distinguish an individual’s original work. But can we use AI as a powerful guardian of academic integrity? Let’s find out.
-
Biometric authentication: The first line of defence
One of the ways AI tools can safeguard digital assessments is through the use of biometric authentication. AI-driven facial recognition, voice recognition and fingerprint scanning can confirm that the person taking the test is in fact the authorised individual. This helps to prevent impersonation and verifies the test-taker’s identity with a high level of accuracy.
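In practice, a trained recognition model converts a face, voice or fingerprint capture into a numeric embedding, and verification compares the live embedding against the one recorded at enrolment. The model itself is beyond a short sketch, but the comparison step might look like the following — a minimal illustration, where the embeddings, the cosine-similarity measure and the 0.9 threshold are all illustrative assumptions rather than any specific vendor’s system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_identity(enrolled_embedding, live_embedding, threshold=0.9):
    """Accept the candidate if the live capture is close enough to enrolment.
    The threshold is a hypothetical tuning parameter."""
    return cosine_similarity(enrolled_embedding, live_embedding) >= threshold
```

A real deployment would tune the threshold to balance false rejections of genuine candidates against false acceptances of impersonators.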
-
Behavioural analysis: Detecting anomalies
AI tools can analyse user behaviour during an assessment. This includes monitoring mouse movements, keystrokes, eye tracking, and patterns of interaction with the online assessment platform. Any irregular behaviour can trigger alerts, indicating potential cheating or misconduct. For example, if a student looks away from the screen for an extended period, it may suggest they are consulting unauthorised materials; the flagged interval can then be reviewed more closely by a human.
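The flagging step described above can be sketched very simply: given a log of gaze-away intervals produced by an eye-tracking model, surface any interval longer than a configured limit for human review. The event structure and the ten-second threshold here are illustrative assumptions, not a real proctoring API:

```python
from dataclasses import dataclass

@dataclass
class GazeEvent:
    """A period during which the candidate looked away from the screen
    (times in seconds since the assessment started) — hypothetical format."""
    start_s: float
    end_s: float

def flag_long_gazes(events, max_away_s=10.0):
    """Return events where the candidate looked away longer than max_away_s."""
    return [e for e in events if (e.end_s - e.start_s) > max_away_s]

events = [GazeEvent(12.0, 14.5), GazeEvent(80.0, 130.0)]
flagged = flag_long_gazes(events)
# Only the 50-second interval is flagged, for a reviewer to inspect.
```

The key design point is that the tool flags rather than judges: a long glance away is evidence for a human reviewer, not proof of misconduct.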
-
Anti-plagiarism measures: Guarding against unoriginal work
Plagiarism remains a significant concern in the digital assessment space. AI-driven plagiarism detection tools can scan submitted answers and compare them to a vast database of academic content. They can highlight sections that closely match existing material, helping educators and providers to identify and address plagiarism more effectively.
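At their core, such tools measure how much overlapping phrasing two texts share. One simple measure — a sketch only, far cruder than commercial detectors, which also handle paraphrase and cross-language matching — is the Jaccard similarity of word n-grams:

```python
def ngrams(text, n=3):
    """The set of word trigrams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b, n=3):
    """Fraction of shared n-grams: 1.0 for identical texts, 0.0 for no overlap."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```

A submission scoring above some tuned threshold against any document in the reference database would be highlighted for an educator to examine.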
-
Randomisation of questions: Preventing collaboration
AI tools can randomise questions and answer choices, making it more difficult for students to share answers. Each candidate will receive a unique version of the assessment, reducing the effectiveness and likelihood of collaborative cheating efforts.
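Randomisation itself needs no AI, and the mechanism is worth seeing: seed a random generator with the candidate’s identifier so each candidate gets a stable but unique ordering of questions and options. A minimal sketch, assuming a hypothetical question format of (prompt, options) pairs:

```python
import random

def randomise_paper(candidate_id, questions):
    """Return a per-candidate ordering of questions and answer options.
    Seeding with the candidate ID makes the shuffle deterministic, so the
    same candidate always sees the same paper."""
    rng = random.Random(candidate_id)
    paper = []
    for prompt, options in questions:
        opts = options[:]          # copy so the master paper is untouched
        rng.shuffle(opts)
        paper.append((prompt, opts))
    rng.shuffle(paper)
    return paper
```

Determinism matters for marking and appeals: the exact paper a candidate saw can be regenerated from their ID alone.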
-
Adaptive testing: Tailoring the challenge
AI can create adaptive tests that adjust the difficulty of questions based on the test-taker’s performance. This keeps each test appropriately challenging for the individual, and because candidates rarely see the same sequence of questions, it also reduces the incentive to share answers.
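The simplest form of this adjustment is a staircase rule: step the difficulty up after a correct answer and down after a miss, clamped to the available range. Real adaptive engines use richer statistical models (such as item response theory), so treat this as an illustrative sketch with assumed difficulty levels 1–10:

```python
def next_difficulty(current, was_correct, step=1, lo=1, hi=10):
    """One-step staircase: harder after a correct answer, easier after a miss,
    clamped to the [lo, hi] difficulty range (all values are assumptions)."""
    proposed = current + step if was_correct else current - step
    return max(lo, min(hi, proposed))
```

After each response, the next question is drawn from the pool at the returned difficulty level, so the test converges towards questions the candidate finds genuinely challenging.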
-
Real-time monitoring: Prompt intervention
Incorporating AI into digital assessment enables real-time monitoring. If suspicious behaviour is detected, such as multiple logins from different locations, AI tools can issue alerts to proctors or educators, allowing for prompt intervention.
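The “multiple logins” check mentioned above reduces to finding sessions for the same candidate that overlap in time but originate from different addresses. A minimal sketch, assuming a hypothetical session log of (candidate, IP, start, end) tuples with times in seconds:

```python
def concurrent_location_alert(sessions):
    """Flag candidates with time-overlapping sessions from different IPs.
    The session tuple format is an illustrative assumption."""
    by_candidate = {}
    for cid, ip, start_s, end_s in sessions:
        by_candidate.setdefault(cid, []).append((ip, start_s, end_s))

    flagged = set()
    for cid, sess in by_candidate.items():
        for i in range(len(sess)):
            for j in range(i + 1, len(sess)):
                (ip1, s1, e1), (ip2, s2, e2) = sess[i], sess[j]
                # Different IPs and the time windows intersect.
                if ip1 != ip2 and s1 < e2 and s2 < e1:
                    flagged.add(cid)
    return flagged
```

In a live system this check would run as sessions open, so an alert reaches the proctor while the assessment is still in progress rather than after the fact.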
-
Data analytics: Learning from patterns
AI can analyse historical data to identify patterns of cheating or suspicious behaviour. This data-driven approach to assessment empowers educators and institutions to take preventative measures based on evidence and insights.
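One concrete example of learning from historical data is outlier detection on completion times: a candidate finishing dramatically faster than the historical norm may warrant a closer look. A simple z-score sketch — real analytics pipelines would use far richer features than timing alone:

```python
from statistics import mean, stdev

def unusually_fast(times_s, z_threshold=-2.0):
    """Return completion times far below the historical mean.
    The -2.0 standard-deviation threshold is an illustrative assumption."""
    mu, sigma = mean(times_s), stdev(times_s)
    return [t for t in times_s if sigma and (t - mu) / sigma < z_threshold]
```

As with behavioural monitoring, the output is a prompt for investigation, not a verdict: a fast finish can equally reflect a well-prepared candidate.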
-
Ensuring data security: Encryption and Blockchain
To safeguard assessment data, AI can facilitate encryption, ensuring that sensitive information remains confidential. Additionally, blockchain technology can be used to securely and immutably record and verify assessment results, assuring their authenticity.
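The immutability idea behind the blockchain approach can be shown with a simple hash chain: each recorded result includes the hash of the previous record, so altering any earlier result breaks every hash that follows. This is an illustrative sketch of the principle, not a production ledger (which would add distribution, consensus and signatures):

```python
import hashlib
import json

def append_result(chain, result):
    """Append a result record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(result, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"prev": prev_hash, "result": result, "hash": block_hash})
    return chain

def verify_chain(chain):
    """Recompute every hash; any tampering with a stored result fails the check."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps(block["result"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True
```

Anyone holding the chain can re-verify it at any time, which is what gives recorded results their authenticity guarantee.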
-
Upholding the security of assessments with AI
In the digital age, protecting the integrity of online assessments is an evolving challenge. The integration of AI capabilities makes for versatile and effective tools that can safeguard the fairness and reliability of digital assessments. AI will likely play an increasingly vital role in ensuring assessments remain a reliable measure of knowledge and skills, upholding the integrity of education in a digital world. However, as we look to implement more AI solutions across the industry, we must recognise privacy and ethical considerations, and acknowledge the importance of using these tools in conjunction with good pedagogical practice and considered assessment design.