<img src=" https://secure.leadforensics.com/31510.png " style="display:none;" alt="Lead Forensics Pixel">



13th August 2020

Using Peer Assessment to deliver impactful online learning experiences


By Peter Collison – Head of Formative Assessment and School Platforms at RM Results


"Tell me and I forget, teach me and I may remember, involve me and I understand”.

This ancient Chinese proverb, dating back to the third century BC and later adopted by Benjamin Franklin, remains one of the truest principles of learning. While we see this doctrine in practice across many areas of the education system, there is one major element where it is often notably absent: assessment.

During my 17-year career in the education sector, working with schools, awarding organisations and higher education institutions, I have watched assessment shift from a focus on summative 'assessment of learning' at the end of a course or key stage towards formative 'assessment for learning', which helps both teachers and learners understand where to direct their efforts for continued progress and the best outcomes.

One excellent example of how assessment for learning can be applied in the higher education sector was explained to me by Professor Scott Bartholomew. During his time as professor of design technology at Purdue University, he concluded that students did not take an active enough role in the assessment process and, as a result, often failed to truly understand what constituted a ‘good’ piece of work. He went on to consider how assessment could be turned into a learning experience in itself, with students invited to participate in a peer review process.

Peer review has been a staple of formative assessment for years. The problem is that, in its traditional format – where students swap work with a partner and review a single piece in isolation against a set of marking criteria – its effectiveness as a tool for learning diminishes. When the mark scheme becomes the focal point around which judgements are made, we lose some clarity about what ‘good’ looks like and why.

To resolve this problem, Professor Bartholomew began exploring the application of Adaptive Comparative Judgement (ACJ) to peer-to-peer assessment. ACJ is based on Thurstone’s Law of Comparative Judgement, which holds that humans make more reliable paired, comparative judgements than absolute ones. Several academic studies, including this one conducted by the Irish Technology Education Research Group and the University of Limerick, have demonstrated how comparative judgement can enhance students’ learning by honing their ability to identify the criteria for success.

The concept of comparative judgement is perhaps best explained with a visit to the optician. During an eye test, the optometrist does not present the patient with twenty different lenses one at a time and ask them to rate each in isolation. Instead, they repeatedly pair lenses together and ask the patient to decide which of the two is better until, after multiple rounds of comparison, they arrive at a highly accurate prescription. The same approach can be taken to assessment using ACJ.
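For readers curious about the mechanics, the sketch below shows one generic way a set of pairwise “which is better?” decisions can be turned into a rank order, using a simple Bradley–Terry-style model. It is purely illustrative – the item names and judgements are invented, and it is not a description of RM Compare’s actual algorithm.

```python
# A toy illustration of comparative judgement: turning pairwise decisions
# into a rank order with a simple Bradley-Terry-style model. Generic
# sketch with invented data, not RM Compare's actual algorithm.
import math

items = ["A", "B", "C", "D"]

# Each tuple records one judgement: (winner, loser) of a paired comparison.
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"),
              ("C", "D"), ("B", "D"), ("A", "C"), ("B", "C")]

# Every piece of work starts with the same quality estimate.
scores = {item: 0.0 for item in items}

# Gradient ascent on the Bradley-Terry log-likelihood, where
# P(i beats j) = 1 / (1 + exp(score_j - score_i)).
LEARNING_RATE = 0.1
for _ in range(200):
    for winner, loser in judgements:
        p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
        scores[winner] += LEARNING_RATE * (1.0 - p_win)
        scores[loser] -= LEARNING_RATE * (1.0 - p_win)

# Higher score = judged better across all comparisons.
for item in sorted(items, key=scores.get, reverse=True):
    print(f"{item}: {scores[item]:+.2f}")
```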

Professor Bartholomew conducted a study with a cohort of 550 first-year design and technology students, using RM Compare, assessment software which incorporates ACJ. The study was designed so that half of the students in the group would use the ACJ technology to view and evaluate a set of work from a previous cohort of students, who worked on a different design brief. The other half, the control group, participated in traditional one-pair peer assessment and exchanged feedback, with all other elements of the course remaining unchanged.

The ACJ software repeatedly presented pairs of work to the students and simply asked which of the two pieces better met the brief. Across multiple rounds of comparison, this exposed students to a considerably larger volume of work and forced them to home in on the particular qualities that make one piece of work ‘better’ than another, and why. This process took place before the students submitted their own design portfolios to be assessed by teachers.
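The ‘adaptive’ element is worth dwelling on: rather than pairing pieces of work at random, ACJ engines tend to select pairs whose current estimates are close together, since near-matched pairs yield the most informative judgements. The snippet below is a hypothetical sketch of that principle, continuing the assumptions above – real pairing rules, including RM Compare’s, are more sophisticated.

```python
# A hypothetical sketch of adaptive pairing: rank items by their current
# score estimates and pair neighbours, so each new judgement compares
# closely matched pieces of work. Illustrative only.

def next_pairs(scores: dict[str, float]) -> list[tuple[str, str]]:
    """Pair items whose current quality estimates are most similar."""
    ranked = sorted(scores, key=scores.get)  # weakest to strongest
    return [(ranked[i], ranked[i + 1]) for i in range(0, len(ranked) - 1, 2)]

# Example: four portfolios with scores from earlier judgement rounds.
print(next_pairs({"A": 1.4, "B": 0.9, "C": -0.3, "D": -2.0}))
# -> [('D', 'C'), ('B', 'A')]
```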

When the final assessment of the students’ own work took place, the findings were remarkable. The study showed that the ACJ process had a direct, positive impact on learning outcomes: seven of the ten highest performers in the entire cohort had been in the ACJ group, and the average ACJ-group student outperformed 66% of their peers in the control group.

The study also revealed that ACJ didn’t just benefit the highest or lowest performers. Students across the ability range benefitted, because every member of the ACJ group had been exposed to the same breadth of work and had collaborated in the evaluation process.

Feedback from students was that they would like to use ACJ for peer review at the beginning of each assignment, because it allows them to quickly identify what constitutes a ‘good’ or ‘exemplary’ piece of work and to calibrate their own approach and sense of where their work should be. They can then make any necessary changes to how they approach their work as the course progresses, before the final assessment. This kind of corrective action – students identifying their own areas for development and improving on them – is something that traditional methods of formative assessment have often failed to deliver quite as successfully.

It is clear to me from this study that ACJ software like RM Compare can add real value for students in a peer assessment context, particularly at a time when online tools that support remote learning and collaboration can improve students’ experience of assessment and feedback. This method of learning through evaluation has been shown to help students diagnose and understand their own learning goals and areas for development before end-point summative assessments.

Download this case study to find out more about how Purdue University significantly improved student attainment using RM Compare for peer-to-peer assessment.

Purdue does not endorse any specific technology.


