How are organisations thinking about the very nature of assessment? This was a thread of discussion that emerged at various points throughout Wednesday’s panel session. In the UK, we tend to talk about the role of GCSEs, A-Levels and T-Levels when considering the purpose of assessment. This is largely because the standardisation and accountability they provide have a huge impact on the way the curriculum is delivered and how students are prepared for these one-time events. When we then think about what the future of assessment in the UK might look like, we are predisposed to focus on how we can evolve these end-of-course assessments.
However, Ian Castledine explained there are other countries around the world that are taking a different approach to shaping the future of assessment. There are examples of other jurisdictions investing in national or regional projects that focus on different parts of the learning journey.
He said: “There is an emerging theme in Australia, where state and federal governments are investing in assessment within the classroom, to provide some consistent tools to teachers for assessing student ability, enabling earlier interventions using a range of centrally developed assessment items to give good quality calibrated measures. And, of course, this is a model that others such as the Welsh government have also invested in.”
Ian went on to note that this type of large-scale formative project is leading to more personalisation of assessment across the learning journey, through techniques such as adaptive testing. We have also seen the emergence of new forms of assessment in the professional qualifications sector, where a growing focus on preparing work-ready individuals is challenging traditional assessment models. What would the equivalent be for secondary schools? Is there a way that a blend of different assessment models could come together to give a fuller and more relevant way of assessing an individual's abilities?
What can the UK learn from different countries around the world? According to Ian, and RM’s research project with the International Association for Educational Assessment (IAEA) on the global digital assessment landscape, we can certainly use our knowledge of other jurisdictions to inform our thinking about how to drive change in the UK.
During his keynote at The Westminster Education Forum conference, Ian Castledine highlighted some key findings from the research that offer a view of the differing perceptions globally. These included:
Student experience is important: This is reflected across sectors and geographies and there is a perception that digital assessment can lead to an improved candidate experience. However, this isn’t universally welcomed, particularly where paper exams for subjects that involve equations or drawing have been replicated on-screen.
Digital assessment has a role to play in ensuring fairness: Giving more students fairer and more consistent access to assessment is welcomed, with different tooling able to provide a more level playing field. But there is concern about introducing new inequalities: digital poverty can leave individuals unfairly disadvantaged simply because they lack access to technology.
Key learning: “What are the edge cases that you need to think through; how can you engage stakeholders across the assessment and education community, and make sure their needs and views are heard and addressed?”
Another topic that featured in discussions at Wednesday’s conference was the importance of an assessment system that uses the ‘right’ mode of assessment. Ian Castledine built on this notion by considering international experiences of the validity of different assessment modes. Evidence from organisations around the world shows that it is important for students’ skills and abilities to be assessed in the most appropriate way, via a blend of assessment styles.
He explained: “That is not to say that necessary checks and balances shouldn't be put in place to give confidence about that blended model of assessment. On the contrary, we see in global assessment providers some incredibly stringent models to ensure the integrity of all assessments. For example, it’s natural to be concerned about how to determine whether submitted coursework is in fact the students' own work, and there are some well-established principles for checking internally assessed grades, and standardising across schools and markers, with external sampling and moderation.
But we're also seeing success with organisations using the breadth of assessment data and evidence that they have, to link together pieces of information to identify issues. If a candidate does well in a piece of coursework but hasn't got the evidence of such strong ability in the rest of their assessed exam work, further checks could reveal whether there is an issue at play here - or indeed whether the student is just a poor performer under exam conditions. Our challenge in the UK is to consider how to build the credibility of different assessment models as we look at what a future assessment system could bring - that moves beyond the public perception that anything other than a handwritten exam is somehow dumbing down.”
Lastly, it is worth considering how different organisations are blending different modes of assessment, with appropriate levels of rigour.
Following the conference, the education media reported the views of other high-profile stakeholders in the English school assessment system. While these stakeholders were broadly positive about the prospect of digital assessment, those hoping for its quick adoption are unlikely to have those dreams realised soon.
Concerns include widespread access to technology and how well existing exam spaces might cope with candidates using laptops; the call is for evolution rather than the wholesale junking of paper-based assessment.
RM’s experience with organisations around the world suggests that the motivation for moving to digital assessment can be a desire to provide more equitable outcomes for otherwise marginalised groups. Even so, careful testing and evaluation of new assessment mechanisms are vital, though the challenges involved are by no means insurmountable.