

Assessment Blog

9th October 2019

The human vs. machine debate in assessment marking

RM Results

It is fascinating to observe how media coverage and public opinion of education technology have evolved over the years. This recent article on the tech website Motherboard is a great example of how the discourse has moved on.

A decade ago, such investigative long-form articles might generally have taken a more alarmist angle, along the lines of “Do we really want robots teaching our children?” In this recent article, it is refreshing to see that the framing is more measured and nuanced – and despite the clear criticisms and concerns levelled at assessment technology, the questions this piece raises are worthy of consideration and debate.

The fact is, digital assessment technologies like the essay marking algorithms examined in this article are here to stay. We have passed the tipping point at which policymakers recognise them as a necessity. As technology continues to evolve and improve in accuracy, each successive generation of both teachers and learners is increasingly trusting of technology in everyday decisions, even life-changing ones.

As the article outlines, now that edtech has become mainstream, there are more complicated considerations, which scientists and ethicists are working to resolve.

Fundamentally, at what point will we start to trust artificial intelligence more than human judgement when it comes to evaluating and grading complex or ‘subjective’ exam responses? The potential for bias will always exist in human judgement, and understanding and controlling the extent to which an algorithm might exhibit its own bias is a major priority for us in the industry.

Secondly, we must continue to review and agree where the balance of power lies between automation and the human marker. Most of us in the edtech industry would agree that the technology will always be intended to be the slave, never the master.

Assessment technology continues to develop at pace. Many of the algorithms currently in use, including those examined in the Motherboard article, are continuously being improved to offer better accuracy, fairness and standardisation.

As long as there is a subjective element to assessment, there will never be a “perfect” solution. When it comes to the grading of high-stakes tests, the best possible approach is as much a question of policy as it is of technological advancement.

Read our whitepaper on how artificial intelligence will change assessment in the future, including a five-step model of how the journey to increased automation in marking might look.

