How will various language translation tools hold up against live human translators?
Gwyneth Toolan’s live workshop on April 17th, 2024, will examine the differences between AI translation and HI (human intelligence) translation.
Innovation Product Manager Gwyneth Toolan is running a workshop at the Cambridge Assessment Network on April 17th. Drawing on experience gained through proof-of-concept work in language translation within the digital product training space, she will run a live experiment comparing AI translation tools with human translators from RM’s offices in Abingdon. RM Assessment has been trusted by Cambridge University Press and Assessment since 2004, and currently delivers on-screen marking through RM Assessor³, the world's most widely used and innovative high-stakes e-marking platform.
How is language translation used within RM Assessment’s digital products?
Gwyneth has been leading strategy on product training in the Assessment Division, covering product content and user-help knowledge. Whenever our digital products update, the user interface changes and we must alert our customers; this is why we publish release notes. In our assessment world, customers need direction from us, so our product knowledge and user-help are essential. In most apps or websites, users are alerted to changes through in-app tours, pop-ups and notifications. At RM, we are building our digital user-help content and training department. We know that AI language translation is not yet suitable for our product documentation, because our user-help guides include screenshots and rely on highly technical language that requires user testing. In the short term, we are therefore looking to human translation as our central solution, because AI is not yet fit for purpose for the secure onboarding processes most of our users require. In the long term, as AI tools continue to iterate, we intend to reassess their suitability. In the meantime, while the live system runs, we are migrating all existing written content onto an authoring tool.
Why do we need a central authoring tool?
Language matters, and our written product documentation underpins our digital products: RM Assessor³ and RM Assessment Master. The authoring tool will therefore act as our central product knowledge point for all language related to our assessment products, including JSON files, user-help, video tutorials and any exported product documentation. When we update our products, the authoring tool updates simultaneously, because all content is authored from that central point. When we need to change to a new language, doing so anywhere other than the central authoring tool risks inefficient duplication, losses and inaccuracies. Not only will our authoring tool ensure we update, translate and share all content from one central location, it will futureproof all long-term digital product activity.
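To illustrate the single-source-of-truth idea described above, here is a minimal sketch of how a central store of product strings might work. The string IDs, JSON structure and translations below are entirely hypothetical, not RM's actual schema; they simply show how keeping every language for a string in one place allows updates and translations to flow from a single point, with a fallback so an untranslated entry never breaks the user interface.

```python
import json

# Hypothetical central localization store: one source of truth for all
# product strings, keyed by string ID and then by language code.
# (Illustrative content only; not RM's real data.)
CONTENT = json.loads("""
{
  "marking.submit_button": {
    "en": "Submit marks",
    "fr": "Soumettre les notes"
  },
  "marking.help_link": {
    "en": "View the user-help guide",
    "fr": "Consulter le guide d'aide"
  }
}
""")

def localized(string_id: str, lang: str, default: str = "en") -> str:
    """Look up a string in the central store, falling back to the
    default language if no translation exists yet."""
    entry = CONTENT[string_id]
    return entry.get(lang, entry[default])

print(localized("marking.submit_button", "fr"))  # Soumettre les notes
print(localized("marking.help_link", "de"))      # no German yet: falls back to English
```

Because every surface (UI, user-help, exports) reads from the same store, adding a new language means adding one key per string in one file, rather than hunting for duplicated text across documents.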
Why is language so important?
Language is central to user experience. Accreditors, educators and learners all access different aspects of our products, so consistency is essential: not only to avoid confusion, duplication and inefficiency, but also to ensure fairness and test validity. Learners must have the same experience no matter who they are, where they live or when they take the test.
What’s the conference workshop about?
Live experiment: using AI to translate test items
How will various language translation tools hold up against live human translators?
In a post-pandemic world and an AI-obsessed landscape, how might we make test items multilingual in order to expand access to a global candidature while retaining test validity?
In this live experiment, we will examine, on a small scale, the differences between AI translation and HI (human intelligence) translation. Drawing on her experience with digital content and user help, where platforms run updates on regular release cycles and things are constantly changing, Gwyneth will frame the problem of language translation in the fast-paced SaaS world. We will then experiment with digital item writing, another challenging context that requires regular updates, and delegates can take part in the experiment themselves in this interactive workshop.
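One very simple way to start comparing an AI translation against a human reference, of the kind such an experiment might touch on, is a rough textual similarity score. The sketch below is purely illustrative and is not the workshop's method: real evaluation of translated test items would rely on human judgement and established metrics, not character-level matching.

```python
import difflib

def similarity(candidate: str, reference: str) -> float:
    """Crude character-level similarity between two strings (0.0 to 1.0).
    Illustrative only; a high score does not guarantee a valid test item."""
    return difflib.SequenceMatcher(None, candidate, reference).ratio()

# Hypothetical example: a human reference translation vs an AI attempt.
human = "Select the response that best completes the sentence."
ai = "Choose the answer that best completes the sentence."

print(f"similarity: {similarity(ai, human):.2f}")
```

A metric like this can flag where two translations diverge, but it says nothing about whether the divergence changes the item's difficulty or meaning, which is exactly why human intelligence remains in the loop.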
Find out more about the conference: https://www.cambridgeassessment.org.uk/events/assessment-horizons-and-principles-conference-2024/
About Gwyneth Toolan, Innovation Product Manager, RM Assessment
Gwyneth started her career in English teaching, working internationally before taking on a variety of roles in UK state schools, including Head of Sixth Form. After leaving teaching, Gwyneth moved into the assessment world at Cambridge International, embarking on the Postgraduate Advanced Certificate in Educational Studies: Educational Assessment at Cambridge and managing syllabuses including Sociology, Psychology and Development Studies. This breadth of educational experience led her to work in innovation and training at RM, where she now leads product strategy for technical content, customer training and onboarding. Last year, Gwyneth led RM’s Assessment Malpractice service in RM’s innovation department, which went on to win an e-AA award for ‘most innovative use of technology in assessment’. Gwyneth is interested in the practical implications of AI, its limitations, and the need for all technological advancement to be underpinned by human intelligence, user testing, ethics and empathy.