Assessment Blog | RM

Securing the Future of Digital Assessments: AI's Role in Integrity

Written by Ed Williamson | Dec 12, 2023 4:28:06 PM

On December 7th we co-hosted a webinar with Talview, a proctoring partner of ours, to discuss malpractice in digital assessment. 

Hosted by: 

  • Ian Castledine, Head of Propositions for Assessment, RM 

  • Harjoth Singh, Customer Success Director, Talview 

The key themes of the webinar were actionable strategies to future-proof assessment programs and the ways AI can be leveraged to prevent instances of malpractice.

There is also a Q&A showcasing some excellent audience questions and answers from RM's expert Ian Castledine and Talview's expert Rojen Roy.

Watch the webinar

Here is the transcript from the webinar if you would rather read it. We have also answered any questions that were not covered in the webinar. See the Q&A section. 

Would you like to discuss RM's capabilities further? Get in touch (scroll down for the rest of the blog post): 

Transcription:

0:59-1:49 

The Talview and RM relationship 

Harjoth: 

 Talview and RM have been working together for around 3 years to integrate the Talview proctoring system on top of RM’s rich assessment platform to create an integrated solution. 

1:50-3:04 

How has proctoring changed over the years? 

Although proctoring has been around in many forms for years, online proctoring started in around 2020 and has grown significantly since then. Proctoring has opened up opportunities to move beyond test centres and look more flexibly at how assessments are delivered, with confidence that, regardless of where an assessment takes place, it is being delivered with a good level of integrity. 

6:01 – 09:20 

Ian: 

Working with Talview has helped us to evolve our offer and provide more flexibility to our clients. A key part of that has been a shift away from reliance on bricks-and-mortar test centres towards much more flexibility in assessment delivery, whilst ensuring the integrity of assessments.  

A key part of our strategy has been the digital assessment journey. Organizations around the world are on their own journey to make sure that the assessments and qualifications they're providing are fit for purpose in the 21st century, and are relevant for the candidates, employers and industries that they're working within. Adopting digital assessment technology can really help with that, and we built this model a couple of years ago to look at the strategies organizations are taking, helping them unlock the transformation they seek towards much more agile assessment and learning practices.  

A lot of the organizations we have worked with are still very focused on paper, so looking at the use of digital assessment can help them to improve the efficiency and security of their assessments.  

Organizations might have started adopting digital. Some may be embedding it throughout their assessments and might be at the point where the use of digital is becoming much more prevalent. They may start with certificate-level qualifications and then progress through to professional and advanced levels. What we're finding is that as organizations embed digital, it unlocks their ability to drive transformation. 

Through all of that - integrity of assessment is just critical. And regardless of the sectors that our clients are working in, it's imperative that their qualifications can be trusted regardless of the mode of assessment. Individuals, employers and industry professionals - they've got to have confidence in the assessment process and in the outcomes.  

So as technology becomes more embedded throughout the assessment process, as we start to harness its power and the power of artificial intelligence, we've got to continue to find ways of maintaining integrity and using the power of technology to ensure assessments are delivered with integrity as well.  

09:20 - 10:49 

How we're seeing AI used throughout the assessment journey to improve assessment integrity.  

We're seeing a big theme at the moment in the use of generative AI, as I'm sure all of you will be looking at in one form or another, particularly to support the assessment creation process. As we investigate the efficiencies that this technology can bring to assessment authoring, fundamentally it can enable organizations to create larger and more secure item banks. That unlocks new capabilities, such as the ability to adopt adaptive testing, which can immediately introduce new ways of building additional integrity into the assessment process.  

We're also continuing to look at things like plagiarism detection, which is clearly a longstanding technology, but is now being further empowered through the use of AI to detect incidents of plagiarism on coursework and on digital assessments as well.  

We're seeing massive use of AI in assessment delivery. The introduction of remote proctoring has opened up lots of new flexibility for assessment providers, but also the need to evolve security processes so that it's better aligned to suit those new modes of working.  

10:56 – 14:36 

Innovation in malpractice detection with AI 

RM, as well as working with Talview on remote proctoring, has also been thinking about other ways in which AI can support the assessment delivery process. Over the last year or two we've been working with a couple of clients using AI to detect instances of collusion by using pairwise comparison, along with lots of language processing techniques to detect suspicious practices that can be then investigated by assessment organizations.  
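As a rough, hypothetical illustration of the pairwise-comparison idea (RM's actual service combines it with much richer language processing; the function name, threshold and sample data here are invented):

```python
from difflib import SequenceMatcher
from itertools import combinations

def flag_collusion(responses: dict, threshold: float = 0.85) -> list:
    """Compare every pair of candidate responses and flag suspiciously
    similar pairs for human investigation."""
    flags = []
    for (id_a, text_a), (id_b, text_b) in combinations(responses.items(), 2):
        similarity = SequenceMatcher(None, text_a, text_b).ratio()
        if similarity >= threshold:
            flags.append((id_a, id_b, round(similarity, 2)))
    return flags

answers = {
    "cand-001": "Supply rises when price increases, shifting the curve outward.",
    "cand-002": "Supply rises when price increases, shifting the curve outward.",
    "cand-003": "Demand falls as prices climb, so quantity purchased drops.",
}
print(flag_collusion(answers))  # → [('cand-001', 'cand-002', 1.0)]
```

In practice the flagged pairs are not verdicts; as the transcript notes, they are starting points for investigation by the assessment organization.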

This work won RM an E-assessment Association Award earlier this year, particularly for how we've been able to work with organizations to identify instances of malpractice like this. Of course, that works well for detecting malpractice that has already occurred, but it's always preferable to prevent malpractice earlier in the assessment cycle as well.  

Harjoth:  

Getting on board the digital adoption journey is quite a significant step for a lot of organizations. It's also one where people are on the fence, weighing up the benefits, pros and cons. Getting on that journey is one of the critical factors for adopting technology.  

What are some of the most common points of concern when starting that digital adoption journey and taking that first step? From there it comes naturally: you move from step to step until your process is truly transformed, and the end state is great, but the first steps are always the hardest. 

Ian: 

Organizations have got to have confidence in their assessment processes. Individuals have got to have confidence. Candidates have got to have confidence. The digital assessment journey model that we put together was quite deliberate in recognizing that organizations need to progress, and they need to take small steps, careful steps to be able to make sure that the integrity of assessment is maintained throughout.  

The main learning point is not to try and do too much, too quickly. Make sure that what's done is done well. That will then help to build confidence. We find this leads to an acceleration of adoption as confidence increases. So organizations, particularly in the professional qualification space, that started out on a digital assessment journey expecting it to take many years have taken small, careful steps to start with, but have then found the pace of adoption increase rapidly as the confidence in, and benefits of, digital become clearer. 

16:23-20:10 

Harjoth: 

Today’s key challenges in education certification 

The primary challenge is protecting the integrity of exams and certifications. When conducting an examination as an awarding body, you need to be able to authenticate and certify that it was conducted with full integrity and that there are no gaps anywhere in the process, so you can put a seal of approval on it. Doing that in a test-centre world is straightforward, but being able to do it in a remote environment is something a lot of organizations want to get absolutely right, and that's where assessment integrity comes in as a key part.  

Geographical reach for dispersed learners.  

This is one of the key areas where, in a bricks-and-mortar setting, your reach is confined. You're asking both your learners and your certification providers to take a day off to come into a location. Being able to reach them in their own spaces, and to expand your reach further into some of those 170-odd countries, definitely becomes possible when you go digital. That's one of the areas where a lot of organizations and certification providers have focused over the last couple of years, spreading their reach with the help of remote proctoring and related technologies. 

Safeguarding data security and compliance. 

This is one of the top challenges, along with securing content IP. We spoke already about question banks and creating test content: that is the IP of every assessment organization.  

You cannot have your content moved out of its secure space. For a remote proctoring provider, being able to guarantee test content security, and that all safeguarding and compliance policies are in place, is essential before you can start that digital adoption journey. The content is yours and you want to ensure that it's safeguarded. 

Determining a strategy for going digital.  

Be prepared to take small steps and learn from your actions before moving ahead. 

Actionable insights.  

Everybody has data, everybody has insights; being able to use them in the way that we want to is the key. This is a challenge for many. Technology provides a lot of data, but being able to segregate it, act on it and use it to make improvements to the process is the key area for us all to focus on.  

RM and Talview’s solution mindset and approach mean that we are able to take you step by step on the digital adoption journey. We can look at the insights that come in with each batch of feedback, and improve and realign the process. When we go out for the next set, the process is enhanced and your experience as an organization working with us is better. 

20:11 - 21:57 

AI in proctoring.  

The first step of AI in proctoring is candidate verification. You want to ensure that the person taking the assessment is who they claim to be: 

  • verify the ID 

  • verify the face 

  • verify the environment 

The ID needs to be valid, authentic and belong to the person you expect. This is typically a passport, work authorization ID or student ID.  

A 360-degree scan of the environment ensures that you can guarantee the integrity of the examination space as part of the assessment process.  

Simply having a proctoring solution on top of your assessment engine will deter many test takers from even attempting any form of malpractice. It’s one of the biggest drivers for organizations to adopt the technology. Both organizations and learners can see the integrity with which the process is being conducted, and it deters behaviour that would render the test unfair.  

One of the biggest benefits is that everything is stored and shared back with the awarding bodies and organizations, available for any checks that need to take place after the event.  

22:00 – 25:10 

The three different types of offerings that can be plugged into the test engine with RM and Talview 

Fully automated AI enabled solution for remote proctoring.  

The primary objective of this is to deter impersonation or malpractice. You don’t need to be physically present to monitor the candidate; the monitoring happens through AI. There are multiple flags that the AI will detect, alerting the candidate if they engage in any of these activities.  

Live proctoring.  

This is for certifications or examinations which require the highest level of security: a human proctor monitors the candidate and guides them through the entire process:  

  • greet the candidate,  

  • go through the rules of the examination 

  • introduce the test process 

  • validate IDs 

  • get them into the examination environment 

  • monitor the candidate during their entire examination process 

  • support the candidate with any challenges during the process.  

Organizations that want to offer a top-quality experience alongside rigorous test security practices often prefer to use a live proctoring approach. You can have one-to-one proctoring, or multiple candidates per proctor. 

The solution that RM and Talview bring in helps the proctor. You can scale up from one-to-one ratios by using the AI flags. They act as a guiding layer to support activities and decision-making. 

Record and Review service. This is a fully automated AI-enabled solution which tracks and records the candidate's behaviour. Flags are captured, stored, and then reviewed by a professional proctor after the session. The proctor decides whether a flag was valid and whether the candidate did indeed commit malpractice. So you have professional proctors, guided by instructions and guidelines, who take care of the entire process.  

25:11-28:15 

Why choose the partnership model rather than using individual providers? 

Partnerships bring multiple layers of protection for content and exam integrity. We spoke earlier about the test engine having a lot of proctoring capabilities, such as plagiarism detection, built in, which look at on-screen activities and give the ability to monitor the candidate during the entire test process. 

It’s completely seamless for the candidate. They get a unified experience across multiple languages. 

One of the most important things for us is making the system accessible and taking care of accessibility requests from users as well. This brings an overarching solution for you. 

Ian: 

The candidate experience is critical.  

The whole experience from:  

  • onboarding  

  • making sure that your system is up to the job  

  • experience on the exam day  

  • confidence that if something doesn't quite go to plan - it's managed 

  • The candidate is cared for all the way through the process 

As a digital assessment provider, making sure the candidate experience is really strong is our top priority. RM Assessment Master and Talview being integrated has been a really significant enabler for that experience.  

Harjoth: 

It’s always one of the biggest concerns for organizations going digital – will my candidate be able to manage the process? We’ve seen that a lot of people actually prefer this approach. They are greeted in the onboarding phase, helped to navigate through the different stages and then continuously supported if they have a challenge. 

28:18 – 29:55 

Pricing and support 

Pricing is competitive when you compare it to what you need to host in brick-and-mortar centers. The ease with which it integrates with any test platform, as we've done with the RM solution, really helps to give that unified experience to any examination user. 

We offer 24x7 support so candidates are able to take examinations when they choose. They don’t have to take a day off and travel to an examination centre on a set day; they can sit the exam after work, or on a weekend. For organizations, it broadens what they can offer their learners, making them more competitive. 

30:00 – 35:25 

The future of proctoring  

This is a growing industry, with new ways to monitor candidates emerging rapidly over the last few years. 

  • Adaptive proctoring practices, based on historical data of malpractices committed.  

  • Intuitive measures, such as keystroke analysis. 

  • Blockchain verification to securely store records in a tamperproof manner. 

Proctoring in the metaverse 

We have been thinking about fully proctored experiences for candidates in the metaverse (the immersive virtual world we can join through virtual reality and augmented reality headsets). This increases the difficulty for any malpractice and improves candidate privacy. 

Predictive cheating analytics 

A deeper integration between the test engine and the proctoring solution, like the RM and Talview partnership, enables real-time intervention based on test-taker actions. For high-stakes certification examinations, organizations want to ensure that, in a live scenario, if the proctor sees a candidate committing any form of malpractice they can block the candidate and terminate them from the process. Even with the AI format, we would like to block a candidate who is triggering behavioural flags, ensuring test content security in real time.  

Privacy-enhancing technologies. Ensuring that proctoring and test access can be carried out without storing data, reducing privacy risks.  

Ethical use of proctoring. Being completely transparent about the data being recorded and used, being completely transparent in the information given to users, and being respectful and mindful in any proctoring activities we conduct.  

Ian: 

Flexibility in the nature of assessment. How does proctoring impact the use of simulation assessments, for example? There are different ways you can maintain security and integrity as you adopt new forms of assessment. 

Harjoth:  

Nature of assessment delivery. How do we use technology to make in-person proctoring more efficient? We’re looking at a good blend of AI with classroom based processes as customers ask us how they can improve what they already have. Administrators, learners and awarding bodies right now are looking at how they can have the best of both worlds. 

36:22 – 59:10 

Q&A 

Talview: Rojen Roy 

RM: Ian Castledine 

36:22 - 37:08 

What type of lockdown browser do you use as part of your platform?  

We have our own in-house secure browser, called Talview Secure Browser, which we currently use for any customers who want a lockdown browser. It restricts candidates from going anywhere other than the test window, unless there are pages or applications that have been specifically allow-listed for them to use.  

RM also supports Safe Exam Browser, so there is flexibility in that. 

37:13 – 38:15 

Could you please talk us through how academic integrity is maintained in a live exam situation? What sort of tools or settings do you have in place to ensure that the students are rigorously proctored?  

If you're looking at the live proctoring solution, there is a proctor monitoring the candidates throughout the test. We have options to monitor the candidate with the help of a secondary camera, and the screen on which they are taking the test is completely recorded. The proctor can see what is happening in the entire surrounding area as well as on the candidate’s screen. This helps the proctor make a decision on malpractice, or on whether there is any possibility that the candidate is trying to reach out to someone for help. With all of these functionalities, the proctor is able to detect any malpractice the candidate attempts. 

38:20 – 38:53 

Does the malpractice detection AI system work in different languages or only in English? 

Good question. At the moment, English only. That's largely because the structure and processing of English is simply better understood at the moment; more focus has been put into it, so our priority has been English. Longer term we see extending that out to other, non-English languages as well.  

38:54 – 39:56 

Does malpractice detection work for paper-based exams as well or is it just digital? 

At the moment our focus has been on digital because once you've got on-screen responses it's so much easier to do comparisons and the types of analysis that we're doing.  

We have started looking at paper-based exams and trying to detect issues of collusion on paper. It's a very different beast. Not only do you have to start thinking about handwriting recognition, for example, as well as pattern matching between the two, but you're also looking for quite different issues. That is more about collusion happening within a test center, whereas our focus to date has been more around remote proctored exams.  

40:06 – 41:21 

How does the partnership between RM and Talview work, and specifically how does it affect/benefit the service users? 

The candidate experience is really key for our propositions. For a candidate taking a test in a remotely proctored session, it's been important to think about the end-to-end journey and what their experience is before test day, on test day and afterwards. We've got a technical integration between our systems: Assessment Master (RM's assessment platform) and Proview (Talview's system).  

We have service integration as well, looking at the interactions that need to happen at each stage. There are some things that our awarding organizations need to deal with, some that RM needs to deal with, and some that Talview needs to deal with, but the candidate needs to have a strong experience regardless, so it's a blend of technical integration and service integration. 

41:26 – 42:09 

Does the candidate verification use and save biometric data? 

We don't save biometric data. We normally capture the candidate's ID card and photo and keep them on the system for further validation in the case of AI proctoring. With the live proctoring solution, validation is done in real time by the proctor. Data is stored on the system based on the customer's requirement, normally for 6 months; we can remove it sooner than that if needed. 

42:20 – 43:31 

We have a process that requires a candidate to be on site to participate in the computer based test but does not get a real-time result. How can we improve this? 

That is probably more of a platform question than a proctoring one. It's a really good question, and I'd love to have a conversation with you about it, because there might be an element of test design in there.  

Interestingly, we're also looking at the future of things like AI in marking, to get earlier feedback to students, probably more in the formative space than in high stakes at this stage. There's also a question in there about the nature of your assessment platform and how it is set up to return results in a timely fashion.  

It's a hard question to answer in isolation. Do contact us – it would be great to discuss this in relation to your systems. 

43:55 – 44:26 

Are the proctors provided by you, or are the proctors supposed to be provided by the institution? 

We have both options. We can supply proctors from our side, but if you need to use your own proctors, because of security or data privacy requirements, we can support that too. 

44:30 – 45:06 

Does the proctor communicate with the student through a camera and microphone or only through chat? 

If you're looking at live proctoring, the candidate is monitored throughout by the proctor, who can see the video stream coming from the candidate's side. The means of communication between the proctor and the candidate is chat or the voice call feature; either party can initiate a call to have a discussion with the other. 

45:07 – 45:58 

In live proctoring a significant number of test takers, say 500 students, how is it ensured that all candidates start and finish the exam at the same time? 

With a live proctored session we'll know the test is scheduled in advance, the number of candidates and the proctoring ratio. If the proctors are from our side, we'll make sure that enough proctors are available for the number of test takers based on the proctoring ratio. If the customer provides its own proctors, then the customer will have to make sure that enough proctors are available to monitor the sessions. 
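The staffing arithmetic behind a proctoring ratio is simple to sketch (the function name here is illustrative): the number of proctors is the candidate count divided by the agreed candidates-per-proctor ratio, rounded up.

```python
import math

def proctors_needed(candidates: int, ratio: int) -> int:
    """Number of live proctors required for a session,
    given the agreed candidates-per-proctor ratio."""
    return math.ceil(candidates / ratio)

# e.g. 500 candidates at a 1:10 proctoring ratio
print(proctors_needed(500, 10))  # → 50
```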

46:01 – 47:02 

How is candidate verification carried out for the three solutions - in person or using an automated process? 

This will be slightly different for all the three kinds of proctoring solutions we offer:  

With AI proctoring - the ID card and the photo will be captured and it will be available on the system for a further review at a later point of time.  

Record and Review - the entire session is reviewed after the session, including validating the ID, the photo they have uploaded, and the session video.  

Live proctoring solution - the candidates would only be allowed to go into the test once the proctor validates and verifies the person and the ID matches the photo. 

47:10 – 47:48 

Do the higher proctoring levels you offer also include the functionality of the lower levels? For example, does live proctoring include the AI functionality and record the learner during the session as additional layers of security? 

As we go from AI proctoring to Record and Review and live proctoring, all the functionality available in AI proctoring is also available in the live proctoring solution. It helps the proctors decide whether something is going wrong on the candidate's side; it's an assistant tool that the proctors use while monitoring the candidate throughout the test.  

47:53 – 48:49 

When AI is supporting a human proctor, do you know what proportion of the AI flags are false positives? 

There will definitely be false positives; we cannot say that AI will always be 100% correct. That is why the human proctors are there: to make a decision if, say, multiple people are detected in front of the system and AI flags it, or if there is only one person in front of the system and AI raises a multiple-face detection flag. That is where the proctor comes into the picture, to view the video stream and decide whether it's a false positive or a correct flag. The proctor also has the ability to remove flags, and this helps in generating the final report for the candidate. 
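The workflow described, where AI raises flags and a human proctor confirms or dismisses each one before the final report is generated, could be modelled like this (all field and flag names are illustrative, not Talview's actual data model):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    kind: str                          # e.g. "multiple_faces", "candidate_absent"
    timestamp: float                   # seconds into the session
    confirmed: Optional[bool] = None   # None until a proctor reviews it

def final_report(flags: list) -> list:
    """Only flags a human proctor has confirmed reach the final report;
    dismissed flags (false positives) are removed."""
    return [f for f in flags if f.confirmed is True]

session = [
    Flag("multiple_faces", 312.0, confirmed=False),   # proctor: false positive
    Flag("candidate_absent", 940.5, confirmed=True),  # proctor: genuine
]
print([f.kind for f in final_report(session)])  # → ['candidate_absent']
```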

48:00 – 49:36 

What type of support is available for a non-English-speaking student? 

Currently from a support standpoint we have English based support, which is available 24x7. We don't have any other language support as of now. 

RM is going to be investing in this over the coming weeks. If you've got a particular focus on non-English support, that's something we would be happy to talk about. We're working with some global clients at the moment that need to go beyond English, so that's something we're going to be introducing very soon.  

49:39 – 50:05 

Does 24x7 support include support with technical issues that candidates might have during an exam? 

The 24x7 support is meant for candidates: they can get immediate support from the support team at any point during the exam. 

50:13 – 51:20 

If there is a suspicion of cheating, can a test taker be blocked temporarily during the test? 

Currently we have the option to terminate a candidate if malpractice is detected.  

Part of our onboarding process is working out the appropriate response for each user. Some assessment organizations choose to pause, or even terminate, an assessment if there are instances of malpractice. Others choose simply to have it recorded for later review. There's an element of working through the specific policies and practices of each organization. 

51:26 – 53:24 

If an organization is predominantly paper based, how do you work together to make their content management system compatible with your tech? Do you recommend particular test engines? 

Clearly I'd recommend RM's - it's perfect, so everybody should use it! Joking apart, if you're on that journey from paper to digital then there are lots of considerations, not least the nature of your content and how to get it out of a paper system and into digital; there are lots of standards and specifications that will help with that.  

Ultimately it’s not a simple task; it needs careful thought and consideration. Look at the direction and strategy you're taking in digital adoption, and make sure that the engines you pick are suitable for that. As an example, consider whether you are testing in a sessional model or on demand. That will have a bearing on the management tools and techniques you use and how they hook into services like remote proctoring, depending on the nature of your delivery model.  

The key really is to start thinking about what problem you're trying to solve and what the stages are to get there. It’s something I’m here to help with if you want to talk further about your specific challenges. 

53:36 – 54:45 

Is there a solution that can make use of AI in marking students’ responses? 

This is an emerging theme for me. I look after our propositions at RM, and we've been looking for a while at the nature of AI across the whole assessment landscape.  

Marking is part of that. It's really interesting how auto-marking pops up as a theme and then disappears again. What I'm particularly interested in is the problem we're trying to solve, and what I'm increasingly seeing is that organizations are looking to use artificial intelligence not just to do the core marking but as a way of enhancing the feedback that goes back to students. 

55:12 – 55:50 

Bringing in AI requires better, stronger connectivity. Can this increase the cost of delivery? 

Bringing AI into the test delivery system will increase the network bandwidth requirement, but not significantly. These kinds of tests are supported in almost all regions, and there is no additional cost for that. 

55:57 – 57:01 

As a percentage how much will AI decrease malpractice? 

When we are using AI, or a remote invigilated service, candidates are far less inclined to attempt any sort of malpractice. If you run tests without any proctoring services, there is a high probability that candidates will get help from another person or source. Across our customers, we have seen that the rate of malpractice in a non-proctored environment is significantly higher than what we are seeing in a proctored environment. 

57:07 – 57:44 

With AI support how many candidates can be proctored by a single proctor? 

If there is a test with a significant number of candidates in attendance, we can make the necessary arrangements on our side to support any number of sessions. Again, it will be based on the requirement agreed when the contract is made. 

57:46 – 58:22 

In terms of proctoring for a first time user, which of the remote proctoring solutions would you recommend, taking into consideration a region in Africa? 

Your remote proctoring solution depends on the stakes of the exam. For any high-stakes exam, the Live Proctoring solution is what we recommend. For lower stakes you can go with AI proctoring, and for a mid-stakes exam you can go with the Record and Review service. 

58:36 – 59:09 

What is the longest session a live proctor can do? 

It depends on the requirement. What we are seeing is that a candidate cannot sit continuously in a remote proctored session indefinitely; the maximum we have seen is around 5 hours, and there should be breaks for the candidate during the test. 

Questions answered after the session 

Increasingly, learners are using digital tech to access content. How will AI separate memorization from understanding and analysis skills? I teach economics, and a big part of assessment is assessing how students reason with the concepts. 

This is a really interesting area, and we are monitoring emerging capabilities in the market which use AI to help assess the process that the student went through, not just the answer; Wolfram has some interesting research in this area, for example. A focus for RM's assessment platform is having a range of open assessment items where students have to demonstrate their understanding and analysis. For example, some of our professional qualification providers use tools such as embedded spreadsheets, where students have to put together a full report that includes narrative, spreadsheet content, etc. There are examples where such items can also be auto-marked using AI marking, although this type of technique is still in its infancy in the industry. 

For essay-type exams such as Corporate Law, how does the malpractice detection system work, given that some of the answers are expected to be identical? 

Part of our collusion detection work is in tuning the service so that it reduces false positives - such as where sections of a candidate response are expected to be identical or similar. 

What does RM offer when it comes to antiplagiarism tools in written exams and how does RM report on that? 

RM is currently establishing integration with TurnItIn, whereby candidate responses are processed prior to them being presented for human marking. 

Can we terminate an ongoing exam if a candidate is engaged in malpractice? 

With the integrated RM and Talview solution, there is an option for the termination of a candidate exam by the proctor. 

Do you also support integration with badge providers, e.g. Credly, to certify candidates? 

RM has a wide range of APIs which can enable integration with other systems. We are also looking at fully integrated digital badging capabilities in our future roadmap. 

Could you elaborate on the pricing strategy? Is it per candidate? 

The pricing strategy is quite simple, and uses a number of credits based on the length of exam and the type of remote proctoring service used. 
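As a purely hypothetical sketch of how such a credit-based model might work (the rates, service names and rounding rule here are invented for illustration, not RM's or Talview's actual pricing):

```python
# Hypothetical credits per hour for each proctoring service type.
CREDITS_PER_HOUR = {"ai": 1, "record_and_review": 2, "live": 4}

def exam_credits(service: str, exam_minutes: int) -> int:
    """Credits consumed by one exam sitting, rounded up to whole hours."""
    hours = -(-exam_minutes // 60)  # ceiling division
    return CREDITS_PER_HOUR[service] * hours

print(exam_credits("live", 90))  # → 8 (2 hours at 4 credits/hour)
```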

Can the testing body generate PDFs of the test takers' responses, assuming the test is essay type and is conducted on Assessment Master? 

We have plans on our roadmap to make candidate responses available as PDF, although there are limitations to that given the dynamic nature of digital responses - which mean they are not always suitable for representation in a printable or PDF form. 

Is the platform suitable for STEM subjects as well as Humanities regarding exam creation/authoring stage as well as the administration stage where candidates are responding to the questions? 

Yes - the RM assessment platform is suitable for all subjects, and we are continuously looking at how we expand the range of item types to broaden curriculum coverage. We already have embedded drawing tools, maths equation editors and chemical symbol capabilities, and will be extending this further.