AI-only proctoring is risky and doesn’t work. We’re not doing it any more

Expecting academics to effectively review potential cheating incidents is unrealistic and ineffective, says Scott McFarland

May 29, 2021

When the pandemic struck a year ago and all teaching and assessment moved online overnight, a lot of makeshift solutions were understandably reached for. One of those was AI-only test proctoring.

AI was never going to be an effective substitute for human monitoring. My company, ProctorU, has always championed live monitoring by a trained proctor, who can correct behaviour proactively to prevent a situation that a faculty member has to review and make a judgement call on. This is the best way to truly prevent cheating.

Nevertheless, this approach is clearly labour intensive and faces capacity issues when everyone makes the switch at once. In this context, artificial intelligence looks like a lifeboat in the storm.

The software is trained in what to look for and flags potential test violations, which can then be checked. The problem is that when institutions are required to do the checking, they either don’t do so (only 11 per cent of the sessions we have recorded are being reviewed) or do so incorrectly.


This probably shouldn’t be a surprise. An instructor with a single class of 150 students would need more than nine hours to review one exam’s recordings properly. No faculty member got into teaching to do this. They have more important things to do than review footage of sneezes, spilled water bottles and noisy roommates, let alone escalate potential rule violations for investigation.

The consequences are obvious. We assume that if students know that their test session is being recorded, they are less likely to cheat. But students aren’t dumb. They quickly work out that it is unlikely that their teacher or teaching assistant has the time to watch all those videos before returning the scores the next morning.


Perhaps more concerning to a tech company responsible for supporting education is the simple and unavoidable fact that, at this stage of its evolution, AI technology is simply not ready to be used unattended in education settings. AI can no more catch academic misconduct on its own than a hammer can build a house. While AI can tell you that someone is blowing their nose, it can’t distinguish between a test-taker with an allergy and a cheat with chemistry notes in their Kleenex. Human intent and planned deceptions are complicated, nuanced and often creative.

Indeed, even human beings aren’t always very good at discerning between normal activities and cheating on video. It takes training and time – which academics don’t have. That is why, effective immediately, ProctorU will no longer expect them to try.

From now on, our stance is that every single test session will be reviewed or live proctored by one of our own proctors. We know our proctors and train them well to do this, so we are confident in this approach.

Guilt is still determined by the institution, as it always has been. However, when a proctor is involved, there are 85 per cent fewer flagged incidents for faculty members to review. And since a human being has made a judgement call based on their training and experience, faculty pay more attention to what is flagged and are more likely to make the right decision. Moreover, if a student is accused of cheating, it is only after multiple human beings have made that determination, so the checks and balances are in place.


Experts will tell you that the risk-reward matrix is critical in determining when to use AI without human oversight. When there is a low risk that any error will have a serious deleterious effect on individuals, using unsupervised AI to save time may be legitimate.

An example of such an error might be sending a customer the wrong colour T-shirt. But we’re not a T-shirt company. We’re dealing with human beings and their futures and their peace of mind. We’ve all seen high-profile incidents of students being incorrectly accused of cheating by their teacher because the AI-only version of exam monitoring was not used correctly. Further incidents must be avoided.

This is why, for us, proctoring is a human activity. And, from now on, this is the only type of proctoring we will do.

Scott McFarland is the CEO of ProctorU.
