
Assessing students when artificial intelligence is ubiquitous

If we continue to prioritise memorisation in an age of wall-to-wall information, we send the wrong message to our students and employers. Michelle Seref offers advice on assessment that builds critical thinking skills
14 May 2026
Image: a young Black woman and an older woman working on logistics. Credit: AndreyPopov/iStock.



For much of higher education’s modern history, assessment has followed a familiar formula: a midterm and a final exam, with a heavy emphasis on whether students can retain and reproduce information. That model made sense in a world where knowledge was scarce and expertise lived primarily in textbooks and lectures.

That world no longer exists.

Students who have grown up with technology can find most information through Google, YouTube and, now, AI chatbots. The rapid rise of generative AI hasn’t made assessment obsolete, but it has made its misalignment impossible to ignore. The real question is no longer what students know, but how they think, decide, adapt and apply judgement. Yet many assessments still measure recall rather than application.

This tension has forced me – and many colleagues – to ask a fundamental question: what are we trying to assess?

Start with what employers value, then work backwards

One of the most clarifying moments for us came when we stopped debating pedagogy in the abstract and instead looked carefully at what employers consistently say they need from graduates. Across multiple industry reports, the themes for necessary business skills are clear: strategic problem-solving, collaboration, communication, ethical judgement, adaptability and digital fluency.

Rather than call them “soft” skills, we refer to them as power skills – and they are difficult to measure with traditional exams. If we truly believe these capabilities matter, they must be integrated across the curriculum and assessed through what students can demonstrate. That realisation required a shift in mindset: assessment is not something that happens after learning. It must be part of how learning is structured in the first place.

From testing answers to evaluating judgement

When learning environments become more experiential – that is, grounded in real problems, data and constraints – assessment changes. In professional settings, there is rarely a single correct answer. Instead, there are trade-offs, incomplete information and evolving conditions.

That reality should be reflected in how we evaluate students. Rather than asking “Did you get it right?” we increasingly ask: “How did you approach the problem? What assumptions did you make? How did you respond when conditions changed? Can you explain and defend your decision?”

In simulations, casework and project-based assessments, students are often presented with shifting variables – a reduced budget, a new stakeholder concern, a conflicting data point. The assessment is not about perfection; it is about reasoning under uncertainty.

The role of AI in assessment design

AI has understandably generated anxiety around academic integrity. But focusing on AI detection misses a larger opportunity. The more productive question is: how do we design assessments where AI use is visible, intentional and evaluable?

In professional contexts, graduates will absolutely use AI tools. That all but mandates that we, as educators, teach students how to use AI well, because employers will care whether graduates can use these tools responsibly and efficiently. It also means that assessments can – and should – ask students to:

  • compare their own analysis with AI-generated output
  • identify where AI reasoning falls short
  • explain when they would trust an AI recommendation and when they would not
  • reflect on bias, limitations and ethical implications.

In these cases, AI doesn’t undermine assessment – it reveals students’ thinking. The focus shifts from “Did you use AI?” to “How did you use it, and why?”

Feedback replaces finality

As assessments move away from high-stakes, single-moment exams, feedback becomes central. In experiential environments, students learn through iteration: propose, test, revise and reflect. Assessment, then, becomes cumulative rather than terminal. It values growth, responsiveness and metacognition – the ability to understand your own decision-making process.

This approach also shifts us from the “sage on the stage” model, in which students were expected to absorb information, to a “guide on the side” approach, in which we help them navigate real-world scenarios. Rather than serving as the sole authority delivering judgements from the front of the room, faculty offer structured feedback that is more like professional supervision than academic gatekeeping.

Advice for institutions rethinking assessment

For institutions beginning this transition, a few principles matter more than any specific tool or technology:

  1. Define assessable skills explicitly. If adaptability or collaboration matters, articulate what it looks like in student work.
  2. Design assessments that evolve. Real learning happens when conditions change and students adjust.
  3. Evaluate process, not just product. Ask students to explain how they arrived at decisions.
  4. Make AI use transparent. Design assignments to assess AI engagement.
  5. Support faculty development. Assessing judgement requires different rubrics, norms and confidence from those needed to grade exams. Faculty need to understand and practise the skills they are teaching, including AI use.

Assessment as preparation for professional life

Ultimately, assessment is a signal – to students and to employers – about what we value. If we continue to prioritise memorisation in an age of ubiquitous information, we send the wrong message. AI did not create this challenge; it has accelerated a reckoning.

If our goal is to prepare graduates for a world defined by complexity, collaboration and constant change, our assessments must do more than measure what students remember. They must reveal how students think – and whether they are ready to apply that thinking beyond the classroom.

Michelle Seref serves as associate dean of undergraduate programmes and professor of business information technology in the Pamplin College of Business at Virginia Tech.

