
Reframe assessment as part of learning

Has generative artificial intelligence (GenAI) called time on university assessment? Many in higher education would say no; instead, it has forced a much-needed rethink of already outdated assessment models. There were calls for a greater emphasis on formative rather than summative assessment – or, put another way, assessment for and as learning rather than of learning – long before ChatGPT exploded into our lives. But the ubiquity of GenAI and its ability to turn out passable essays and other work in seconds has made it imperative that educators shift the focus and design of assessment to students’ working process, their thinking and their learning journey, rather than the end product.
So, has assessment for learning’s time now come? And how can that best be combined with the continued need to evaluate learning?
This spotlight guide shares advice and insight from educators across many different disciplines on how they have aligned their assessments more closely to the learning process, made effective feedback a core component and redesigned tasks to require demonstrable thinking and decision-making.
Assessment designed for learning rather than grading, in a post-GenAI world
Formative assessment, based on regular, low-stakes evaluations that provide students with feedback throughout the course, is consistently linked to higher levels of academic achievement and self-regulation. It has also been shown to particularly benefit lower-performing students. It enables educators to learn from the feedback too, adjusting their teaching if students are not grasping key concepts and skills.
Here, discover ways to make assessment a part of learning, rather than relying on a high-stakes final exam, and thus shift the focus from a graded end product to the messy, human process of learning as the ultimate goal.
GenAI has destroyed grading – and it’s made me a better instructor: Instead of a six-page research paper turned in at the end of a course, Merrimack College’s Dan Sarofian-Butin suggests focusing on the real-time learning that happens between leaps of understanding and moments of doubt.
Assessment isn’t a finish line, it’s a learning process: Many assessments only measure what students already know. Philip Y. Lam of Hong Kong University of Science and Technology shows how to structure feedback-rich, iterative tasks to help students develop the skills to improve.
Assessment needs to grow up: what process and imperfection mean for higher education: Rather than railing against AI, educators could see this moment as an overdue correction and redesign assessment around what matters: the process by which humans think, revise and learn. Pontus Wärnestål of Halmstad University offers guidance.
Portfolio assessment may be the key to deeper learning: Portfolio assessment has proved to be an innovative way to track students’ progress in economics classes, making learning deeper and more meaningful, write Universidad Austral’s Belén Pagone and Cecilia Primogerio.
Make feedback a core component of assessment – not an afterthought
Used effectively, feedback is a powerful tool, but too often constructive comments are overshadowed by grades. Good feedback can help students reflect, critique and adapt to improve their own performance, fostering an invaluable skill for life. But it must avoid shaming and provide clear guidance for students to work with. Find out how to provide good, “clean” feedback, and how to turn it into a dialogue with your students.
Authentic assessment design for computer programming master’s courses: Thomas Selig and Ling Wang of Xi’an Jiaotong-Liverpool University outline a four-step plan for more meaningful assessment that incorporates AI-assisted evaluation, group discussions and presentations.
Assessment and feedback as an active dialogue between tutors and students: Seven steps towards enhancing assessment and feedback as a participatory, social process that supports deeper learning, by Neil Lent, Tina Harrison and Sabine Rolle of the University of Edinburgh.
Teaching students to assess their work and why it matters beyond university: Peer feedback and self-assessment build habits that extend into professional life. They teach students to ask questions such as: “What do others see in my work?” and “What did I miss?” to improve performance, Zayed University’s Elissar Gerges explains.
Encourage, don’t shame: rethinking writing feedback: Shame around writing ability can be a real problem for new students, so let’s make sure feedback encourages their development. Royal Holloway, University of London’s Isabelle Parkinson shows how.
Assessment of thinking, judgement and process over final output
“The fundamental issue is not that students can use chatbots to generate answers. It is that we have designed assessment where answers are the only evidence we look at,” writes Nicole Brownlie from the University of Southern Queensland. Instead, here are ways to embed GenAI tools in your assessments so that students work with and demonstrate mastery of them as part of the working process, from evaluating GenAI outputs to identifying their limitations. In this way, GenAI becomes a tool by which students demonstrate understanding of the subject matter and analytical and critical thinking skills.
‘Small changes in assessment design can make thinking visible’: Many concerns about the impact of artificial intelligence on academic integrity and authorship have focused on controlling the tool rather than reconsidering the task. Here, the University of Southern Queensland’s Nicole Brownlie offers a shift to assess process, not just answers.
GenAI has not broken assessment. It has exposed it: An assessment system that rewards polished work above judgement can’t function in a GenAI-enabled world. Lucy Gill-Simmen of Royal Holloway, University of London, outlines how to build one that can.
When GenAI makes answers cheap, assessment must value judgement: Moving beyond either bans on GenAI or blind adoption requires redesigning assessment to reclaim pedagogical agency and make student judgement visible, argues Kisito F. Nzembayie of Trinity College.
When we encourage AI use, how can we still assess student thinking?: As more university educators encourage students to use GenAI, how can we ensure assessments still reward critical thinking and originality? The University of Warwick’s Isabel Fischer reflects on emerging usage patterns and shares practical design tips for meaningful, AI-inclusive assessment.
Rethinking the role of the essay
What happens to the essay when 2,000-word arguments can be generated at the click of a button? Find out how to make the essay-writing process itself visible, even while using AI in a transparent and intentional way. As LSE’s Claire Gordon and King’s College London’s Martin Compton write, “If we genuinely value the processes and learning involved, shouldn’t we be elevating and assessing the essay-writing process as much as, or alongside, the final product?”
‘Students don’t have to prove authorship of every word, they show their supervision of AI tools’: When all students are required to use generative AI for every assignment, their practice can be more rigorous, transparent and deeply reflective. Here, Royal Thimphu College’s Tiatemsu Longkumer explains the rubric that makes this work.
‘Don’t be sorry, just declare it’: safeguarding the integrity of the essay: With the advent of GenAI, higher education has pronounced the essay dead. Adelaide University’s Benito Cao argues there are signs of life – and explains how to protect its integrity.
Conversations with bots: teaching students how – and when – to use GenAI for academic writing: Xi’an Jiaotong-Liverpool University’s Joseph Tinsley and Hiumin Hee explain their four-step process, which teaches students how to use GenAI tools to brainstorm ideas, understand and act on feedback and edit their essays in line with assessment rubrics.
Beyond bans: AI-resilient and creative HE assessment design: To sustain academic integrity in an AI-present learning environment, educators must redesign assessment to foreground judgement, context and creative ownership, says Jasmine Mohsen of SP Jain School of Management.
The renaissance of the essay starts here: In the age of AI, has long-form writing in higher education reached a dead end? Martin Compton of King’s College London and Claire Gordon of the London School of Economics and Political Science discuss the unique aspects of the essay and introduce a manifesto to revitalise it.
The power of storytelling
Build students’ confidence, embed learning in their memory and develop the art of reflection: storytelling can help do it all. The resources below explain how storytelling can be a useful assessment activity across disciplines.
Storytelling in STEM: connecting concepts, confidence and identity: Designing STEM classrooms that encourage students to reflect on their progress and share their stories takes intention, but the pay-off is enormous: more engaged, capable and confident learners. Mount Royal University’s Karen Ho and the University of Calgary’s Douglas B. Clark offer advice.
In a world of short attention spans, the best story wins: When students become storytellers, rather than passive recipients of knowledge, learning is more memorable. And a little help from GenAI can get them started, writes Natalie Cummins of the University of Technology Sydney.
How storytelling can turn international students into the most powerful voices in the room: Natalie Cummins covers how turning presentations into a visual storytelling task allows international students to demonstrate their learning through elements such as sound, visuals, silence and pacing rather than just language.
Authentic assessment for real-world workplace skills
Authentic assessments measure how students can apply the knowledge and skills they’ve learned to real-world situations they’ll face in future careers, rather than in a silent, closed-book exam hall. Discover how to create the right environment for students to practise lifelong skills.
From theory to judgement: using role play to assess students’ decision-making skills: Students can explain theories. But can they challenge them? Angela Christidis of the University of Exeter suggests a structured role-play approach, to help assess critical thinking, professional judgement and decision-making – skills that traditional assessments often miss.
When the business plan becomes a performance: Under conventional assessment models in entrepreneurship, continuation of a venture is rewarded and optimism reads as competence, write the University of Southampton’s Ian Solway and Jolyon Nott. But students should be required to demonstrate judgement, not projected success.
How to start reimagining assessments authentically: What does authentic assessment really look like? Through real-world tasks, meaningful application, and core knowledge and skills, it supports deeper learning and a more accurate measure of students’ understanding. Karen Bunch Franklin of Georgia Tech’s Center for 21st Century Universities provides guidance.
Discussion and dialogic exams: make assessment a conversation
Interactive orals, discussion boards and presentations all encourage students to demonstrate understanding, along with the communication and collaboration skills they’ll need for the future. Such live, discursive approaches also make overreliance on GenAI all but impossible. Find out how.
Discussion forums: the key to AI-proof assessment?: Providing a supportive learning culture for students will make them less likely to cheat – and discussion forums, with a few tweaks, may be the way to do it. Edward Palmer of Adelaide University shows how.
Case-based discussion as an authentic healthcare assessment: Case-based discussion has been well received as an assessment method. Here are other reasons to use it, along with tips on how to make it work well, by University of East Anglia academics.
When GenAI resets the assessment baseline: Generative AI is forcing university educators to raise their own bar on creativity, assessment and expectations. Here’s how Chris Jones of Regent’s University London reassessed his assignment.
How interactive orals transform assessment – and how to implement them: Interactive orals shift testing from memorisation to meaningful dialogue, reducing anxiety and building confidence in diverse learners. Popi Sotiriadou of Griffith University and Dani Logan-Fleming of Torrens University discuss how they increase inclusivity and how to implement them across disciplines.
Interactive orals offer a solution to AI over-reliance in higher education: A scenario-based assessment method that promotes authentic learning can curb over-reliance on AI and build students’ professional communication skills. Popi Sotiriadou and Dani Logan-Fleming provide their guide to interactive orals.
AI tools as assessors
Although GenAI is often blamed for disrupting assessment, educators have also been using the tools in clever ways to help students bounce their ideas around and develop their communication skills. Learn more here.
Using GenAI avatars to assess empathy: how it works in practice: Could GenAI offer a new way to assess communication skills? May Lim and Caleb Or of the Singapore Institute of Technology describe what happened when they built an avatar.
When AI asks: ‘Why?’ and facilitates critical thinking: Chatbots can be used at scale to mimic the Socratic method in university assessment and guide students to reflect on their thinking and reasoning process. Meryem Yilmaz Soylu of Georgia Tech’s Center for 21st Century Universities outlines how.
How to co-design learning and assessments with students and GenAI: Patrice Sewou of the University of Northampton offers guidance on bringing together academic expertise, students’ lived experience and GenAI’s creative capacity to co-design genuinely inclusive, collaborative and engaging learning experiences.
If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.