Assessing the GenAI process, not the output

A framework for building AI literacy in a literature-review-type assessment

24 Jan 2025

Created in partnership with the University of East Anglia

GenAI is arguably the most disruptive technological advancement we have seen in higher education. The speed of content generation is transformational, fuelling concerns about an existential threat to teaching and learning. Universities have attempted to restrict the use of AI through guidelines and detection software, but the speed with which these futile attempts have been abandoned speaks volumes.

So, what choice are we left with? Do we redouble our efforts to rein in this rampaging monster? Or do we reconceptualise the beast as a powerful vehicle that will carry us into a future of vastly expanded possibilities?

AI is part of our future, and it is part of our students’ futures. The share price of AI hardware supplier Nvidia alone shows the mind-bending scale of investment in the technology and infrastructure. It is incumbent upon us to find a way to embrace AI for our students’ sakes.

In this resource, we outline our advice for implementing an approach that opens up AI use to our students by assessing the process rather than the output.

To start with, we recommend identifying learning outcomes for your students that can be achieved in collaboration with AI. For example, “demonstrating the skills required to effectively search for and analyse the existing literature” and “the ability to prepare an extended document”. Undoubtedly, these are tasks AI could complete in seconds. However, we encourage you not to give up – in fact, there are learning opportunities aplenty. The rapid generation of content shifts the learning dynamic, positioning students as reviewers in control of the output. To take on this role, students need to develop across multiple areas: their AI literacy, their topic knowledge, and their critical appraisal skills. It is the learning and development in these areas that we recommend be captured within the assessment tasks.

Our suggested framework for building AI literacy in a literature-review-type assessment is as follows:

Step 1 – Build experience: the process of critically appraising AI output is too complex if the field is unknown or the content is unfamiliar. Students generate AI output that reviews something they are already experts in – their favourite book, film or TV series, for example – that they then critically appraise.

Step 2 – Apply and upskill: students (working in groups) generate a literature review on an assigned topic. Students will learn with their peers how to push the AI and develop skills in effective prompting – encourage experimentation and a play-orientated approach.

Step 3 – Check attainment: students submit a summative (individual) critical appraisal of the group-generated output. This task requires a deeper understanding of the topic and provides an impetus for students to research and read around the subject. Students will recognise their personal development in AI literacy and critical appraisal skills.

Step 4 – Optional continuation: students prepare their final literature review (with supporting information). Allow them to decide what works for them: AI use is optional and without restriction. The supporting information document contains all prompts used and a 500-word reflective account of the process employed – it will document their development in AI literacy and give insight into their academic experiences with AI. 

These assessments lend themselves to the use of a senate scale marking scheme. Careful selection of marking criteria aligned with the desired learning outcomes for each element of the assessment is key to differentiating the expectations for each part of the process. The critical appraisal scheme focuses much more on the process. Criteria include:

  • search strategy
  • exclusion and inclusion criteria
  • quality of included studies
  • synthesis and creativity.

These all speak to the process the student goes through to find and evaluate content. The final submission can then be assessed in a much more output-focused way, with sections on structure, formatting and writing style.

We have run an assessment in this way, and it has been a hugely positive experience. Students fed back that they had developed their AI literacy as well as their critical appraisal and scientific writing skills. Many expressed satisfaction that they had begun developing AI use as a future life skill. This positivity came after a surprising amount of inertia at the start of the term. From the instructor’s perspective, the AI tools gave us access to tasks and methods for developing skills relevant to scientific writing, but in a completely different way. It was a lot of fun as well. And (as anthropologists will tell you) there’s no better way to learn than to play.

Acknowledgement – our process was inspired by a workshop by Liz Alvey of the University of Sheffield.

Paul McDermott, Leoni Palmer and Rosemary Norton are associate professors at the University of East Anglia.

