The potential of artificial intelligence in assessment feedback
Artificial intelligence (AI) has the potential to improve the way students receive assessment feedback. Elizabeth Ellis explores some of the ways in which AI can help students
In higher education, artificial intelligence (AI) has the potential to improve the relationship students have with assessment feedback.
This is especially pertinent for students from widening participation backgrounds, or mature students entering higher education for the first time. Many non-traditional students have not just had complicated relationships with education; they may also have had complicated relationships with educators.
The impact of these relationships often plays out in assessment and is particularly visible in how students cope with and take on board formative feedback.
Although technology can and should enhance many parts of the assessment and feedback cycle, the act of giving and receiving feedback is a particularly “human” activity. It is a dialogue in which foibles and vulnerabilities are present on the part of both the educator and the student.
In Jisc’s 2020 Learning and Teaching Reimagined report, higher education experts called for technology to bring about new levels of authentic assessment, accessibility and inclusion, and automation.
- Authentic assessment prepares students for work, meeting employer needs and testing knowledge and skills in a realistic, contextualised and motivating way.
- Accessible assessment means inclusive design that can be used by all students, including those with additional needs, whether long-term or short-term. This is particularly relevant right now.
- Automation of assessment centres on easing marking and feedback workload, with the potential to use technology such as natural language processing and AI to provide alternative formats of assessment.
Each of these pushes the higher education sector towards a more continuous assessment cycle, which improves student outcomes and the student experience and reduces the chances of unfair practice.
A 2022 guide on Principles of good assessment and feedback, also by Jisc, identifies several aims that can steer the development and delivery of thoughtful, ethical artificial intelligence in higher education. Specifically:
- Develop autonomous learners by encouraging self-generated feedback, self-regulation, reflection, dialogue and peer review.
- Foster a motivated learning community by involving students in decision-making and supporting staff to critique and develop their own practice.
Help make formative feedback accessible
If formative feedback is a dialogue between educator and student, then for many students it is also, from the outset, a dialogue with power and the establishment.
This perception of the relationship can inadvertently prevent students from engaging fully in the assessment and feedback process, a vital part of becoming a fully independent learner who can take ownership of their learning experience.
For educators, creating as many opportunities as possible for students to take part in feedback is vital to rebalancing this. For example, providing a formative feedback window before submission dates is one way to help: students can use it to refine and improve their work before submitting a final version. This conversation lowers the stakes, making feedback less frightening or anxiety-inducing than receiving comments on the final assessed work alone.
Academic skills support should help students understand the value of feedback and encourage them to take every opportunity to seek it from their educators. At Arden University, the library has an excellent network of academic skills tutors who work with students in groups or one to one to help them build confidence in this area. Our academic professional development team works with academic staff to deepen their understanding of feedback and of the risks and impact of poor feedback practices.
AI can help here by providing a customisable, safe voice for early rounds of feedback, helping students become accustomed to such dialogue. Using a chatbot, we can allow students to choose the gender and conversational approach of the AI assistant – for instance, a male voice with an informal style. The chatbot is programmed around the learning outcomes and threshold concepts of the programme. It is given a checklist of expectations for content and trained to ask structured questions in the form of prompts and responses, creating a kind of feedback role play. Offering a choice of gender, voice and conversational approach enables students to receive feedback from a “neutral” critical friend and, potentially, prepare for difficult conversations.
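For readers who want a concrete picture, the sketch below shows one way such a role-play bot might be configured. It is illustrative only: the learning outcomes, checklist items and voice options are invented placeholders, and the assembled prompt would in practice be passed to whichever conversational AI platform an institution licenses.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackBotConfig:
    """Configuration for a formative-feedback role-play chatbot (illustrative sketch)."""
    voice: str = "neutral"      # student-chosen voice, e.g. "male", "female", "neutral"
    style: str = "informal"     # student-chosen conversational approach
    learning_outcomes: list[str] = field(default_factory=list)
    content_checklist: list[str] = field(default_factory=list)

    def system_prompt(self) -> str:
        """Assemble the instructions the underlying language model would receive."""
        outcomes = "\n".join(f"- {o}" for o in self.learning_outcomes)
        checklist = "\n".join(f"- {c}" for c in self.content_checklist)
        return (
            f"You are a {self.style}, supportive critical friend with a {self.voice} voice.\n"
            "Ask one structured question at a time about the student's draft, "
            f"covering these learning outcomes:\n{outcomes}\n"
            "Check the draft against this content checklist and phrase feedback "
            f"as dialogue, not as a verdict:\n{checklist}"
        )

# Example set-up with placeholder module content
config = FeedbackBotConfig(
    voice="male",
    style="informal",
    learning_outcomes=["Critically evaluate two competing theories",
                       "Support claims with referenced evidence"],
    content_checklist=["Introduction states the argument",
                       "Each claim cites a source",
                       "Conclusion answers the question set"],
)

print(config.system_prompt())  # this prompt would be handed to the chatbot platform
```

The point of structuring it this way is that the student's choices (voice, style) sit alongside the programme's fixed expectations (outcomes, checklist), so the tone can be personalised without weakening the academic criteria.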
Use AI to help students with ‘the fiddly bits’ of academia
An off-putting aspect of formal higher education for prospective students can be all the minutiae that are second nature to those already in the sphere. Proper referencing, for example, can be taught through study skills provision, but help can also come in the form of AI.
“Referencing bots” can help students learn to reference both in text and in a bibliography, supporting and augmenting the use of referencing tools. “Readiness to submit bots” can run checks on work against set criteria, assessing whether they have been met and whether the work is ready to be submitted. These could be as simple as prompts to ensure that the student has run their submission through any university-specific processes, such as a plagiarism portal, or checks that the submission has been proofread and spellchecked and that all the right pieces are in the right place – always a potential pitfall with portfolio submissions.
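As a purely illustrative sketch, a “readiness to submit” bot could be little more than a list of named checks run over a submission, reporting back what still needs attention. The criteria below (a minimum word count, a reference list, a confirmed plagiarism-portal check) are made up for the example; each module would define its own.

```python
from typing import Callable

# Each check takes the submission text plus metadata and returns True if it passes.
def has_minimum_length(text: str, meta: dict) -> bool:
    return len(text.split()) >= meta.get("min_words", 1500)

def has_reference_list(text: str, meta: dict) -> bool:
    return "references" in text.lower() or "bibliography" in text.lower()

def plagiarism_check_confirmed(text: str, meta: dict) -> bool:
    return meta.get("plagiarism_portal_checked", False)

CHECKS: dict[str, Callable[[str, dict], bool]] = {
    "Meets minimum word count": has_minimum_length,
    "Includes a reference list": has_reference_list,
    "Run through the plagiarism portal": plagiarism_check_confirmed,
}

def readiness_report(text: str, meta: dict) -> list[str]:
    """Return the checklist items that still need attention before submission."""
    return [name for name, check in CHECKS.items() if not check(text, meta)]

# Example usage with a short draft that fails most checks
outstanding = readiness_report("A very short draft.", {"min_words": 1500})
for item in outstanding:
    print(f"Still to do: {item}")
```

The value is less in the code than in the prompts it produces: the student sees a plain to-do list rather than a judgement on the quality of the work itself.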
AI needs purpose
At the Playful Learning conference in July 2022, Andrew Cox of the University of Sheffield and Neil Dixon from Anglia Ruskin University led a series of interesting exercises to design a “chatbot” but challenged participants on what purpose it would serve.
AI in education comes with many of the same issues as other technologies, including privacy and consent, transparency, and inherent bias in the data underpinning the technology that is borne out when it is applied. When bringing AI into the higher education sphere, it is vitally important to keep asking the question: “How does this help students?”
AI can help in a number of areas related to assessment and feedback but, crucially, we have to guard against tech-solutionism: the urge to turn to technology as the answer to every complex real-world problem. We need to be wary of technological “solutions” arriving first, with companies and providers then searching for “problems”, real or imaginary, to pitch that solution at. Higher education may have its problems, but it isn’t a problem to be solved.
Within assessment, if one of the reasons to use AI is to improve marking times, is the “problem” one of process and capacity? If so, the answers might be to hire more staff, accept fewer students or set more realistic marking times. Or is it one of pedagogy, requiring a move away from static, writing-based summative assessment that limits student outcomes and creates a heavy marking workload? In that case, AI could be trialled in several ways, for instance an AI-marked multiple-choice assessment, to see whether it tackles the poor outcomes and the pedagogy underpinning them; if not, then introducing AI will not provide a solution.
Encourage non-traditional feedback
We are interested in alternative forms of feedback that use technology to help connect with students. Universities could make use of audio or voice-note feedback; video or screencast feedback; or graphic feedback in the form of mind maps and sketch-notes, in which notes are “drawn” using figures, icons and arrows to represent concepts. AI can support this quest for alternative forms of feedback.
Artificial intelligence cannot and should not replace face-to-face feedback. But it can help break down barriers in our students’ ability to receive feedback and thus support the development of lifelong skills needed for constructive dialogue. All this makes AI something worth exploring with thoughtfulness and care.
Elizabeth Ellis is head of the School of Digital Education at Arden University.