
Rather than restrict the use of AI, let’s embrace the challenge it offers

Using the AI assessment scale, we can equip students with the skills they’ll need for the future workplace. Mike Perkins and Jasper Roe explain how


British University Vietnam, James Cook University Singapore
13 May 2024
Image: a robot and a human hand fit two jigsaw puzzle pieces together (credit: iStock/AndreyPopov).

In a recent Campus piece, Samuel Doherty and Steven Warburton proposed a four-step process for redesigning assessments in response to the rapid adoption of generative AI (GenAI) tools in higher education. Redesigning assessments in an era of GenAI is essential, and it is encouraging to see the wider recognition of GenAI’s impact on education being taken seriously.

That said, we disagree that AI-enabled tasks should be treated as less important in achieving the learning outcomes of any given subject or programme. Instead, educators should recognise that the future of work in all professional fields is likely to require at least some engagement with AI. A flexible framework that regulates how GenAI tools can be used is a more appropriate approach. Maintaining academic integrity is a concern, but it is only one element that educators must address in the GenAI era.

Trying to maintain integrity through attempts to ban the use of AI tools, enforced through AI text detection software, is not a viable option because of the limitations inherent in this software. Alternative measures to maintain assessment security, by moving assessments to secure testing platforms or online proctoring, may not provide the desired level of security, especially considering the potential threats to higher education posed by deepfake technology, as highlighted in our recent preprint.

Rethinking the role of AI in assessments: the AI assessment scale

The AI assessment scale (AIAS), recently published in the Journal of University Teaching and Learning Practice, is a five-point scale that provides a structured approach to incorporating AI into assessments, ranging from “no AI” to “full AI”. This framework was developed from the work of Leon Furze and helps academic staff clarify exactly how students can use GenAI tools in their work, while maintaining academic integrity. Each level demonstrates an increasing level of allowed usage of GenAI in assessments, providing clear guidelines on how students can incorporate AI tools into their work in an ethical manner.

1. No AI

The assessment is completed entirely without AI assistance. This level ensures that students rely solely on their knowledge, understanding and skills.

AI must not be used at any point during the assessment.

2. AI-assisted idea generation and structuring

AI can be used in the assessment for brainstorming, creating structures and generating ideas for improving work.

No AI content is allowed in the final submission.

3. AI-assisted editing

AI can be used to make improvements to the clarity or quality of student-created work to improve the final output, but no new content can be created using AI.

AI can be used, but your original work with no AI content must be provided in an appendix.

4. AI task completion, human evaluation

AI is used to complete certain elements of the task, with students providing discussion or commentary on the AI-generated content. This level requires critical engagement with AI-generated content and evaluation of its output.

You will use AI to complete specified tasks in your assessment. Any AI-created content must be cited.

5. Full AI

AI should be used as a “co-pilot” to meet the requirements of the assessment, allowing a collaborative approach that enhances creativity.

You may use AI throughout your assessment to support your own work and do not have to specify which content is AI-generated.

Embracing AI for enhanced learning outcomes

Rather than limiting AI integration to assessments of lesser importance, the AIAS encourages broader consideration of AI in every assessment. Although not every assessment will be suitable for a deeper integration of AI, educators should evaluate how AI can be employed by students in an ethical manner to support their learning and select an appropriate point on the scale.

Because of concerns regarding AI text detectors, we recommend that Levels 1 and 2 be used sparingly, either in no-stakes assessment or as part of supervised, in-person examinations. From Level 3 onwards, students are encouraged to use GenAI tools as part of their writing process, with an increasing role of AI as the scale progresses.

The implementation of the AIAS at British University Vietnam (BUV) has shown promising results. Following the introduction of the AIAS, we experienced a major shift in how academic staff and students perceived the use of AI in assessments. The use of GenAI tools for assessment purposes moved away from a narrow perspective of AI as plagiarism, and instead became more nuanced, with a focus on assessing how well students have demonstrated the required learning outcomes. 

We have also seen hugely creative uses of multimodal AI in both teaching and assessments, with educators actively encouraging its use. Students are better equipped to seek guidance from their module leaders on how it can be used in an ethical manner to support their learning. We saw pass rates increase by a third and a small but notable increase in the grades achieved by students.

These findings suggest that the AIAS framework not only promotes academic integrity, but can also support student learning and success – particularly for English as an additional language (EAL) students, who comprise the majority of BUV’s student body. 

Embracing a comprehensive approach to AI integration

To effectively integrate AI into assessments, higher education institutions should adopt a comprehensive approach, going beyond an attempt to “AI-proof” or redesign individual tasks. The AI assessment scale provides a solid foundation for this process, helping educators to consider how AI can be used responsibly and ethically across a range of assessments.

As with all new technologies, we need to enable experimentation and critical examination. We must allow students to gain first-hand experience of what these tools can and cannot do, as well as their limitations and potential effects on society. We believe that attempting to restrict students from using AI technology may miss a valuable opportunity to challenge them to do more. The goal should not be to ask students to complete the same tasks with AI. Rather, it should be to assess the same learning outcomes while encouraging them to demonstrate their knowledge and skills in innovative ways, mirroring how they will operate in their future careers. 

By recognising the potential of GenAI to enhance student learning and prepare them for future work environments, institutions can encourage innovation, creativity and digital literacy, while upholding the principles of academic integrity.

Mike Perkins is the head of the Centre for Research and Innovation at British University Vietnam, and Jasper Roe is the head of the department of English and lecturer in social sciences at James Cook University Singapore.

